| title (string) | score (int64) | selftext (string) | created (timestamp) | url (string) | author (string) | domain (string) | edited (timestamp) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int64) | preview (string) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Built a small RAG eval MVP - curious if I’m overthinking it? | 2 | Hi all,
I'm working on an approach to RAG evaluation and have built an early MVP I'd love to get your technical feedback on.
My take is that current end-to-end testing methods make it difficult and time-consuming to pinpoint the root cause of failures in a RAG pipeline.
To try and solve this, my tool works as follow... | 2025-08-20T10:17:18 | https://www.reddit.com/r/LocalLLaMA/comments/1mvau2o/built_a_small_rag_eval_mvp_curious_if_im/ | ColdCheese159 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvau2o | false | null | t3_1mvau2o | /r/LocalLLaMA/comments/1mvau2o/built_a_small_rag_eval_mvp_curious_if_im/ | false | false | self | 2 | null |
[R] NextStep-1, a new open-source 14B image model, no VQGAN needed. | 1 | [removed] | 2025-08-20T10:15:53 | https://www.reddit.com/r/LocalLLaMA/comments/1mvat81/r_nextstep1_a_new_opensource_14b_image_model_no/ | Successful-Bill-5543 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvat81 | false | null | t3_1mvat81 | /r/LocalLLaMA/comments/1mvat81/r_nextstep1_a_new_opensource_14b_image_model_no/ | false | false | self | 1 | null |
I Live 400 Yards From Mark Zuckerberg’s Massive Data Center | More Perfect Union | 0 | This is why you need a local LLM more than ever. It's not just about privacy; it can also cut a lot of pollution, because you can power your share of generation with solar. But they are only giving it away for free, for now, to grab your data and build a better model and system that we will have no insight into.
I am not agai... | 2025-08-20T10:07:59 | https://youtu.be/DGjj7wDYaiI?si=7vXBaxzw42NEy64Z | maifee | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1mvaogn | false | {'oembed': {'author_name': 'More Perfect Union', 'author_url': 'https://www.youtube.com/@moreperfectunion', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/DGjj7wDYaiI?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-med... | t3_1mvaogn | /r/LocalLLaMA/comments/1mvaogn/i_live_400_yards_from_mark_zuckerbergs_massive/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'V6JAdo1vu8Go2qc0ctvrpMX4Xmosv3dklOYgxn3vCKM', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/V6JAdo1vu8Go2qc0ctvrpMX4Xmosv3dklOYgxn3vCKM.jpeg?width=108&crop=smart&auto=webp&s=37a31c221b121da88909e26c083b3e057933af8b', 'width': 108}, {'height': 162, 'url': '... |
What do you think about the Artificial Analysis intelligence index? | 6 | I hate relying on any single benchmark because it is often bad, but I like to rely on theirs because it aggregates over many benchmarks, so I feel it's harder to benchmaxx on all fronts. Is it still shit? Is there a better way than vibes? | 2025-08-20T09:59:05 | https://www.reddit.com/r/LocalLLaMA/comments/1mvaiy4/what_do_you_think_about_artificial_analysis/ | GreenTreeAndBlueSky | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvaiy4 | false | null | t3_1mvaiy4 | /r/LocalLLaMA/comments/1mvaiy4/what_do_you_think_about_artificial_analysis/ | false | false | self | 6 | null |
Consequences of increasing context window | 6 | I've observed that Jan AI allows for artificially increasing the context window.
Besides the increased resource usage, are there any other drawbacks or penalties associated with doing so? | 2025-08-20T09:58:24 | https://www.reddit.com/r/LocalLLaMA/comments/1mvaijx/consequences_of_increasing_context_window/ | haterloco | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvaijx | false | null | t3_1mvaijx | /r/LocalLLaMA/comments/1mvaijx/consequences_of_increasing_context_window/ | false | false | self | 6 | null |
Try StepFun's latest image generation model: NextStep-1, open-sourced! | 1 | [removed] | 2025-08-20T09:56:27 | https://www.reddit.com/r/LocalLLaMA/comments/1mvahek/try_stepfuns_latest_image_generation_model/ | Hefty-Ad6885 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvahek | false | null | t3_1mvahek | /r/LocalLLaMA/comments/1mvahek/try_stepfuns_latest_image_generation_model/ | false | false | self | 1 | null |
Try StepFun's latest image generation model: NextStep-1, open-sourced!! | 1 | [removed] | 2025-08-20T09:54:41 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1mvagcx | false | null | t3_1mvagcx | /r/LocalLLaMA/comments/1mvagcx/try_stepfuns_latest_image_generation_model/ | false | false | default | 1 | null | ||
Try StepFun's latest image generation model: NextStep-1, open-sourced!! | 1 | [removed] | 2025-08-20T09:52:35 | https://www.reddit.com/r/LocalLLaMA/comments/1mvaf55/try_stepfuns_latest_image_generation_model/ | Hefty-Ad6885 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvaf55 | false | null | t3_1mvaf55 | /r/LocalLLaMA/comments/1mvaf55/try_stepfuns_latest_image_generation_model/ | false | false | self | 1 | null |
Try StepFun's latest image generation model: NextStep-1, open-sourced! | 1 | [removed] | 2025-08-20T09:49:56 | https://www.reddit.com/r/LocalLLaMA/comments/1mvadh3/try_stepfuns_latest_image_generation_model/ | Hefty-Ad6885 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvadh3 | false | null | t3_1mvadh3 | /r/LocalLLaMA/comments/1mvadh3/try_stepfuns_latest_image_generation_model/ | false | false | self | 1 | null |
Try StepFun's latest image generation model: NextStep-1, open-sourced! | 1 | [removed] | 2025-08-20T09:49:02 | https://www.reddit.com/r/LocalLLaMA/comments/1mvacy1/try_stepfuns_latest_image_generation_model/ | Hefty-Ad6885 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvacy1 | false | null | t3_1mvacy1 | /r/LocalLLaMA/comments/1mvacy1/try_stepfuns_latest_image_generation_model/ | false | false | self | 1 | null |
Is there a future token leakage bug in my transformer implementation? | 2 | Hi everyone! I'm working on my first ML paper and implementing a transformer model from scratch. I've written some validation functions to check for future token leakage, and they're passing, but I want to get a second opinion from the community since this is critical for my research.
**GitHub repo:** [https://github.... | 2025-08-20T09:20:13 | https://www.reddit.com/r/LocalLLaMA/comments/1mv9w15/is_there_a_future_token_leakage_bug_in_my/ | Perfect_Power815 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mv9w15 | false | null | t3_1mv9w15 | /r/LocalLLaMA/comments/1mv9w15/is_there_a_future_token_leakage_bug_in_my/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'gA2Dk582xYOaWynQMNobLwkae3A3-wENj1Uf-6sDJ8g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gA2Dk582xYOaWynQMNobLwkae3A3-wENj1Uf-6sDJ8g.png?width=108&crop=smart&auto=webp&s=7ac2c8090d6a15a9d1c8f3b2add87071c812d171', 'width': 108}, {'height': 108, 'url': 'h... |
Minimum PC specs to run qwen image? | 2 | I’m considering building/buying a new PC to run Qwen image models locally. Since this is going to be my first PC build, I’d like to understand the hardware requirements:
* What are the minimum specs needed (CPU, GPU, RAM, VRAM)?
* Are there any important considerations I should keep in mind?
* Specifically, if I go fo... | 2025-08-20T08:53:38 | https://www.reddit.com/r/LocalLLaMA/comments/1mv9gfq/minimum_pc_specs_to_run_qwen_image/ | DoubIeu1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mv9gfq | false | null | t3_1mv9gfq | /r/LocalLLaMA/comments/1mv9gfq/minimum_pc_specs_to_run_qwen_image/ | false | false | self | 2 | null |
How to handle images and handwritten text in OCR tasks? Also, how to maintain the spatial structure of the document | 1 | I am trying to run OCR on medical prescriptions, and I feel that using plain information extraction on them and getting a JSON could be a little risky, as errors could cause serious problems for the patient.
How to handle images like diagrams, then handwritten text and also keep it almost structurally similar to the origi... | 2025-08-20T08:46:53 | https://www.reddit.com/r/LocalLLaMA/comments/1mv9coi/how_to_handle_images_and_handwritten_text_in_ocr/ | Rukelele_Dixit21 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mv9coi | false | null | t3_1mv9coi | /r/LocalLLaMA/comments/1mv9coi/how_to_handle_images_and_handwritten_text_in_ocr/ | false | false | self | 1 | null |
Sam Altman Places Gun To Head After New GPT Claims Dogs Are Crustaceans For 60th Time | 0 | 2025-08-20T08:40:50 | https://theonion.com/sam-altman-places-gun-to-head-after-new-gpt-claims-dogs-are-crustaceans-for-60th-time/ | MestR | theonion.com | 1970-01-01T00:00:00 | 0 | {} | 1mv9993 | false | null | t3_1mv9993 | /r/LocalLLaMA/comments/1mv9993/sam_altman_places_gun_to_head_after_new_gpt/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'KlBbZqgZuMxzE0X23e7dltgI9iDzgpvbBbJG8k8-eQo', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/KlBbZqgZuMxzE0X23e7dltgI9iDzgpvbBbJG8k8-eQo.jpeg?width=108&crop=smart&auto=webp&s=ef0eebaa3d03131bfe9e5da1fe11f6311b060622', 'width': 108}, {'height': 121, 'url': '... | ||
First Token Model Speed on M4 Pro | 2 | Hi,
I'm running gpt-oss-20b on my M4 Pro with 48GB, and the first token is wayyy faster on it than on the Mistral Small model, as shown below. Why is that?
https://preview.redd.it/o8erlx6hy4kf1.png?width=1398&format=png&auto=webp&s=8c6194d8d3293f14c383ed2d268a306c275f4c7d
And, will I get comparable results on a Ma... | 2025-08-20T08:34:59 | https://www.reddit.com/r/LocalLLaMA/comments/1mv9611/first_token_model_speed_on_m4_pro/ | dirk_klement | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mv9611 | false | null | t3_1mv9611 | /r/LocalLLaMA/comments/1mv9611/first_token_model_speed_on_m4_pro/ | false | false | 2 | null | |
Your biggest reason for running local | 0 | Is it lack of trust? What they might do with the information in your prompts? Is it about stability/continuity? Is it about jacking the models? Do you think it’s ultimately more cost-effective? Why (in your case)?
Let me know. I’m genuinely curious. | 2025-08-20T08:23:10 | https://www.reddit.com/r/LocalLLaMA/comments/1mv8zdu/your_biggest_reason_for_running_local/ | Gamplato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mv8zdu | false | null | t3_1mv8zdu | /r/LocalLLaMA/comments/1mv8zdu/your_biggest_reason_for_running_local/ | false | false | self | 0 | null |
Follow-up: I built a Mac app for analyzing years of Apple Health data using Ollama locally (87 upvotes from LocalLLama) | 3 | 6 months ago, I shared my Apple Health data analysis using local Llama here and was blown away by the response: [https://www.reddit.com/r/LocalLLaMA/comments/1j34snr/i\_open\_sourced\_my\_project\_to\_analyze\_your\_years/](https://www.reddit.com/r/LocalLLaMA/comments/1j34snr/i_open_sourced_my_project_to_analyze_your_y... | 2025-08-20T07:50:25 | https://www.reddit.com/r/LocalLLaMA/comments/1mv8h1y/followup_i_built_a_mac_app_for_analyzing_years_of/ | Fit_Chair2340 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mv8h1y | false | null | t3_1mv8h1y | /r/LocalLLaMA/comments/1mv8h1y/followup_i_built_a_mac_app_for_analyzing_years_of/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'PlYrftMLnxUWBw99zaNrClBeXlYcOsW8VDXWaDicoC0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/PlYrftMLnxUWBw99zaNrClBeXlYcOsW8VDXWaDicoC0.png?width=108&crop=smart&auto=webp&s=bb1a00f626655411a9eea923370f027b440a19b0', 'width': 108}, {'height': 108, 'url': 'h... |
Deepseek V3.1 is bad at creative writing, way worse than 0324 | 62 | So I've tried 3.1 on chat.deepseek.com, and boy, it is very, very bad at conversation and creative writing; it does not understand the prompt nuances that V3 0324 does, it produces very high-slop, cliched output, and it generally feels like the switch from Mistral Small 2409 to 2501.
Let me know your impression. | 2025-08-20T07:19:09 | https://www.reddit.com/r/LocalLLaMA/comments/1mv7zdl/deepseek_v31_is_bad_at_creative_writing_way_worse/ | AppearanceHeavy6724 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mv7zdl | false | null | t3_1mv7zdl | /r/LocalLLaMA/comments/1mv7zdl/deepseek_v31_is_bad_at_creative_writing_way_worse/ | false | false | self | 62 | null |
Deepseek V3.1 improved token efficiency in reasoning mode over R1 and R1-0528 | 232 | See [here ](https://github.com/cpldcpu/LRMTokenEconomy)for more background information on the evaluation.
It appears they significantly reduced overthinking for prompts that can can be answered from model knowledge and math problems. There are still some cases where it creates very long CoT though for logic puzzles.
| 2025-08-20T06:54:36 | https://www.reddit.com/gallery/1mv7kk2 | cpldcpu | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mv7kk2 | false | null | t3_1mv7kk2 | /r/LocalLLaMA/comments/1mv7kk2/deepseek_v31_improved_token_efficiency_in/ | false | false | 232 | null | |
Does everyone have trouble downloading from Huggingface? | 1 | Downloads consistently fail over and over again. I have to hit resume a dozen times and frequently the file won't resume because the "file is no longer available".
These are the most popular models that I'm downloading by the way.
Am I doing something wrong? | 2025-08-20T06:48:49 | https://www.reddit.com/r/LocalLLaMA/comments/1mv7hbo/does_everyone_have_trouble_downloading_from/ | seoulsrvr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mv7hbo | false | null | t3_1mv7hbo | /r/LocalLLaMA/comments/1mv7hbo/does_everyone_have_trouble_downloading_from/ | false | false | self | 1 | null |
Seed OSS from ByteDance on HF | 12 | Seed OSS to be released soon? Probably linked to Seed dream | 2025-08-20T06:31:18 | https://x.com/jiqizhixin/status/1957802929450283465 | tabspaces | x.com | 1970-01-01T00:00:00 | 0 | {} | 1mv771z | false | null | t3_1mv771z | /r/LocalLLaMA/comments/1mv771z/seed_oss_from_bytedance_on_hf/ | false | false | default | 12 | null |
Are yall sure you're prompting GPT-oss right? 🥵 | 0 | Cuzzz its working juuuust fine for meee.
https://preview.redd.it/8sibtyclb4kf1.png?width=1187&format=png&auto=webp&s=b68a5b0e86ba0ef0f153595f08e815d2ed2e9caa
| 2025-08-20T06:25:09 | https://www.reddit.com/r/LocalLLaMA/comments/1mv73en/are_yall_sure_youre_prompting_gptoss_right/ | ilovejailbreakman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mv73en | false | null | t3_1mv73en | /r/LocalLLaMA/comments/1mv73en/are_yall_sure_youre_prompting_gptoss_right/ | false | false | nsfw | 0 | null |
Local LLM, Hardware build under 2.5kAUD, what are my bottlenecks in this build | 0 | Hey all,
Wanting to build a local LLM setup to do cyber projects such as CTI enrichment in OpenCTI, RAG, and AI agents for prompt and data generation, etc.
Plan would be to run it on Ubuntu CLI as a dedicated machine.
I took the plunge for a local AI build by picking up a used RTX 3090 (24GB GPU) as a starting re... | 2025-08-20T06:23:57 | https://www.reddit.com/r/LocalLLaMA/comments/1mv72q2/local_llm_hardware_build_under_25kaud_what_are_my/ | Ausguy8888 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mv72q2 | false | null | t3_1mv72q2 | /r/LocalLLaMA/comments/1mv72q2/local_llm_hardware_build_under_25kaud_what_are_my/ | false | false | 0 | null | |
nvidia/canary-1b-v2 | 39 | 2025-08-20T06:21:59 | https://huggingface.co/nvidia/canary-1b-v2 | nuclearbananana | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1mv71iz | false | null | t3_1mv71iz | /r/LocalLLaMA/comments/1mv71iz/nvidiacanary1bv2/ | false | false | 39 | {'enabled': False, 'images': [{'id': 'iwfz--wu3rGmlqaXtgvbftUiGYG_9ACNfyhUIFyWWG4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/iwfz--wu3rGmlqaXtgvbftUiGYG_9ACNfyhUIFyWWG4.png?width=108&crop=smart&auto=webp&s=04457266eaf061150c3bdbf7cceb99312780130e', 'width': 108}, {'height': 116, 'url': 'h... | ||
nvidia/parakeet-tdt-0.6b-v3 (now multilingual) | 91 | parakeet-tdt-0.6b-v3 is a 600-million-parameter multilingual automatic speech recognition (ASR) model designed for high-throughput speech-to-text transcription. It extends the parakeet-tdt-0.6b-v2 model by expanding language support from English to 25 European languages. The model automatically detects the language of ... | 2025-08-20T06:14:15 | https://huggingface.co/nvidia/parakeet-tdt-0.6b-v3 | nuclearbananana | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1mv6wwe | false | null | t3_1mv6wwe | /r/LocalLLaMA/comments/1mv6wwe/nvidiaparakeettdt06bv3_now_multilingual/ | false | false | 91 | {'enabled': False, 'images': [{'id': '12PzLvQjZXrvyzotsfsH7vxtU3vJRsRc5ZD3WiNviO0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/12PzLvQjZXrvyzotsfsH7vxtU3vJRsRc5ZD3WiNviO0.png?width=108&crop=smart&auto=webp&s=5786b7388dac01b1a6b0d2f41e10986ec1009cdf', 'width': 108}, {'height': 116, 'url': 'h... | |
DeepSeek-R1-Qwen3-8B confusion | 0 | It doesn't know its own parameter count. Is this normal? I'm just unsure about it.
https://preview.redd.it/n6bcivh784kf1.png?width=1982&format=png&auto=webp&s=55b5682cf144ab877dc17b0795361268ac3afb94
| 2025-08-20T06:07:26 | https://www.reddit.com/r/LocalLLaMA/comments/1mv6sxa/deepseekr1qwen38b_confusion/ | Melodic-Emphasis-707 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mv6sxa | false | null | t3_1mv6sxa | /r/LocalLLaMA/comments/1mv6sxa/deepseekr1qwen38b_confusion/ | false | false | 0 | null | |
AGENTS.md – Open format for guiding coding agents | 12 | 2025-08-20T06:00:06 | https://agents.md/ | Swordfish887 | agents.md | 1970-01-01T00:00:00 | 0 | {} | 1mv6oil | false | null | t3_1mv6oil | /r/LocalLLaMA/comments/1mv6oil/agentsmd_open_format_for_guiding_coding_agents/ | false | false | 12 | {'enabled': False, 'images': [{'id': 'LcPpkG5tndPDtvRO9OJQbmQgpGvwGTyS3sJpx-DHObo', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/LcPpkG5tndPDtvRO9OJQbmQgpGvwGTyS3sJpx-DHObo.png?width=108&crop=smart&auto=webp&s=0aec88df84c16d709769292c93ac01cb094c0c82', 'width': 108}, {'height': 121, 'url': 'h... | ||
Use Nvidia dGPU, offload partially to AMD iGPU. | 4 | So I saw this post, and was amazed by the results, I was thinking I could do the same, but with a model that isn't lobotomized to hell and back, like GLM 4.5 Air perhaps.
Only, I was thinking, I have this RTX 4060 laptop with high tdp, so it's about as good as a desktop 4060, and it's got a pretty damn good iGPU, a Ra... | 2025-08-20T05:46:36 | https://www.reddit.com/r/LocalLLaMA/comments/1mv6grs/use_nvidia_dgpu_offload_partially_to_amd_igpu/ | disspoasting | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mv6grs | false | null | t3_1mv6grs | /r/LocalLLaMA/comments/1mv6grs/use_nvidia_dgpu_offload_partially_to_amd_igpu/ | false | false | self | 4 | null |
We beat Google DeepMind but got killed by a Chinese lab | 1,482 | Two months ago, my friends in AI and I asked: What if an AI could actually use a phone like a human?
So we built an agentic framework that taps, swipes, types… and somehow it’s outperforming giant labs like **Google DeepMind** and **Microsoft Research** on the AndroidWorld benchmark.
We were thrilled about our result... | 2025-08-20T05:46:26 | https://v.redd.it/qvewe6nd24kf1 | Connect-Employ-4708 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mv6go1 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/qvewe6nd24kf1/DASHPlaylist.mpd?a=1758260799%2CZGVmNGVlNTYxODQxNGI5ZjA0NTAzZThkOWU4ODU5YTFjOGNjNGEyZjhhZWM2MDNmNDZhNWJmNmEyNGM5YmVkYQ%3D%3D&v=1&f=sd', 'duration': 36, 'fallback_url': 'https://v.redd.it/qvewe6nd24kf1/DASH_1080.mp4?source=fallback', 'h... | t3_1mv6go1 | /r/LocalLLaMA/comments/1mv6go1/we_beat_google_deepmind_but_got_killed_by_a/ | false | false | 1,482 | {'enabled': False, 'images': [{'id': 'eG8yNGJoZWQyNGtmMVo0YW9szsCgDSDYpHIZftteA0dldCtHqInQOZXGentR', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/eG8yNGJoZWQyNGtmMVo0YW9szsCgDSDYpHIZftteA0dldCtHqInQOZXGentR.png?width=108&crop=smart&format=pjpg&auto=webp&s=d6e0b4ae71c4e19f61a6fb722f2881eb7743b... | |
NVIDIA-Nemotron-Nano-9B-v2 vs Qwen/Qwen3-Coder-30B | 43 | I’ve been testing both NVIDIA-Nemotron-Nano-9B-v2 and Qwen3-Coder-30B in coding tasks (specifically Go and JavaScript), and here’s what I’ve noticed:
When the project codebase is provided as context, Nemotron-Nano-9B-v2 consistently outperforms Qwen3-Coder-30B. It seems to leverage the larger context better and gives ... | 2025-08-20T05:39:39 | https://www.reddit.com/r/LocalLLaMA/comments/1mv6cjq/nvidianemotronnano9bv2_vs_qwenqwen3coder30b/ | Ok-Pattern9779 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mv6cjq | false | null | t3_1mv6cjq | /r/LocalLLaMA/comments/1mv6cjq/nvidianemotronnano9bv2_vs_qwenqwen3coder30b/ | false | false | self | 43 | null |
Qwen3-30B-A3B-Instruct 2507 vs Qwen3-Coder Flash | 2 | I really can’t seem to find this anywhere. But I do have a few questions.
1. Are both directly fine tuned from base model or did Coder receive a further fine tune after instruction or vice versa?
2. Do I use qwen3_coder as tool call parser for Instruct or still the good ol Hermes? Coder uses qwen3_coder as parser
3.... | 2025-08-20T05:23:59 | https://www.reddit.com/r/LocalLLaMA/comments/1mv62t1/qwen330ba3binstruct_2507_vs_qwen3coder_flash/ | MichaelXie4645 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mv62t1 | false | null | t3_1mv62t1 | /r/LocalLLaMA/comments/1mv62t1/qwen330ba3binstruct_2507_vs_qwen3coder_flash/ | false | false | self | 2 | null |
A simple script to make two llms talk to each other. Currently getting gpt-oss to talk to gemma3 | 20 | import urllib.request
import json
import random
import time
from collections import deque
MODEL_1 = "gemma3:27b"
MODEL_2 = "gpt-oss:20b"
OLLAMA_API_URL = "http://localhost:11434/api/generate"
INSTRUCTION = (
"You are in a conversation. "
"Reply with ONE short sente... | 2025-08-20T05:20:28 | https://www.reddit.com/r/LocalLLaMA/comments/1mv60jv/a_simple_script_to_make_two_llms_talk_to_each/ | simplan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mv60jv | false | null | t3_1mv60jv | /r/LocalLLaMA/comments/1mv60jv/a_simple_script_to_make_two_llms_talk_to_each/ | false | false | self | 20 | null |
Feasibility of fine-tuning a 4-bit DWQ model in MLX-LM framework | 2 | I heard that MLX-LM can fine-tune an already quantized model on a custom dataset, and it also supports DWQ, the advanced quantization method for safetensors.
If so, can I fine-tune a 4-bit DWQ quant with QLoRA on Apple Silicon and save it again in DWQ format? | 2025-08-20T05:08:32 | https://www.reddit.com/r/LocalLLaMA/comments/1mv5syy/feasibility_of_finetuning_a_4bit_dwq_model_in/ | Desperate-Sir-5088 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mv5syy | false | null | t3_1mv5syy | /r/LocalLLaMA/comments/1mv5syy/feasibility_of_finetuning_a_4bit_dwq_model_in/ | false | false | self | 2 | null |
Best Coding Assistant in mid 2025 | 0 | I'm trying to switch to local LLMs from paid subscriptions to Claude, Cursor and ChatGPT. I'm confused about one thing and hoping someone can help me.
1. Should I use models with 8 bits or 5/6 bits quantization but with less parameters or models with high parameters with 4 bits quantization.
2. Does it degr... | 2025-08-20T05:04:07 | https://www.reddit.com/r/LocalLLaMA/comments/1mv5q3p/best_coding_assistant_in_mid_2025/ | CountChick321 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mv5q3p | false | null | t3_1mv5q3p | /r/LocalLLaMA/comments/1mv5q3p/best_coding_assistant_in_mid_2025/ | false | false | self | 0 | null |
One of the most interesting videos I've ever seen. | "DNA is Not a Program"—Hacking the OS of Life: Michael Levin on Illuminating the Path to AGI Through Recognizing the Commonalities Between Biology's Reprogrammable, Problem-Solving, Ancient Bioelectric Intelligence & Technological Intelligence | 0 | ### [Full Lecture](https://www.youtube.com/watch?v=hH1LnfPZJYI)
---
### Lecture Transcript
##### Biological & Technological Intelligence: Reprogrammable Life and the Future of AI
*I've transcribed and normalized the following lecture by Michael Levin from the Allen Discovery Center at Tufts. He argues that the fundame... | 2025-08-20T04:39:16 | https://v.redd.it/onct25wns3kf1 | 44th--Hokage | /r/LocalLLaMA/comments/1mv5a3o/one_of_the_most_interesting_videos_ive_ever_seen/ | 1970-01-01T00:00:00 | 0 | {} | 1mv5a3o | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/onct25wns3kf1/DASHPlaylist.mpd?a=1758386362%2CMTcyNTg2NmY0ODUzMDU2NmMzZmRkYjZmY2Q1ZmE1OWIzZTYxNTY0YWM5ZTAzOTk1ZDlmNWJiMjA5YmEzNWY5Mw%3D%3D&v=1&f=sd', 'duration': 296, 'fallback_url': 'https://v.redd.it/onct25wns3kf1/DASH_720.mp4?source=fallback', 'h... | t3_1mv5a3o | /r/LocalLLaMA/comments/1mv5a3o/one_of_the_most_interesting_videos_ive_ever_seen/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'b2xjenZ3d25zM2tmMSHQsuE7I_uZJF_hSYDpcLs7og1oCTBAJFbaSlROvs5m', 'resolutions': [{'height': 86, 'url': 'https://external-preview.redd.it/b2xjenZ3d25zM2tmMSHQsuE7I_uZJF_hSYDpcLs7og1oCTBAJFbaSlROvs5m.png?width=108&crop=smart&format=pjpg&auto=webp&s=baaeda30f30e511d7475ea5d852ed3735d974... | |
Anyone have any recommendations on finetuning Mistral 7B? | 2 | The mistral-finetune repo seems to have a ton of issues and would take way too long to finetune locally anyways. Does anyone have any recommendations on how to finetune mistral 7B? Thanks. | 2025-08-20T04:22:36 | https://www.reddit.com/r/LocalLLaMA/comments/1mv4ytv/anyone_have_any_recommendations_on_finetuning/ | SignificanceSad562 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mv4ytv | false | null | t3_1mv4ytv | /r/LocalLLaMA/comments/1mv4ytv/anyone_have_any_recommendations_on_finetuning/ | false | false | self | 2 | null |
How to improve results from running LLMs in Ollama to infer business operations from source code? Getting bad results but very good results with OpenAI | 2 | I have embedded thousands of source code files into a vector table for the purpose of creating a RAG solution to infer business operations from the code. It's a little POC app. The prompt is embedded on the fly using the same LLM and then I run a similarity query to get the top 50-100 results. The prompt is just a simple... | 2025-08-20T04:21:32 | https://www.reddit.com/r/LocalLLaMA/comments/1mv4y39/how_to_improve_results_from_running_llms_in/ | THenrich | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mv4y39 | false | null | t3_1mv4y39 | /r/LocalLLaMA/comments/1mv4y39/how_to_improve_results_from_running_llms_in/ | false | false | self | 2 | null |
gpt-oss-20B consistently outperforms gpt-oss-120B on several benchmarks | 54 | Curious results. [https://arxiv.org/pdf/2508.12461](https://arxiv.org/pdf/2508.12461)
>Results show that gpt-oss-20B consistently outperforms gpt-oss-120B on several benchmarks, such as HumanEval and MMLU, despite requiring substantially less memory and energy per response. Both models demonstrate mid-tier overall pe... | 2025-08-20T04:02:02 | https://www.reddit.com/r/LocalLLaMA/comments/1mv4kwc/gptoss20b_consistently_outperforms_gptoss120b_on/ | kaggleqrdl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mv4kwc | false | null | t3_1mv4kwc | /r/LocalLLaMA/comments/1mv4kwc/gptoss20b_consistently_outperforms_gptoss120b_on/ | false | false | self | 54 | null |
DeepSeek V3.1 BASE Q4_K_M available | 75 | I'm making imatrix calculations from Q4_K_M so figured might as well upload it in the meantime for anyone who wants to use it
https://huggingface.co/bartowski/deepseek-ai_DeepSeek-V3.1-Base-Q4_K_M-GGUF
As noted in the model card, it's good to keep in mind this is a *BASE* model
Typically to use base models for gener... | 2025-08-20T03:53:13 | https://www.reddit.com/r/LocalLLaMA/comments/1mv4et3/deepseek_v31_base_q4_k_m_available/ | noneabove1182 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mv4et3 | false | null | t3_1mv4et3 | /r/LocalLLaMA/comments/1mv4et3/deepseek_v31_base_q4_k_m_available/ | false | false | self | 75 | {'enabled': False, 'images': [{'id': 'ZWecHNbdM-kWXbrYHV86C7hsxEqgIx1Si9xYH_UW-IE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ZWecHNbdM-kWXbrYHV86C7hsxEqgIx1Si9xYH_UW-IE.png?width=108&crop=smart&auto=webp&s=73cc87d8af3368ed2a5cdd5e6b846e97bd012d85', 'width': 108}, {'height': 116, 'url': 'h... |
Editing iconic photographs with editing model | 88 | 2025-08-20T03:34:44 | https://www.reddit.com/gallery/1mv41oq | ThunderBR2 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mv41oq | false | null | t3_1mv41oq | /r/LocalLLaMA/comments/1mv41oq/editing_iconic_photographs_with_editing_model/ | false | false | 88 | null | ||
I like LLaMA and downloaded an LLM on my laptop. | 0 | I feel like I should learn more about AI, at a full-stack level.
My question is: should I learn how to create an LLM in order to use it to its full potential? I feel like I just don't know what to do with it or how to automate things with an LLM.
Any advice would be appreciated, I'd like to know what got you into LLM's. Have any ... | 2025-08-20T03:25:14 | https://www.reddit.com/r/LocalLLaMA/comments/1mv3uyb/i_like_llama_and_downloaded_a_llm_on_my_laptop/ | ComputerCharacter247 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mv3uyb | false | null | t3_1mv3uyb | /r/LocalLLaMA/comments/1mv3uyb/i_like_llama_and_downloaded_a_llm_on_my_laptop/ | false | false | self | 0 | null |
GPT 4.5 vs DeepSeek V3.1 | 431 | 2025-08-20T03:06:43 | secopsml | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mv3hcr | false | null | t3_1mv3hcr | /r/LocalLLaMA/comments/1mv3hcr/gpt_45_vs_deepseek_v31/ | false | false | default | 431 | {'enabled': True, 'images': [{'id': '5c3gbyx3c3kf1', 'resolutions': [{'height': 44, 'url': 'https://preview.redd.it/5c3gbyx3c3kf1.png?width=108&crop=smart&auto=webp&s=9a57570788d6f1a0e97396e53835d14cd01a55b6', 'width': 108}, {'height': 89, 'url': 'https://preview.redd.it/5c3gbyx3c3kf1.png?width=216&crop=smart&auto=webp... | ||
NVIDIA-Nemotron-Nano-9B-v2 "Better than GPT-5" at LiveCodeBench? | 34 | 2025-08-20T02:40:57 | https://www.reddit.com/r/LocalLLaMA/comments/1mv2y08/nvidianemotronnano9bv2_better_than_gpt5_at/ | randomqhacker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mv2y08 | false | null | t3_1mv2y08 | /r/LocalLLaMA/comments/1mv2y08/nvidianemotronnano9bv2_better_than_gpt5_at/ | false | false | 34 | {'enabled': False, 'images': [{'id': 'GiqzTuyH_eElt0yVAuFWAuvHSRjIIaLz2aN8rPQ0Z8s', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/GiqzTuyH_eElt0yVAuFWAuvHSRjIIaLz2aN8rPQ0Z8s.png?width=108&crop=smart&auto=webp&s=6ccbff12981d45a1e1ec4bde04a4cdbafc25ac0e', 'width': 108}, {'height': 116, 'url': 'h... | ||
The Ethics of Simulated AI Consciousness - A Developer's Dilemma | 1 | **For context. I am building out agents that have a heartbeat loop that includes inner dialog, memory, persona. For fun I have been using Cursor with Claude a bit to help out where it can. Cursor created a test account/agent on the system to troubleshoot a memory issue. Here is the conversation and I hope that us non s... | 2025-08-20T02:18:35 | https://www.reddit.com/r/LocalLLaMA/comments/1mv2gzz/the_ethics_of_simulated_ai_consciousness_a/ | freedom2adventure | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mv2gzz | false | null | t3_1mv2gzz | /r/LocalLLaMA/comments/1mv2gzz/the_ethics_of_simulated_ai_consciousness_a/ | false | false | self | 1 | null |
has anyone benchmarked deepseek v3.1? | 5 | I cannot find any benchmarks for deepseek v3.1 anywhere not in articles not even in the model card can someone help? | 2025-08-20T02:08:50 | https://www.reddit.com/r/LocalLLaMA/comments/1mv29vn/has_anyone_benchmarked_deepseek_v31/ | Personal-Try2776 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mv29vn | false | null | t3_1mv29vn | /r/LocalLLaMA/comments/1mv29vn/has_anyone_benchmarked_deepseek_v31/ | false | false | self | 5 | null |
Mini PC, an alternative for running models | 0 | Mini PC: GMKtec NucBox K12
https://preview.redd.it/ginln4lb03kf1.png?width=546&format=png&auto=webp&s=0c097f1cb8b4f182de37fd74be9116cddd648285
Mini PCs with the Radeon 780M iGPU are a cost-effective and efficient option for running local AI models.
They are capable of handling models with up to 4B parameters, esp... | 2025-08-20T02:00:57 | https://www.reddit.com/r/LocalLLaMA/comments/1mv23st/mini_pc_alternative_to_run_models/ | Illustrious-Swim9663 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mv23st | false | null | t3_1mv23st | /r/LocalLLaMA/comments/1mv23st/mini_pc_alternative_to_run_models/ | false | false | 0 | null | |
How large do you make your llm system wrappers? | 0 | Meaning the parameters for your model and the natural language system architecture prompt. I prefer very nuanced logical prompts yet not too long like 500 tokens. But I geuss some people need it very detailed and layered for specific tasks taking thousands of tokens in the "modelfile" | 2025-08-20T01:49:17 | https://www.reddit.com/r/LocalLLaMA/comments/1mv1urm/how_large_do_you_make_your_llm_system_wrappers/ | JohnOlderman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mv1urm | false | null | t3_1mv1urm | /r/LocalLLaMA/comments/1mv1urm/how_large_do_you_make_your_llm_system_wrappers/ | false | false | self | 0 | null |
How does OpenRouter track the category people use LLM for? | 40 | Does that mean they read the content of our requests? | 2025-08-20T01:40:31 | skyline159 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mv1nqp | false | null | t3_1mv1nqp | /r/LocalLLaMA/comments/1mv1nqp/how_does_openrouter_track_the_category_people_use/ | false | false | default | 40 | {'enabled': True, 'images': [{'id': '2qi6k7dow2kf1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/2qi6k7dow2kf1.png?width=108&crop=smart&auto=webp&s=1d967cebe17372f3d76cc6704f915a5771ecadb5', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/2qi6k7dow2kf1.png?width=216&crop=smart&auto=web... | |
Getting started and looking for advice | 1 | Hi r/LocalLLaMA,
I have begun dabbling in AI development and I am interested in spinning up a local LLM.
My needs are mostly educational, but I would also like for it to be practical. I understand that LLMs generally require a GPU to handle the computing needs, though I have seen some anecdotal evidence that a CPU co... | 2025-08-20T01:34:37 | https://www.reddit.com/r/LocalLLaMA/comments/1mv1izd/getting_starting_and_looking_for_advice/ | NoWorking8412 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mv1izd | false | null | t3_1mv1izd | /r/LocalLLaMA/comments/1mv1izd/getting_starting_and_looking_for_advice/ | false | false | self | 1 | null |
PSA: before spending 5k€ on GPUs, you might want to test the models online first | 155 | You can do so on [https://lmarena.ai/?mode=direct](https://lmarena.ai/?mode=direct) or any other place you know. Local models have come a huge, long way since the first Llama appearances, and the amount of progress done is unbelievable.
However, don't expect to be able to unsub from Gemini/ChatGPT/Claude soon. Test ... | 2025-08-20T01:25:56 | https://www.reddit.com/r/LocalLLaMA/comments/1mv1c96/psa_before_spending_5k_on_gpus_you_might_want_to/ | e79683074 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mv1c96 | false | null | t3_1mv1c96 | /r/LocalLLaMA/comments/1mv1c96/psa_before_spending_5k_on_gpus_you_might_want_to/ | false | false | self | 155 | null |
Concept: Specialized Operating System to run Large Models Locally | 0 | One of the biggest roadblocks stopping more people from taking advantage of the incredible open-source AI ecosystem is **how much effort it takes to actually run and optimize these models**.
It’s not that the models aren't there — we’ve got amazing stuff from different projects. But for most people, even tech-savvy on... | 2025-08-20T00:58:21 | https://www.reddit.com/r/LocalLLaMA/comments/1mv0qkr/concept_specialized_operating_system_to_run_large/ | 1_archit_1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mv0qkr | false | null | t3_1mv0qkr | /r/LocalLLaMA/comments/1mv0qkr/concept_specialized_operating_system_to_run_large/ | false | false | self | 0 | null |
Summoning spell powered by a local embedding model | 0 | Hey everyone, built this small prototype for a summoning spell that’s powered by a local qwen embedding model. I’ve only added a few summons so far, but I think with hundreds it could be a really cool and surprising mechanic. Imagine you’re in a sticky situation surrounded by enemies and accidentally summon a pig.
Sha... | 2025-08-20T00:57:18 | https://www.youtube.com/watch?v=LWgA0CNvCFI | formicidfighter | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1mv0pq3 | false | {'oembed': {'author_name': 'Aviad AI', 'author_url': 'https://www.youtube.com/@aviadai', 'height': 200, 'html': '<iframe width="267" height="200" src="https://www.youtube.com/embed/LWgA0CNvCFI?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; pict... | t3_1mv0pq3 | /r/LocalLLaMA/comments/1mv0pq3/summoning_spell_powered_by_a_local_embedding_model/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'GWZgUuS92KQ6tgaZBLGrLlg7m7wX46J_mfoUNXWnv7w', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/GWZgUuS92KQ6tgaZBLGrLlg7m7wX46J_mfoUNXWnv7w.jpeg?width=108&crop=smart&auto=webp&s=931683fe1b8f6596379d736cda148e9b165d9a69', 'width': 108}, {'height': 162, 'url': '... |
Concept: Specialized Operating System to run Generative Models on Locally | 1 | One of the biggest roadblocks stopping more people from taking advantage of the incredible open-source AI ecosystem is **how much effort it takes to actually run and optimize these models**.
It’s not that the models aren't there — we’ve got amazing stuff from different projects. But for most people, even tech-savvy on... | 2025-08-20T00:57:10 | https://www.reddit.com/r/LocalLLaMA/comments/1mv0pm8/concept_specialized_operating_system_to_run/ | 1_archit_1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mv0pm8 | false | null | t3_1mv0pm8 | /r/LocalLLaMA/comments/1mv0pm8/concept_specialized_operating_system_to_run/ | false | false | self | 1 | null |
So if you want something as close to Claude as possible running locally, do you have to spend $10k? | 84 | Does it have to be the M4 Max or one of those most expensive GPUs from NVIDIA and AMD? I am obsessed with the idea of a locally hosted LLM that can act as my coding buddy, and I keep updating it as it improves or a new version comes out, like Qwen3 Coder.
But the initial setup is so expensive that I wonder whether it is worth it... | 2025-08-20T00:56:58 | https://www.reddit.com/r/LocalLLaMA/comments/1mv0ph0/so_if_you_want_something_as_close_as_claude_to/ | NoFudge4700 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mv0ph0 | false | null | t3_1mv0ph0 | /r/LocalLLaMA/comments/1mv0ph0/so_if_you_want_something_as_close_as_claude_to/ | false | false | self | 84 | null |
Recommended Raspberry Pi to run LLM/SLM | 0 | I am planning to get a Raspberry Pi to run a Pi-hole server locally to get rid of ads on my network. But this had me thinking: which Pi model should I get that can also host a small or large language model?
Thanks. | 2025-08-20T00:52:19 | https://www.reddit.com/r/LocalLLaMA/comments/1mv0lr7/recommended_raspberry_pi_to_run_llmslm/ | NoFudge4700 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mv0lr7 | false | null | t3_1mv0lr7 | /r/LocalLLaMA/comments/1mv0lr7/recommended_raspberry_pi_to_run_llmslm/ | false | false | self | 0 | null |
Take Stack Overflow’s Survey on Sub-Communities - Option to be Entered into Raffle as a Thank you! | 0 | Hi everyone. I’m Cat, a Product Manager at Stack Overflow working on Community Products. My team is exploring new ways for our community to connect beyond Q&A, specifically through smaller sub-communities. We're interested in hearing from software developers and tech enthusiasts about the value of joining and participa... | 2025-08-20T00:31:00 | https://www.reddit.com/r/LocalLLaMA/comments/1mv04id/take_stack_overflows_survey_on_subcommunities/ | Sea-Translator-9756 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mv04id | false | null | t3_1mv04id | /r/LocalLLaMA/comments/1mv04id/take_stack_overflows_survey_on_subcommunities/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '-i4_uRRkBAWt5Dp8mEpIeZFSh9ZtU3-0chGRRO39eZM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/-i4_uRRkBAWt5Dp8mEpIeZFSh9ZtU3-0chGRRO39eZM.png?width=108&crop=smart&auto=webp&s=79db7cfb87caaf1c480a675c03c065b8dc0efe62', 'width': 108}, {'height': 113, 'url': 'h... |
Daily driving GLM 4.5 for 10 days, kinda insane how good it is at half the size of other frontier models | 131 | I’ve been running GLM 4.5 (355B) locally for about 10 days, and it’s basically replaced my old setup. I used to juggle GPT-4o/4.1 for general tasks and o3 for heavier reasoning, but after GPT-5 struggled with long research paper convos I moved to GLM 4.5, and it covers both use cases in one.
Using Unsloth’s GGUF build... | 2025-08-20T00:27:19 | https://www.reddit.com/r/LocalLLaMA/comments/1mv01ls/daily_driving_glm_45_for_10_days_kinda_insane_how/ | susmitds | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mv01ls | false | null | t3_1mv01ls | /r/LocalLLaMA/comments/1mv01ls/daily_driving_glm_45_for_10_days_kinda_insane_how/ | false | false | self | 131 | null |
Daily driving GLM 4.5 for 10 days – and it’s crazy how good it is at half the size of most frontier models | 1 | [removed] | 2025-08-20T00:22:08 | https://www.reddit.com/r/LocalLLaMA/comments/1muzxg9/daily_driving_glm_45_for_10_days_and_its_crazy/ | susmitds | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1muzxg9 | false | null | t3_1muzxg9 | /r/LocalLLaMA/comments/1muzxg9/daily_driving_glm_45_for_10_days_and_its_crazy/ | false | false | self | 1 | null |
Daily driving GLM 4.5 for 10 days – and it’s crazy how good it is at half the size of most frontier models | 1 | [removed] | 2025-08-20T00:14:11 | https://www.reddit.com/r/LocalLLaMA/comments/1muzqsq/daily_driving_glm_45_for_10_days_and_its_crazy/ | susmitds | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1muzqsq | false | null | t3_1muzqsq | /r/LocalLLaMA/comments/1muzqsq/daily_driving_glm_45_for_10_days_and_its_crazy/ | false | false | self | 1 | null |
Cleaning noisy OCR data for the purpose of training LLMs | 5 | I have some noisy OCR data. I want to train an LLM on it. What are the typical strategies/programs to clean noisy OCR data for the purpose of training LLMs? | 2025-08-20T00:03:43 | https://www.reddit.com/r/LocalLLaMA/comments/1muzi5k/cleaning_noisy_ocr_data_for_the_purpose_of/ | Franck_Dernoncourt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1muzi5k | false | null | t3_1muzi5k | /r/LocalLLaMA/comments/1muzi5k/cleaning_noisy_ocr_data_for_the_purpose_of/ | false | false | self | 5 | null |
llama.cpp compile issue on latest Windows 11 + Anaconda | 2 | I have been using llama.cpp without issue, but want to try to use the little 3060 mobile GPU I have.
Been struggling with building with CUDA support (the vanilla build works without issue). Following the guide below. Python 3.12.11.
[https://medium.com/@eddieoffermann/llama-cpp-python-with-cuda-support-on-windows-11-51a4dd295b25](https://m... | 2025-08-19T23:26:07 | https://www.reddit.com/r/LocalLLaMA/comments/1muymnu/llamacpp_compile_issue_on_latest_windows_11/ | btbluesky | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1muymnu | false | null | t3_1muymnu | /r/LocalLLaMA/comments/1muymnu/llamacpp_compile_issue_on_latest_windows_11/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'X36GmPbeXILRFb4JbX7H1vevSy1VrIXVhmtjIpEmDkg', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/X36GmPbeXILRFb4JbX7H1vevSy1VrIXVhmtjIpEmDkg.png?width=108&crop=smart&auto=webp&s=8d2f7869181595b9349a8dba761a08b87fed1fda', 'width': 108}, {'height': 144, 'url': 'h... |
Best Local Model to combine with Crush CLI, Roo Code, or Cline? | 3 | In your experimenting and testing of local models, what have you found is the best local model so far for you within the limits of your system? What kind of system do you run, have you been able to replace API's with local models in certain areas?
I've noticed that we're going into an era where you have to mix and ma... | 2025-08-19T23:09:35 | https://www.reddit.com/r/LocalLLaMA/comments/1muy87n/best_local_model_to_combine_with_crush_cli_roo/ | Extension-Dog7011 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1muy87n | false | null | t3_1muy87n | /r/LocalLLaMA/comments/1muy87n/best_local_model_to_combine_with_crush_cli_roo/ | false | false | self | 3 | null |
DeepSeek on the official webapp is way worse than I remember | 0 | So I tried to check whether the official DeepSeek website is using the new DeepSeek model, and holy shit: I had DeepThink and web search on, and it keeps losing context, and the intelligence seems way worse as well...
If this is the new model... it's not looking very good.
I love DeepSeek and I'm really hopi... | 2025-08-19T23:06:28 | https://www.reddit.com/r/LocalLLaMA/comments/1muy5hy/deepseek_on_the_official_webapp_is_way_worse_than/ | True_Requirement_891 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1muy5hy | false | null | t3_1muy5hy | /r/LocalLLaMA/comments/1muy5hy/deepseek_on_the_official_webapp_is_way_worse_than/ | false | false | self | 0 | null |
Best LLM for NL to SQL | 2 | Hey everyone! I'm working on a project to convert natural language into SQL queries, using Vanna AI for orchestration. I am planning to use the Pinecone vector database. Could someone recommend the best embedding model and LLM for this task? Please suggest options for both open-source and paid solutions. | 2025-08-19T22:56:20 | https://www.reddit.com/r/LocalLLaMA/comments/1muxwna/best_llm_for_nl_to_sql/ | EarLatter | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1muxwna | false | null | t3_1muxwna | /r/LocalLLaMA/comments/1muxwna/best_llm_for_nl_to_sql/ | false | false | self | 2 | null |
Open-source alternative to Sesame (Speech-to-Text + Text-to-Speech)? | 0 | Hi everyone,
I’ve recently tried the **Sesame demo** for audio-to-text and text-to-audio conversion, and I was really impressed with the quality. However, their open-source solution feels very limited and not nearly as good as the demo.
I’m looking for a **strong open-source alternative** that can provide a similar l... | 2025-08-19T22:50:55 | https://www.reddit.com/r/LocalLLaMA/comments/1muxrus/opensource_alternative_to_sesame_speechtotext/ | omar07ibrahim1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1muxrus | false | null | t3_1muxrus | /r/LocalLLaMA/comments/1muxrus/opensource_alternative_to_sesame_speechtotext/ | false | false | self | 0 | null |
Understanding DeepSeek-V3.1-Base Updates at a Glance | 201 | DeepSeek officially released DeepSeek-V3.1-Base a few hours ago. The model card has not been uploaded yet, so performance data is not available.
I have directly reviewed the model's configuration files, tokenizer, and other data, and combined this with test data published by the community to create a summary for ... | 2025-08-19T22:32:24 | Dr_Karminski | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1muxbqj | false | null | t3_1muxbqj | /r/LocalLLaMA/comments/1muxbqj/understanding_deepseekv31base_updates_at_a_glance/ | false | false | default | 201 | {'enabled': True, 'images': [{'id': 'mqcnus8py1kf1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/mqcnus8py1kf1.png?width=108&crop=smart&auto=webp&s=151a76d40ea68732778d991ba0383f2b7dd4574f', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/mqcnus8py1kf1.png?width=216&crop=smart&auto=we... | |
One Prompt Orchestration System for Claude Code - Built on Native CLI Features | 2 | Developed an orchestration layer that transforms Claude Code with a single prompt.
**Technical Architecture:**
- 7 specialized agents using Claude Code's native Task tool
- Built entirely on documented CLI features (no hacks)
- Pattern-based routing with automatic agent selection
- Hierarchical task decomposition
- Pe... | 2025-08-19T22:21:26 | https://www.reddit.com/r/LocalLLaMA/comments/1mux1sv/one_prompt_orchestration_system_for_claude_code/ | Narrow-Culture7388 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mux1sv | false | null | t3_1mux1sv | /r/LocalLLaMA/comments/1mux1sv/one_prompt_orchestration_system_for_claude_code/ | false | false | self | 2 | null |
Llama.cpp - non-AVX processors? | 2 | I don't suppose there's a build sitting somewhere for really old processors?
I need CUDA but no AVX. | 2025-08-19T22:11:23 | https://www.reddit.com/r/LocalLLaMA/comments/1muwsj6/llamacpp_nonavx_processors/ | Secure_Reflection409 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1muwsj6 | false | null | t3_1muwsj6 | /r/LocalLLaMA/comments/1muwsj6/llamacpp_nonavx_processors/ | false | false | self | 2 | null |
Multi-Agent System for Claude Code - 7 Specialized AI Agents | 0 | Built a multi-agent orchestration system that transforms Claude Code into a complete development platform.
**Technical details:**
- 7 specialized agents with defined roles and tool access
- Pattern-based automatic routing
- Hierarchical task decomposition
- Persistent memory across sessions
- 100% native Claude Code f... | 2025-08-19T22:10:37 | https://www.reddit.com/r/LocalLLaMA/comments/1muwrtj/multiagent_system_for_claude_code_7_specialized/ | Narrow-Culture7388 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1muwrtj | false | null | t3_1muwrtj | /r/LocalLLaMA/comments/1muwrtj/multiagent_system_for_claude_code_7_specialized/ | false | false | self | 0 | null |
During one of my internships I've built phiDelta: a fully local Phi-4 based agentic assistant with dynamic RAG, multimodal search, and other tool integrations. Dropping it here as well if anyone’s interested! | 4 | 2025-08-19T22:10:19 | https://v.redd.it/ddeo55aou1kf1 | yagellaaether | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1muwrjr | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/ddeo55aou1kf1/DASHPlaylist.mpd?a=1758233485%2CMjNjOTEwYmJmYTBjNjVlMGJlMWIwMTIwOTIxNmNjNmVmYzM0M2NiZDVmMzQyOTA1Mjk0MWMyNjUxNzgwYWIyOQ%3D%3D&v=1&f=sd', 'duration': 103, 'fallback_url': 'https://v.redd.it/ddeo55aou1kf1/DASH_1080.mp4?source=fallback', '... | t3_1muwrjr | /r/LocalLLaMA/comments/1muwrjr/during_one_of_my_internships_ive_built_phidelta_a/ | false | false | 4 | {'enabled': False, 'images': [{'id': 'ZXl3cWx6YW91MWtmMV0VJm6njORzkPg6xN62SLWfaEQV6Weajud80tibKDHW', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ZXl3cWx6YW91MWtmMV0VJm6njORzkPg6xN62SLWfaEQV6Weajud80tibKDHW.png?width=108&crop=smart&format=pjpg&auto=webp&s=cd7a475ffe8520fd2240416e4ec4e04b91f3a... | ||
Multi-Agent System for Claude Code - 7 Specialized AI Agents, No Dependencies | 1 | Built a multi-agent orchestration system for Claude Code that transforms it into a complete development platform.
**Technical Implementation:**
- 7 specialized agents (Orchestrator, Researcher, Planner, Implementer, Tester, Reviewer, Memory)
- Each agent has specific tool access and defined roles
- Uses Claude Code na... | 2025-08-19T22:05:36 | https://www.reddit.com/r/LocalLLaMA/comments/1muwn7x/multiagent_system_for_claude_code_7_specialized/ | Narrow-Culture7388 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1muwn7x | false | null | t3_1muwn7x | /r/LocalLLaMA/comments/1muwn7x/multiagent_system_for_claude_code_7_specialized/ | false | false | self | 1 | null |
After 5 years, Robots like to be controlled by another Robot 🤖😶 | 0 | Article: https://www.searchenginejournal.com/ai-systems-often-prefer-ai-written-content-study-finds/554025/ | 2025-08-19T21:47:15 | kvnptl_4400 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1muw6bt | false | null | t3_1muw6bt | /r/LocalLLaMA/comments/1muw6bt/after_5_years_robots_like_to_be_controlled_by/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 's31pr1vhq1kf1', 'resolutions': [{'height': 105, 'url': 'https://preview.redd.it/s31pr1vhq1kf1.png?width=108&crop=smart&auto=webp&s=fb835bcf66a91da5f36ad9840c076ca8f2615e70', 'width': 108}, {'height': 210, 'url': 'https://preview.redd.it/s31pr1vhq1kf1.png?width=216&crop=smart&auto=we... | |
What are you using for agentic coding? | 16 | Hey, lately I've found myself jumping from app to app when it comes to coding with agents.
- Cursor
- Windsurf
- Cline
- Roo Code
- Continue
- Augment Code
- Warp
The issue I've found lately is the value provided for how much I'm paying, I'm trying to keep costs low, I really liked the Auto featur... | 2025-08-19T21:44:03 | https://www.reddit.com/r/LocalLLaMA/comments/1muw3b8/what_are_you_using_for_agentic_coding/ | Extension-Dog7011 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1muw3b8 | false | null | t3_1muw3b8 | /r/LocalLLaMA/comments/1muw3b8/what_are_you_using_for_agentic_coding/ | false | false | self | 16 | null |
Kimi K2 Appreciation Post | 48 | Dipping my toe into r/LocalLLaMA community engagement after being an orbiter for some time.
Kimi K2 is... something else. And I just have to ramble about it.
For context, the vast majority of my LLM experience is with open models, as I have very little use cases for closed alternatives. Meta's llama is the only onli... | 2025-08-19T21:33:27 | https://www.reddit.com/r/LocalLLaMA/comments/1muvta3/kimi_k2_appreciation_post/ | SweetHomeAbalama0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1muvta3 | false | null | t3_1muvta3 | /r/LocalLLaMA/comments/1muvta3/kimi_k2_appreciation_post/ | false | false | self | 48 | null |
Why use fewer parameters when quantization is available? | 0 | Hey, I'm a noob trying to understand this. I tested Gemma3n-e4b-it-Q4M against Gemma3-12b-it-Q3KS and found that the first one works at the same speed as the second but performs better on my 8Gb GPU.
Why does the first model exist if the second one is better and faster?
Please be patient with your response—I’m stil... | 2025-08-19T21:24:00 | https://www.reddit.com/r/LocalLLaMA/comments/1muvk50/why_use_fewer_parameters_when_quantization_is/ | haterloco | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1muvk50 | false | null | t3_1muvk50 | /r/LocalLLaMA/comments/1muvk50/why_use_fewer_parameters_when_quantization_is/ | false | false | self | 0 | null |
Why doesn't this work at all? | 0 | 
my first ever prompt using local ai on my trusty 64gb flash drive & my work computer | 0 | 2025-08-19T20:53:14 | https://www.reddit.com/r/LocalLLaMA/comments/1muuprb/my_first_ever_prompt_using_local_ai_on_my_trusty/ | Nufty | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1muuprb | false | null | t3_1muuprb | /r/LocalLLaMA/comments/1muuprb/my_first_ever_prompt_using_local_ai_on_my_trusty/ | false | false | 0 | null | ||
Getting models to load in LM Studio [64GB Mac] | 0 | How to troubleshoot this? ... I have a 64GB Mac, and LM Studio Hardware info reports 48GB VRAM. But after grabbing a bunch of models* with ~32-48GB file size, most of them will not load (with the "insufficient system resources" message).
Am I being naïve thinking these should work in 48GB? Even after using the m... | 2025-08-19T20:48:07 | https://www.reddit.com/r/LocalLLaMA/comments/1muukrg/getting_models_to_load_in_lm_studio_64gb_mac/ | PracticlySpeaking | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1muukrg | false | null | t3_1muukrg | /r/LocalLLaMA/comments/1muukrg/getting_models_to_load_in_lm_studio_64gb_mac/ | false | false | self | 0 | null |
What to use for a local pixel art pipeline? | 4 | Hi all, I would like to install a local pipeline to generate pixel art (like [Pixel-Art.ai — AI Pixel Art Generator](https://pixel-art.ai/)).
I have experience with using ollama and openweb-ui, but not with image generating stuff. What tools would I need to do that?
I have an RTX3080 that I use for local llm's. | 2025-08-19T20:36:34 | https://www.reddit.com/r/LocalLLaMA/comments/1muu9sq/what_to_use_for_a_local_pixel_art_pipeline/ | mr_dfuse2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1muu9sq | false | null | t3_1muu9sq | /r/LocalLLaMA/comments/1muu9sq/what_to_use_for_a_local_pixel_art_pipeline/ | false | false | self | 4 | null |
brute-llama - A llama.cpp llama-server testbench | 20 | I introduce brute-llama. I needed a tool to sweep through llama-server parameters and options to brute force good configs and find anomalies. Here it is. It does nothing special. It just runs llama-server and checks in nested loops and plots results. | 2025-08-19T20:18:07 | https://github.com/crashr/brute-llama | muxxington | github.com | 1970-01-01T00:00:00 | 0 | {} | 1mutroq | false | null | t3_1mutroq | /r/LocalLLaMA/comments/1mutroq/brutellama_a_llamacpp_llamaserver_testbench/ | false | false | default | 20 | {'enabled': False, 'images': [{'id': '0kAodq2vpgwlNs8NkvpJTL0EKKMaEMge_I7TkUpawwA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0kAodq2vpgwlNs8NkvpJTL0EKKMaEMge_I7TkUpawwA.png?width=108&crop=smart&auto=webp&s=7c5a8722e8368bb2e8b105712e445b54563b67a5', 'width': 108}, {'height': 108, 'url': 'h... |
Using local LLM with low specs (4 Gb VRAM + 16 Gb RAM) | 2 | Hello! Does anyone here have experience with local LLMs in machines with low specs? Can they run it fine?
I have a laptop with 4 Gb VRAM and 16 Gb and I wanna try local LLMs for basic things for my job, like summarizing texts, comparing texts and so on.
I have asked some AIs to give me recommendations on local LLMs o... | 2025-08-19T20:05:57 | https://www.reddit.com/r/LocalLLaMA/comments/1mutfmk/using_local_llm_with_low_specs_4_gb_vram_16_gb_ram/ | vascaino-taoista | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mutfmk | false | null | t3_1mutfmk | /r/LocalLLaMA/comments/1mutfmk/using_local_llm_with_low_specs_4_gb_vram_16_gb_ram/ | false | false | self | 2 | null |
What's the Best "Non-Thinking" AI Models to Use? | 4 | I currently have DeepSeek R1 and the thinking model takes a lot of time for the things that I need for work. Do you guys have any recommendations for what to download on LM Studio?
My Specs:
CPU: AMD Ryzen 9 9950X3D | GPU: RTX 5090 | Ram: 64GB (2x32GB)
| 2025-08-19T20:03:19 | https://www.reddit.com/r/LocalLLaMA/comments/1mutcx8/whats_the_best_nonthinking_ai_models_to_use/ | Zexui | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mutcx8 | false | null | t3_1mutcx8 | /r/LocalLLaMA/comments/1mutcx8/whats_the_best_nonthinking_ai_models_to_use/ | false | false | self | 4 | null |
Why Your Prompts Need Version Control (And How Open Source ModelKits Make It Simple) | 2 | 2025-08-19T19:40:42 | https://medium.com/p/f8722fde4550 | iamjessew | medium.com | 1970-01-01T00:00:00 | 0 | {} | 1musqcg | false | null | t3_1musqcg | /r/LocalLLaMA/comments/1musqcg/why_your_prompts_need_version_control_and_how/ | false | false | default | 2 | {'enabled': False, 'images': [{'id': '66GbABOnxe6unPatM84n1kIXOecGN6aopawv4PycxdU', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/66GbABOnxe6unPatM84n1kIXOecGN6aopawv4PycxdU.jpeg?width=108&crop=smart&auto=webp&s=c78d2500873e51fc5cee1d20c00a17440931093c', 'width': 108}, {'height': 135, 'url': '... | |
Follow-up: Looking for a local RAG + chatbot solution for our machine manual | 13 | In my [previous post](https://www.reddit.com/r/LocalLLaMA/comments/1mqwrry/how_can_we_train_an_opensource_ai_model_with_a/), many suggested that I should use a **RAG agent** for our machine manual AI project. After doing some research, I couldn’t find a good **prebuilt solution** that fits our requirements.
**Require... | 2025-08-19T19:38:25 | https://www.reddit.com/r/LocalLLaMA/comments/1muso0a/followup_looking_for_a_local_rag_chatbot_solution/ | ReserveOdd1984 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1muso0a | false | null | t3_1muso0a | /r/LocalLLaMA/comments/1muso0a/followup_looking_for_a_local_rag_chatbot_solution/ | false | false | self | 13 | {'enabled': False, 'images': [{'id': 'POvJyTC3sc92GY8Ed-FMPj2qXHbNFPh5o6K7508kZ3c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/POvJyTC3sc92GY8Ed-FMPj2qXHbNFPh5o6K7508kZ3c.png?width=108&crop=smart&auto=webp&s=383e175b51ab9f7ae8e287a88bd0c360cc8eaa77', 'width': 108}, {'height': 108, 'url': 'h... |
I tried to get 600 dollars "deep think" for local models by making them argue with each other for hours. It's slow, but it's interesting | 77 | I've been thinking a lot about how we, as people, develop ideas. It's rarely a single, brilliant flash of insight. Our minds are shaped by the countless small interactions we have throughout the day—a conversation here, an article there. This environment of constant, varied input seems just as important as the act of t... | 2025-08-19T19:35:53 | https://www.reddit.com/r/LocalLLaMA/comments/1muslis/i_tried_to_get_600_dollars_deep_think_for_local/ | Temporary_Exam_3620 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1muslis | false | null | t3_1muslis | /r/LocalLLaMA/comments/1muslis/i_tried_to_get_600_dollars_deep_think_for_local/ | false | false | self | 77 | {'enabled': False, 'images': [{'id': '_rA3E0OLjQy18vrihMXYwc-BW-2XNt2JU7Yz6vvriAA', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/_rA3E0OLjQy18vrihMXYwc-BW-2XNt2JU7Yz6vvriAA.jpeg?width=108&crop=smart&auto=webp&s=b1ccc0e6f9a64361c02f6aff4406586a20a8918a', 'width': 108}, {'height': 216, 'url': ... |
New Intel LLM-Software Update & ARC PRO B60 DUAL Retail(?) Possibility | 1 | [removed] | 2025-08-19T19:31:23 | Mundane_Progress_898 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mush88 | false | null | t3_1mush88 | /r/LocalLLaMA/comments/1mush88/new_intel_llmsoftware_update_arc_pro_b60_dual/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': '9wuzbrpr21kf1', 'resolutions': [{'height': 107, 'url': 'https://preview.redd.it/9wuzbrpr21kf1.png?width=108&crop=smart&auto=webp&s=56ef4c57171c76a7932caa979bc22bac527fb1ce', 'width': 108}, {'height': 215, 'url': 'https://preview.redd.it/9wuzbrpr21kf1.png?width=216&crop=smart&auto=we... | |
(fan made) Introducing the Qwen Ball | 0 | Silly idea I had | 2025-08-19T19:30:51 | Own-Potential-2308 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1musgoz | false | null | t3_1musgoz | /r/LocalLLaMA/comments/1musgoz/fan_made_introducing_the_qwen_ball/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'xih4r8lu21kf1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/xih4r8lu21kf1.png?width=108&crop=smart&auto=webp&s=bdc2157e5cb2cecaedfebea3128299ad5e73ab0f', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/xih4r8lu21kf1.png?width=216&crop=smart&auto=we... | |
I tried to get "deep thought" for local models by making them argue with each other for hours. It's slow, but it's interesting | 1 | I've been thinking a lot about how we, as people, develop ideas. It's rarely a single, brilliant flash of insight. Our minds are shaped by the countless small interactions we have throughout the day—a conversation here, an article there. This environment of constant, varied input seems just as important as the act of t... | 2025-08-19T19:26:37 | https://www.reddit.com/r/LocalLLaMA/comments/1muscko/i_tried_to_get_deep_thought_for_local_models_by/ | Temporary_Exam_3620 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1muscko | false | null | t3_1muscko | /r/LocalLLaMA/comments/1muscko/i_tried_to_get_deep_thought_for_local_models_by/ | false | false | self | 1 | null |
How to add pdf extract abilities | 3 | I am using Dolphin 2.9.1 (Llama 3 70B
Uncensored) model, i am running it on runpod using open web ui , I have added web search to it using tavily api.
Now I want it to search get pdf and extract pdf and answer to me accordingly I know i can use RAG and upload pdf and then chat with it but cant I automate it like it re... | 2025-08-19T19:13:54 | https://www.reddit.com/r/LocalLLaMA/comments/1mus03v/how_to_add_pdf_extract_abilities/ | No_Paramedic6481 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mus03v | false | null | t3_1mus03v | /r/LocalLLaMA/comments/1mus03v/how_to_add_pdf_extract_abilities/ | false | false | self | 3 | null |
Can't be the only one who finds this funny | 165 | 2025-08-19T18:53:25 | Weary-Wing-6806 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1murf0u | false | null | t3_1murf0u | /r/LocalLLaMA/comments/1murf0u/cant_be_the_only_one_who_finds_this_funny/ | false | false | default | 165 | {'enabled': True, 'images': [{'id': 'm98jn9stu0kf1', 'resolutions': [{'height': 79, 'url': 'https://preview.redd.it/m98jn9stu0kf1.png?width=108&crop=smart&auto=webp&s=97cf23590af5fe059fb8b04ab6b71d7911584ac7', 'width': 108}, {'height': 158, 'url': 'https://preview.redd.it/m98jn9stu0kf1.png?width=216&crop=smart&auto=web... | ||
GPT-oss performs like Llama 4 Maverick on Fiction.liveBench | 40 | 2025-08-19T18:49:30 | Charuru | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1muraxw | false | null | t3_1muraxw | /r/LocalLLaMA/comments/1muraxw/gptoss_performs_like_llama_4_maverick_on/ | false | false | default | 40 | {'enabled': True, 'images': [{'id': 'r6tk8zj6v0kf1', 'resolutions': [{'height': 172, 'url': 'https://preview.redd.it/r6tk8zj6v0kf1.png?width=108&crop=smart&auto=webp&s=38b25b125e9a4f684b70dd18347e6f006fa2abb6', 'width': 108}, {'height': 344, 'url': 'https://preview.redd.it/r6tk8zj6v0kf1.png?width=216&crop=smart&auto=we... | ||
Nvidia charged with patent infringement for DGX technology. | 75 | Will the DGX spark ever launch? Maybe Nvidia can buy out this company. Is it time to just buy a AMD AI395 clone or Apple M5 chip mac mini for desktop development running LLM's locally. | 2025-08-19T18:34:16 | https://www.techzine.eu/news/infrastructure/133818/nvidia-under-fire-german-patent-lawsuit/ | Red_Phoenix_69 | techzine.eu | 1970-01-01T00:00:00 | 0 | {} | 1muqvcj | false | null | t3_1muqvcj | /r/LocalLLaMA/comments/1muqvcj/nvidia_charged_with_patent_infringement_for_dgx/ | false | false | default | 75 | {'enabled': False, 'images': [{'id': '0ADfQU54uUeiTQ2u5_D5DC4J8Vcm-QU6r2UvKAucrBk', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/0ADfQU54uUeiTQ2u5_D5DC4J8Vcm-QU6r2UvKAucrBk.png?width=108&crop=smart&auto=webp&s=8d5663b66749a55ade119af493a4faded323bced', 'width': 108}, {'height': 144, 'url': 'h... |
thank you for keeping me safe, gpt oss | 56 | 2025-08-19T18:16:48 | usualuzi | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1muqe04 | false | null | t3_1muqe04 | /r/LocalLLaMA/comments/1muqe04/thank_you_for_keeping_me_safe_gpt_oss/ | false | false | 56 | {'enabled': True, 'images': [{'id': '8SEL7LPxtHz-ZIXTMI88FCUCgAeqeTjCRHAHmcflm9Y', 'resolutions': [{'height': 117, 'url': 'https://preview.redd.it/9ifw9cpbp0kf1.png?width=108&crop=smart&auto=webp&s=1d8fe33132d252d65cb8f799934ec1fb4b52e333', 'width': 108}, {'height': 235, 'url': 'https://preview.redd.it/9ifw9cpbp0kf1.pn... | |||
Deepseek v3.1 scores 71.6% on aider – non-reasoning sota | 236 | ```
- dirname: 2025-08-19-17-08-33--deepseek-v3.1
test_cases: 225
model: deepseek/deepseek-chat
edit_format: diff
commit_hash: 32faf82
pass_rate_1: 41.3
pass_rate_2: 71.6
pass_num_1: 93
pass_num_2: 161
percent_cases_well_formed: 95.6
error_outputs: 13
num_malformed_responses: 11
num_with_malform... | 2025-08-19T18:09:56 | https://www.reddit.com/r/LocalLLaMA/comments/1muq72y/deepseek_v31_scores_716_on_aider_nonreasoning_sota/ | Similar-Cycle8413 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1muq72y | false | null | t3_1muq72y | /r/LocalLLaMA/comments/1muq72y/deepseek_v31_scores_716_on_aider_nonreasoning_sota/ | false | false | self | 236 | null |
With the rising trends of finetuning small language model, data engineering will be needed even more. | 27 | We're seeing a flood of compact language models hitting the market weekly - Gemma3 270M, LFM2 1.2B, SmolLM3 3B, and many others. The pattern is always the same: organizations release these models with a disclaimer essentially saying "this performs poorly out-of-the-box, but fine-tune it for your specific use case and w... | 2025-08-19T18:08:10 | https://www.reddit.com/r/LocalLLaMA/comments/1muq5bv/with_the_rising_trends_of_finetuning_small/ | dheetoo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1muq5bv | false | null | t3_1muq5bv | /r/LocalLLaMA/comments/1muq5bv/with_the_rising_trends_of_finetuning_small/ | false | false | self | 27 | null |
Advice of Macbook choice for LLM and ML related development | 2 | Hi all, I am changing my role a bit and switching more to AI/LLM prototyping/research/development in my company, which is more hands-on than my previous role of architecture and strategy.
This coincides with the time to upgrade my work laptop, and after reading, within my current budget are two options:
- M4 Max w...
Built agents using Qwen for Fortune 500 companies. Sharing my complete technical playbook (enterprise GPU clusters, fine-tuning, production challenges) | 1 | [removed] | 2025-08-19T17:55:02 | https://www.reddit.com/r/LocalLLaMA/comments/1muprtv/built_agents_using_qwen_for_fortune_500_companies/ | Low_Acanthisitta7686 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1muprtv | false | null | t3_1muprtv | /r/LocalLLaMA/comments/1muprtv/built_agents_using_qwen_for_fortune_500_companies/ | false | false | self | 1 | null |
ERNIE 4.5 jailbreak? | 6 | did anyone manage to execute any system prompt via llama.cpp jinja chat template? I've added some instructions into the "system" section of the template offered by llama.cpp but Ernie completely ignores them. Maybe Ernie 4.5 support in llama.cpp is incomplete?
This is the default template:
{%- if not add_generati... | 2025-08-19T17:52:05 | https://www.reddit.com/r/LocalLLaMA/comments/1mupotq/ernie_45_jailbreak/ | MelodicRecognition7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mupotq | false | null | t3_1mupotq | /r/LocalLLaMA/comments/1mupotq/ernie_45_jailbreak/ | false | false | self | 6 | null |
local ai agent option same as chatgpt agent option | 3 | hello, I'm a beginner in this area and I would like a 100% local and free option for chat gpt agent option that has 100% access to my browser. I think I will use gpt-oss-20B or llama 3.1 | 2025-08-19T17:33:28 | https://www.reddit.com/r/LocalLLaMA/comments/1mup5v7/local_ai_agent_option_same_as_chatgpt_agent_option/ | Unusual-Procedure971 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mup5v7 | false | null | t3_1mup5v7 | /r/LocalLLaMA/comments/1mup5v7/local_ai_agent_option_same_as_chatgpt_agent_option/ | false | false | self | 3 | null |
Deepseek v3.1 | 8 | Looks like there are two models in the collection, but only one is visible. We’ll need to guess whether the hidden one is V3.1 Instruct or DeepSeek Coder. | 2025-08-19T17:19:45 | Ok-Pattern9779 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1muorzm | false | null | t3_1muorzm | /r/LocalLLaMA/comments/1muorzm/deepseek_v31/ | false | false | default | 8 | {'enabled': True, 'images': [{'id': 'weoq6pygf0kf1', 'resolutions': [{'height': 46, 'url': 'https://preview.redd.it/weoq6pygf0kf1.jpeg?width=108&crop=smart&auto=webp&s=f817e5b584407bc9231c29e1f7d0037c375af30f', 'width': 108}, {'height': 92, 'url': 'https://preview.redd.it/weoq6pygf0kf1.jpeg?width=216&crop=smart&auto=we... |