| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Are ~70B Models Going Out of Fashion? | 146 | Around a year and a half on from my post about 24GB vs 48GB VRAM, I personally find that the scene has changed a lot in terms of what sizes of models are popularly available and used.
Back then, 48GB VRAM for 70B models at 4BPW was more or less the gold standard for local inference. This is back when The Bloke was st... | 2025-07-27T10:57:30 | https://www.reddit.com/r/LocalLLaMA/comments/1majfwi/are_70b_models_going_out_of_fashion/ | HvskyAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1majfwi | false | null | t3_1majfwi | /r/LocalLLaMA/comments/1majfwi/are_70b_models_going_out_of_fashion/ | false | false | self | 146 | null |
Surprise surprise!! | 1,006 | 2025-07-27T10:55:19 | GoodGuyLafarge | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1majemr | false | null | t3_1majemr | /r/LocalLLaMA/comments/1majemr/surprise_surprise/ | false | false | default | 1,006 | {'enabled': True, 'images': [{'id': 'k64e9lwtdeff1', 'resolutions': [{'height': 98, 'url': 'https://preview.redd.it/k64e9lwtdeff1.png?width=108&crop=smart&auto=webp&s=654734e23bb5447e379cf550989c3fbafc64f227', 'width': 108}, {'height': 197, 'url': 'https://preview.redd.it/k64e9lwtdeff1.png?width=216&crop=smart&auto=web... |
Sell my 5070ti to get a 3090 | 0 | As the title suggests, I am thinking of selling my 16gb 5070 ti, but I’d get a 3090 (and some money back in my pocket) to run local LLM’s.
I’m building a pipeline that will essentially help me gather news/tech news and keep me informed so I can ask it specific questions and save time instead of watching many differen... | 2025-07-27T10:41:07 | https://www.reddit.com/r/LocalLLaMA/comments/1maj65f/sell_my_5070ti_to_get_a_3090/ | Finallyhaveredditt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1maj65f | false | null | t3_1maj65f | /r/LocalLLaMA/comments/1maj65f/sell_my_5070ti_to_get_a_3090/ | false | false | self | 0 | null |
Sub 3k best local LLM setup upgrade from 4070 super ti setup? | 1 | [removed] | 2025-07-27T10:35:17 | https://www.reddit.com/r/LocalLLaMA/comments/1maj2w8/sub_3k_best_local_llm_setup_upgrade_from_4070/ | thecookingsenpai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1maj2w8 | false | null | t3_1maj2w8 | /r/LocalLLaMA/comments/1maj2w8/sub_3k_best_local_llm_setup_upgrade_from_4070/ | false | false | self | 1 | null |
I tried implementing the CRISP paper from Google Deepmind in Python | 37 | I spent the weekend analyzing this open-source PyTorch implementation of Google's [CRISP paper (arXiv:2505.11471)](https://arxiv.org/pdf/2505.11471). The repository provides a direct, hands-on comparison between CRISP's in-training clustering and the more traditional post-hoc approach.
**My conclusion: The repository'... | 2025-07-27T10:26:29 | https://www.reddit.com/r/LocalLLaMA/comments/1maixye/i_tried_implementing_the_crisp_paper_from_google/ | Ok_Rub1689 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1maixye | false | null | t3_1maixye | /r/LocalLLaMA/comments/1maixye/i_tried_implementing_the_crisp_paper_from_google/ | false | false | self | 37 | null |
A new 21B-A3B model that can run 30 token/s on i9 CPU | 243 | 2025-07-27T10:11:52 | https://www.reddit.com/r/LocalLLaMA/comments/1maipzo/a_new_21ba3b_model_that_can_run_30_tokens_on_i9/ | BreakfastFriendly728 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1maipzo | false | null | t3_1maipzo | /r/LocalLLaMA/comments/1maipzo/a_new_21ba3b_model_that_can_run_30_tokens_on_i9/ | false | false | self | 243 | {'enabled': False, 'images': [{'id': 'oKW2EqBWyvLdyTeoAGbQ_-d8-23kNb7Q9kBmGRYJM1E', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/oKW2EqBWyvLdyTeoAGbQ_-d8-23kNb7Q9kBmGRYJM1E.png?width=108&crop=smart&auto=webp&s=33918a5d809431c198816a64f4512804c3bb5409', 'width': 108}, {'height': 116, 'url': 'h... | |
PowerInfer/SmallThinker-21BA3B-Instruct · Hugging Face | 63 | 2025-07-27T10:11:05 | https://huggingface.co/PowerInfer/SmallThinker-21BA3B-Instruct | Dark_Fire_12 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1maipjy | false | null | t3_1maipjy | /r/LocalLLaMA/comments/1maipjy/powerinfersmallthinker21ba3binstruct_hugging_face/ | false | false | default | 63 | {'enabled': False, 'images': [{'id': 'oKW2EqBWyvLdyTeoAGbQ_-d8-23kNb7Q9kBmGRYJM1E', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/oKW2EqBWyvLdyTeoAGbQ_-d8-23kNb7Q9kBmGRYJM1E.png?width=108&crop=smart&auto=webp&s=33918a5d809431c198816a64f4512804c3bb5409', 'width': 108}, {'height': 116, 'url': 'h... | |
A new 21B-A3B model which can run 20 token/s on i9 14900 | 1 | 2025-07-27T10:09:52 | https://www.reddit.com/r/LocalLLaMA/comments/1maioul/a_new_21ba3b_model_which_can_run_20_tokens_on_i9/ | BreakfastFriendly728 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1maioul | false | null | t3_1maioul | /r/LocalLLaMA/comments/1maioul/a_new_21ba3b_model_which_can_run_20_tokens_on_i9/ | false | false | 1 | null | ||
RTX 4090 vs RTX 5060 ....Is the 5060 even worth considering for local LLMs? | 0 | Been seeing some hype around the upcoming RTX 5060 (Blackwell series), and I wanted to throw this out to folks doing serious local inference: how does it *really* stack up against the tried-and-tested 4090?
If your goal is real local AI use (fast generation, agent chains, even fine-tuning), don’t let the generational... | 2025-07-27T08:54:05 | https://www.reddit.com/r/LocalLLaMA/comments/1mahjoo/rtx_4090_vs_rtx_5060_is_the_5060_even_worth/ | No_Edge2098 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mahjoo | false | null | t3_1mahjoo | /r/LocalLLaMA/comments/1mahjoo/rtx_4090_vs_rtx_5060_is_the_5060_even_worth/ | false | false | self | 0 | null |
I built a viral AI tool for studying- got 12k users within 1 month of launch | 0 | recently built an AI tool called NexNotes AI, this AI tool can generate multiple things just from a single PPT, PDF, DOC, image or even an article- like 5 AI tools combined in a single tool. Here's what it does - Generate TimeTables from content (new) Generate ppts from prompts (customizable)
Generate mind maps
Genera... | 2025-07-27T08:47:41 | https://v.redd.it/s6zopkf4rdff1 | anonymously_geek | /r/LocalLLaMA/comments/1mahg8p/i_built_a_viral_ai_tool_for_studying_got_12k/ | 1970-01-01T00:00:00 | 0 | {} | 1mahg8p | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/s6zopkf4rdff1/DASHPlaylist.mpd?a=1756327670%2CZTlkOWQ2ZmVlOWI4Yzg0NTBlZGI1OGZhMDFjNjAxNGJhYjhiY2FkZTkyNTBmYzRiYzhlYTI0YjMzMmY2NGNhZA%3D%3D&v=1&f=sd', 'duration': 135, 'fallback_url': 'https://v.redd.it/s6zopkf4rdff1/DASH_1080.mp4?source=fallback', '... | t3_1mahg8p | /r/LocalLLaMA/comments/1mahg8p/i_built_a_viral_ai_tool_for_studying_got_12k/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'dG8zd3ltZTRyZGZmMf6fSGNinQswqafWdBbE2gwmCEULcEwTIZ-1AJncB5ZD', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dG8zd3ltZTRyZGZmMf6fSGNinQswqafWdBbE2gwmCEULcEwTIZ-1AJncB5ZD.png?width=108&crop=smart&format=pjpg&auto=webp&s=75e7542420c0f5877d3f415f4ce074b07d5dc... | |
I do not build a new ai agent without first setting up monitoring and eval dataset anymore. Do you? What FOSS do you use for that? | 0 | 2025-07-27T08:26:22 | https://opensourcedisc.substack.com/p/opensourcediscovery-99-opik | opensourcecolumbus | opensourcedisc.substack.com | 1970-01-01T00:00:00 | 0 | {} | 1mah4oj | false | null | t3_1mah4oj | /r/LocalLLaMA/comments/1mah4oj/i_do_not_build_a_new_ai_agent_without_first/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'B2M9C81m1pa-nY2djx5F1qOYPYxqsNz0WBH1hUXl0DM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/B2M9C81m1pa-nY2djx5F1qOYPYxqsNz0WBH1hUXl0DM.jpeg?width=108&crop=smart&auto=webp&s=5e9832bd37e35af858d348e27f797955845b550b', 'width': 108}, {'height': 108, 'url': '... | |
4-bit Qwen ain’t fast, but it’s family. | 0 | 2025-07-27T08:01:56 | NullPointerJack | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1magr76 | false | null | t3_1magr76 | /r/LocalLLaMA/comments/1magr76/4bit_qwen_aint_fast_but_its_family/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '6pc6medyidff1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/6pc6medyidff1.png?width=108&crop=smart&auto=webp&s=4c8262902da72640243312ae67071dd7b4048f4e', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/6pc6medyidff1.png?width=216&crop=smart&auto=we... | ||
AlphaGo Moment for Model Architecture Discovery | 1 | 2025-07-27T07:11:13 | https://arxiv.org/pdf/2507.18074 | vladlearns | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1mafyyp | false | null | t3_1mafyyp | /r/LocalLLaMA/comments/1mafyyp/alphago_moment_for_model_architecture_discovery/ | false | false | default | 1 | null | |
I built an Overlay AI. | 1 | I built an Overlay AI.
source code: [https://github.com/kamlendras/aerogel](https://github.com/kamlendras/aerogel) | 2025-07-27T06:48:01 | https://v.redd.it/0l6ttkdl5dff1 | kamlendras | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1maflh5 | false | {'reddit_video': {'bitrate_kbps': 5000, 'fallback_url': 'https://v.redd.it/0l6ttkdl5dff1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'width': 1920, 'scrubber_media_url': 'https://v.redd.it/0l6ttkdl5dff1/DASH_96.mp4', 'dash_url': 'https://v.redd.it/0l6ttkdl5dff1/DASHPlaylist.mpd?a=1756190899%2CMD... | t3_1maflh5 | /r/LocalLLaMA/comments/1maflh5/i_built_an_overlay_ai/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/emswOXhrZGw1ZGZmMSoKrI-6nwPl5Obl65Jwi_LBPrT12vkaeVKztVPL6I1W.png?format=pjpg&auto=webp&s=fad99bc3e71313ea5ab3b0a0ad86e73e8dfa593e', 'width': 1920, 'height': 1080}, 'resolutions': [{'url': 'https://external-preview.redd.it/emswOXhrZGw1ZGZmMSoKrI-6nwPl5Obl6... | |
Just find it for you ❤️❤️ | 1 | 2025-07-27T06:13:50 | https://v.redd.it/bm3qkgxozcff1 | Ok_Excuse_3304 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1maf1s3 | false | {'reddit_video': {'bitrate_kbps': 2400, 'fallback_url': 'https://v.redd.it/bm3qkgxozcff1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 1280, 'width': 720, 'scrubber_media_url': 'https://v.redd.it/bm3qkgxozcff1/DASH_96.mp4', 'dash_url': 'https://v.redd.it/bm3qkgxozcff1/DASHPlaylist.mpd?a=1756188845%2CZDcyN... | t3_1maf1s3 | /r/LocalLLaMA/comments/1maf1s3/just_find_it_for_you/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/c3R5Z3JtMXB6Y2ZmMT8Kgzqc3y2_K1RXk23VMGqT5u31BbGHQMrhOKPWdOwW.png?format=pjpg&auto=webp&s=df8fcb85c28a853fd59ae1d5877b4810447eeafa', 'width': 659, 'height': 1173}, 'resolutions': [{'url': 'https://external-preview.redd.it/c3R5Z3JtMXB6Y2ZmMT8Kgzqc3y2_K1RXk2... | ||
I just created with meta ai | 1 | 2025-07-27T06:06:30 | https://v.redd.it/8udrp4vdycff1 | Ok_Excuse_3304 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1maexli | false | {'reddit_video': {'bitrate_kbps': 2400, 'fallback_url': 'https://v.redd.it/8udrp4vdycff1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'width': 720, 'scrubber_media_url': 'https://v.redd.it/8udrp4vdycff1/DASH_96.mp4', 'dash_url': 'https://v.redd.it/8udrp4vdycff1/DASHPlaylist.mpd?a=1756188407%2CMDY3Z... | t3_1maexli | /r/LocalLLaMA/comments/1maexli/i_just_created_with_meta_ai/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/cjF3NDFmdWR5Y2ZmMR1sR3D_KKvg4gmDEo6xgEfTOxQ1QVauTtuLw1uw2Yb9.png?format=pjpg&auto=webp&s=37895f8c0a4265da059a9d3a33d4f1511af03287', 'width': 720, 'height': 720}, 'resolutions': [{'url': 'https://external-preview.redd.it/cjF3NDFmdWR5Y2ZmMR1sR3D_KKvg4gmDEo6... | ||
What will happen to an llm when you double the RoPE scaling factor? | 1 | I diffed the config.json between Llama-3\_3-Nemotron-Super-49B-v1 and Llama-3\_3-Nemotron-Super-49B-v1\_5. I noticed the only difference is that the newer model doubled the RoPE scaling factor from 8 to 16. What effect does this make to the model's performance? | 2025-07-27T06:01:47 | https://www.reddit.com/r/LocalLLaMA/comments/1maeuuo/what_will_happen_to_an_llm_when_you_double_the/ | Ok_Warning2146 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1maeuuo | false | null | t3_1maeuuo | /r/LocalLLaMA/comments/1maeuuo/what_will_happen_to_an_llm_when_you_double_the/ | false | false | self | 1 | null |
this actually made me feel so relieved haha | 1 | 2025-07-27T05:36:49 | https://www.reddit.com/r/LocalLLaMA/comments/1maeg4y/this_actually_made_me_feel_so_relieved_haha/ | Current_Housing_7294 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1maeg4y | false | null | t3_1maeg4y | /r/LocalLLaMA/comments/1maeg4y/this_actually_made_me_feel_so_relieved_haha/ | false | false | 1 | null | ||
Wan 2.2 coming out Monday July 28th | 1 | 2025-07-27T05:17:50 | Comed_Ai_n | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mae4yz | false | null | t3_1mae4yz | /r/LocalLLaMA/comments/1mae4yz/wan_22_coming_out_monday_july_28th/ | false | false | default | 1 | {'images': [{'source': {'url': 'https://preview.redd.it/6fhk0wjppcff1.jpeg?auto=webp&s=fec6294a55a3e104e1bb18786c446c76d3380ada', 'width': 1320, 'height': 738}, 'resolutions': [{'url': 'https://preview.redd.it/6fhk0wjppcff1.jpeg?width=108&crop=smart&auto=webp&s=b03db1c6122c5627833ddc9e2fdbcb6d6f7ed744', 'width': 108, '... | ||
Summarize medium length text on local model with 8gb vram | 6 | I have a 6000 words text length, and I would like to summarize the text and extract the most interesting points.
I don't mind waiting for the response if it means getting better approach, what I tried so far was splitting the text into small chunks and then summarize each chunk (while having small over lap window), th... | 2025-07-27T05:01:48 | https://www.reddit.com/r/LocalLLaMA/comments/1madv3y/summarize_medium_length_text_on_local_model_with/ | ResponsibleTruck4717 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1madv3y | false | null | t3_1madv3y | /r/LocalLLaMA/comments/1madv3y/summarize_medium_length_text_on_local_model_with/ | false | false | self | 6 | null |
What inference engine should I use to fully use my budget rig? | 0 | I’ve got 2x 3090 with 128gb of RAM on a 16-core Ryzen 9. What should I use so that I can fully load the GPUs and also the CPU/RAM? Will Ollama automatically use what I put in front of it? | 2025-07-27T04:58:59 | https://www.reddit.com/r/LocalLLaMA/comments/1madt6e/what_inference_engine_should_i_use_to_fully_use/ | bidet_enthusiast | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1madt6e | false | null | t3_1madt6e | /r/LocalLLaMA/comments/1madt6e/what_inference_engine_should_i_use_to_fully_use/ | false | false | self | 0 | null |
Anyone else been using the new nvidia/Llama-3_3-Nemotron-Super-49B-v1_5 model? | 49 | It's great! It's a clear step above Qwen3 32b imo. I'd recommend trying it out
My experience with it:
- it generates far less "slop" than Qwen models
- it handles long context really well
- it easily handles trick questions like "What should be the punishment for looking at your opponent's board in chess?"
- handled a... | 2025-07-27T04:43:06 | https://www.reddit.com/r/LocalLLaMA/comments/1madjq6/anyone_else_been_using_the_new_nvidiallama3/ | kevin_1994 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1madjq6 | false | null | t3_1madjq6 | /r/LocalLLaMA/comments/1madjq6/anyone_else_been_using_the_new_nvidiallama3/ | false | false | self | 49 | null |
How Are You Running Multimodal (Text-Image) Models Locally? | 4 | Honestly, pretty much the question in the Header. Specifically, I'm trying to run InternVL3-78B or the new Intern-S1 model locally, but it's a challenge. VLLM and lmserve support the InternVL models, but appear to be GPU-only, and llama.cpp seems flaky at best when it comes to running them. (Massive hallucinations, err... | 2025-07-27T04:22:32 | https://www.reddit.com/r/LocalLLaMA/comments/1mad6sy/how_are_you_running_multimodal_textimage_models/ | Stickman561 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mad6sy | false | null | t3_1mad6sy | /r/LocalLLaMA/comments/1mad6sy/how_are_you_running_multimodal_textimage_models/ | false | false | self | 4 | null |
Claude Code Alternative Recommendations? | 3 | Hey folks, I'm a self-hosting noob looking for recommendations for good self-hosted/foss/local/private/etc alternative to Claude Code's CLI tool. I recently started using at work and am blown away by how good it is. Would love to have something similar for myself. I have a 12GB VRAM RTX 3060 GPU with Ollama running in ... | 2025-07-27T03:51:36 | https://www.reddit.com/r/LocalLLaMA/comments/1macmej/claude_code_alternative_recommendations/ | VashyTheNexian | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1macmej | false | null | t3_1macmej | /r/LocalLLaMA/comments/1macmej/claude_code_alternative_recommendations/ | false | false | self | 3 | null |
Tencent releases Hunyuan3D World Model 1.0 - first open-source 3D world generation model | 578 | 2025-07-27T02:28:05 | https://x.com/TencentHunyuan/status/1949288986192834718 | pseudoreddituser | x.com | 1970-01-01T00:00:00 | 0 | {} | 1mab2i2 | false | null | t3_1mab2i2 | /r/LocalLLaMA/comments/1mab2i2/tencent_releases_hunyuan3d_world_model_10_first/ | false | false | default | 578 | null | |
Strategies for handling transient Server-Sent Events (SSE) from LLM responses | 5 | This is less related to models, and more related to model interactions, but would love for the community to offer feedback on an internal debate.
We see a lot of traffic flow through our oss edge/service proxy for LLM-based apps. This includes local models served via vLLM and Ollama. One failure mode that most recentl... | 2025-07-27T02:26:04 | https://www.reddit.com/r/LocalLLaMA/comments/1mab16n/strategies_for_handling_transient_serversent/ | AdditionalWeb107 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mab16n | false | null | t3_1mab16n | /r/LocalLLaMA/comments/1mab16n/strategies_for_handling_transient_serversent/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': '0ZWoFzGweGNW0rtaFiJo7cgwtA2lmAaS7it_7nc7p60', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0ZWoFzGweGNW0rtaFiJo7cgwtA2lmAaS7it_7nc7p60.png?width=108&crop=smart&auto=webp&s=e36c3744ed0552e2b02b7b82f9390cbd418feb4b', 'width': 108}, {'height': 108, 'url': 'h... |
How do LLMs understand massive csv data, sometimes even databases? | 2 | I see several tools nowadays that when you upload a csv file, it lets you talk to the LLM about the data in these files, what kind of parsing is done here (I’ve tried excel parsing in the past, but it’s no where this good)? Sometimes this works with databases as well. Really curious about the underlying approach to thi... | 2025-07-27T02:07:27 | https://www.reddit.com/r/LocalLLaMA/comments/1maao56/how_do_llms_understand_massive_csv_data_sometimes/ | subtle-being | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1maao56 | false | null | t3_1maao56 | /r/LocalLLaMA/comments/1maao56/how_do_llms_understand_massive_csv_data_sometimes/ | false | false | self | 2 | null |
Intern-S1 just dropped: A 235B MoE + 6B Vision open-source model trained on 5T science-focused tokens. They're claiming SOTA on scientific tasks — has anyone tried it yet? Could this be the ultimate open-source research assistant? | 1 | [removed] | 2025-07-27T01:49:28 | https://www.reddit.com/r/LocalLLaMA/comments/1maab1p/interns1_just_dropped_a_235b_moe_6b_vision/ | SmartFlowAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1maab1p | false | null | t3_1maab1p | /r/LocalLLaMA/comments/1maab1p/interns1_just_dropped_a_235b_moe_6b_vision/ | false | false | 1 | null | |
Intern-S1 just dropped: A 235B MoE + 6B Vision open-source model trained on 5T science-focused tokens. They're claiming SOTA on scientific tasks — has anyone tried it yet? Could this be the ultimate open-source research assistant? | 1 | [removed] | 2025-07-27T01:44:21 | https://www.reddit.com/r/LocalLLaMA/comments/1maa7dc/interns1_just_dropped_a_235b_moe_6b_vision/ | vansinhu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1maa7dc | false | null | t3_1maa7dc | /r/LocalLLaMA/comments/1maa7dc/interns1_just_dropped_a_235b_moe_6b_vision/ | false | false | 1 | null | |
How do I plug second psu into something so it will run my other gpu’s- Corsair hx1500i power supply | 4 | Hey LocalLlama
I’m building a rig with 6x 3090 and I have the motherboard and 3 GPU’s connected to one Corsair hx1500i.
It seems that the other hx1500i power supply will not turn on at all and I think it’s because it needs to have an active motherboard cable plugged in.
Does anyone know how to address this? | 2025-07-27T01:23:26 | https://www.reddit.com/r/LocalLLaMA/comments/1ma9t22/how_do_i_plug_second_psu_into_something_so_it/ | Business-Weekend-537 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ma9t22 | false | null | t3_1ma9t22 | /r/LocalLLaMA/comments/1ma9t22/how_do_i_plug_second_psu_into_something_so_it/ | false | false | self | 4 | null |
Local LLM is more important than ever | 309 | 2025-07-27T00:40:57 | https://www.reddit.com/r/LocalLLaMA/comments/1ma8yua/local_llm_is_more_important_than_ever/ | NeedleworkerDull7886 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ma8yua | false | null | t3_1ma8yua | /r/LocalLLaMA/comments/1ma8yua/local_llm_is_more_important_than_ever/ | false | false | 309 | null | ||
Have you ever experienced an AI-Based Romantic Relationship or Friendship? | 1 | [removed] | 2025-07-27T00:23:26 | https://www.reddit.com/r/LocalLLaMA/comments/1ma8mcc/have_you_ever_experienced_an_aibased_romantic/ | Legitimate-String843 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ma8mcc | false | null | t3_1ma8mcc | /r/LocalLLaMA/comments/1ma8mcc/have_you_ever_experienced_an_aibased_romantic/ | false | false | self | 1 | null |
South Park Trump Deepfake - How do you think they made it? | 0 | Anyone have any thoughts on how Trey and Matt made the Trump PSA in the season 27 premiere this week? Lord knows that didn't come out of Veo or Sora.
https://x.com/HuffPostEnt/status/1948308665125011945 | 2025-07-27T00:19:32 | https://www.reddit.com/r/LocalLLaMA/comments/1ma8jks/south_park_trump_deepfake_how_do_you_think_they/ | mj3815 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ma8jks | false | null | t3_1ma8jks | /r/LocalLLaMA/comments/1ma8jks/south_park_trump_deepfake_how_do_you_think_they/ | false | false | self | 0 | null |
Hey everyone, I'm new here, help please | 0 | Yo, I’m new to this whole local AI model thing. My setup’s got 16GB RAM and a GTX1650 with 4GB VRAM—yeah, I know it’s weak.
I started with the model **mythomax-l2-13b.Q5\_K\_S.gguf** (yeah, kinda overkill for my setup) running on **oobabooga/text-generation-webui**. First time I tried it, everything worked fine—chat m... | 2025-07-27T00:17:48 | https://www.reddit.com/r/LocalLLaMA/comments/1ma8iez/hay_everyone_im_new_here_help_please/ | -Fibon4cci | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ma8iez | false | null | t3_1ma8iez | /r/LocalLLaMA/comments/1ma8iez/hay_everyone_im_new_here_help_please/ | false | false | self | 0 | null |
Best vLLM for pill imprint/textOCR? | 0 | Testing Qwen2.5-VL-7B for pill/imprint text extraction.
Wondering if any of you would know of a vLLM that would work well for this use case.
Looking for best options for pharmaceutical OCR (imprint codes, dosages) that are:
- More accurate
- Easier RunPod deployment
- Better price/performance
Any experience with LL... | 2025-07-27T00:05:06 | https://www.reddit.com/r/LocalLLaMA/comments/1ma89au/best_vllm_for_pill_imprinttextocr/ | Virtual_Attitude2025 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ma89au | false | null | t3_1ma89au | /r/LocalLLaMA/comments/1ma89au/best_vllm_for_pill_imprinttextocr/ | false | false | self | 0 | null |
FULL Lovable Agent System Prompt and Tools [UPDATED] | 15 | (Latest update: 27/07/2025)
I've just extracted the FULL Lovable Agent system prompt and internal tools (Latest update). Over 600 lines (Around 10k tokens).
You can check it out here: https://github.com/x1xhlol/system-prompts-and-models-of-ai-tools/ | 2025-07-27T00:04:34 | https://www.reddit.com/r/LocalLLaMA/comments/1ma88wd/full_lovable_agent_system_prompt_and_tools_updated/ | Independent-Box-898 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ma88wd | false | null | t3_1ma88wd | /r/LocalLLaMA/comments/1ma88wd/full_lovable_agent_system_prompt_and_tools_updated/ | false | false | self | 15 | {'enabled': False, 'images': [{'id': 'NODN066BA8UYUXFthrn_kVBJ2bZkZU8gTZ9RETJrKts', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NODN066BA8UYUXFthrn_kVBJ2bZkZU8gTZ9RETJrKts.png?width=108&crop=smart&auto=webp&s=220e317b01052edf2315c45470afafcb69ba4f39', 'width': 108}, {'height': 108, 'url': 'h... |
WHAT SHOULD I USE? | 0 | I have a bunch of documents that have this grid-like formation and I wanted to build a script to extract the info in JSON format 1.B,D 2.B 3. A,B,E.....etc. Tried all the AI models, tried multiple OCR tools (Tesseract, Kraken), I even tried Docling but I couldn't get it to work. Any suggestions? Thanks
https://preview.r... | 2025-07-26T23:38:07 | https://www.reddit.com/r/LocalLLaMA/comments/1ma7oyv/what_should_i_use/ | Champ4real | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ma7oyv | false | null | t3_1ma7oyv | /r/LocalLLaMA/comments/1ma7oyv/what_should_i_use/ | false | false | 0 | null | |
HIP: Enable Matrix cores for MMQ Kernels, Enable stream-K for CDNA 3 by deepsek · Pull Request #14624 · ggml-org/llama.cpp | 8 | Improved performance on AMD GPUs in llama.cpp | 2025-07-26T22:42:11 | https://github.com/ggml-org/llama.cpp/pull/14624 | jacek2023 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1ma6igb | false | null | t3_1ma6igb | /r/LocalLLaMA/comments/1ma6igb/hip_enable_matrix_cores_for_mmq_kernels_enable/ | false | false | default | 8 | {'enabled': False, 'images': [{'id': '7Doe4o1YnyO5Jmt3Zj55VClABNt6WhpoynjaS-g_S8M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7Doe4o1YnyO5Jmt3Zj55VClABNt6WhpoynjaS-g_S8M.png?width=108&crop=smart&auto=webp&s=a137f9385d081fc502fa321cd23f2304da161a9b', 'width': 108}, {'height': 108, 'url': 'h... |
FULL Lovable Agent System Prompt and Tools [UPDATED] | 0 | (Latest update: 27/07/2025)
I've just extracted the FULL Lovable Agent system prompt and internal tools (Latest update). Over 600 lines (Around 10k tokens).
You can check it out here: [https://github.com/x1xhlol/system-prompts-and-models-of-ai-tools/tree/main/Lovable](https://github.com/x1xhlol/system-prompts-and-mod... | 2025-07-26T22:38:36 | https://www.reddit.com/r/LocalLLaMA/comments/1ma6fnl/full_lovable_agent_system_prompt_and_tools_updated/ | Independent-Box-898 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ma6fnl | false | null | t3_1ma6fnl | /r/LocalLLaMA/comments/1ma6fnl/full_lovable_agent_system_prompt_and_tools_updated/ | false | false | self | 0 | null |
Chatterbox Tts python version | 1 | My question is what version of my python does chatter tts need to run correctly. I think I saw somewhere saying it needs version 3.10.8 but I also have stable diffusion running on my computer which becomes buggy if I change from 3.10.6. Would chatterbox still function fine on 3.10.6 or would I need to change it | 2025-07-26T22:32:53 | https://www.reddit.com/r/LocalLLaMA/comments/1ma6b7j/chatterbox_tts_python_version/ | StrangeMan060 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ma6b7j | false | null | t3_1ma6b7j | /r/LocalLLaMA/comments/1ma6b7j/chatterbox_tts_python_version/ | false | false | self | 1 | null |
New AI architecture delivers 100x faster reasoning than LLMs with just 1,000 training examples | 445 | What are people's thoughts on Sapient Intelligence's recent paper? Apparently, they developed a new architecture called Hierarchical Reasoning Model (HRM) that performs as well as LLMs on complex reasoning tasks with significantly less training samples and examples. | 2025-07-26T22:32:47 | https://venturebeat.com/ai/new-ai-architecture-delivers-100x-faster-reasoning-than-llms-with-just-1000-training-examples/ | Accomplished-Copy332 | venturebeat.com | 1970-01-01T00:00:00 | 0 | {} | 1ma6b57 | false | null | t3_1ma6b57 | /r/LocalLLaMA/comments/1ma6b57/new_ai_architecture_delivers_100x_faster/ | false | false | default | 445 | {'enabled': False, 'images': [{'id': 'eVOwhU3sAnTrs2xqUPBQNAY5Bs-WJtSTMywCJfCc4LM', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/eVOwhU3sAnTrs2xqUPBQNAY5Bs-WJtSTMywCJfCc4LM.png?width=108&crop=smart&auto=webp&s=5a6925cbb0e94e0ba7147f6bddbacbfbacabb3ba', 'width': 108}, {'height': 121, 'url': 'h... |
VRAM sweet spot | 3 | What is the vram sweet spot these days? 48gb was for a while, but now I've seen different numbers being posted. Curious what others think. I think its still the 24 to 48gb range, but depends how you are going to use it.
To keep it simple, let's look at just inference. Training obviously needs as much vram as possible. | 2025-07-26T22:17:26 | https://www.reddit.com/r/LocalLLaMA/comments/1ma5yw4/vram_sweet_spot/ | fgoricha | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ma5yw4 | false | null | t3_1ma5yw4 | /r/LocalLLaMA/comments/1ma5yw4/vram_sweet_spot/ | false | false | self | 3 | null |
AMD MI50 @ 100€ | 0 | That's seems like good bang/buck, BUT
I am not knowledgeble about the limitations of these cards.
What works, what doesn't?
Drivers available, etc.
On what kind of platform could I use how many of these? | 2025-07-26T22:14:45 | https://www.reddit.com/r/LocalLLaMA/comments/1ma5wq2/amd_mi50_100/ | BrainOnLoan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ma5wq2 | false | null | t3_1ma5wq2 | /r/LocalLLaMA/comments/1ma5wq2/amd_mi50_100/ | false | false | self | 0 | null |
FULL Lovable Agent System Prompt and Tools [UPDATED] | 2 | (Latest update: 27/07/2025)
I've just extracted the FULL Lovable Agent system prompt and internal tools (Latest update). Over 600 lines (Around 10k tokens).
You can check it out here: [https://github.com/x1xhlol/system-prompts-and-models-of-ai-tools/tree/main/Lovable](https://github.com/x1xhlol/system-prompts-and-mod... | 2025-07-26T22:05:33 | https://www.reddit.com/r/LocalLLaMA/comments/1ma5pa3/full_lovable_agent_system_prompt_and_tools_updated/ | Independent-Box-898 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ma5pa3 | false | null | t3_1ma5pa3 | /r/LocalLLaMA/comments/1ma5pa3/full_lovable_agent_system_prompt_and_tools_updated/ | false | false | self | 2 | null |
It is cool to see an youtuber using huggingface to be funny. Another win for the open-source community | 0 | 2025-07-26T21:51:51 | https://youtu.be/OR-I8DHeB6s?si=vrPWnF6mm8dF_mNm | Time_Dust_2303 | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1ma5duc | false | {'oembed': {'author_name': 'Steve Terreberry', 'author_url': 'https://www.youtube.com/@SteveTerreberry', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/OR-I8DHeB6s?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media;... | t3_1ma5duc | /r/LocalLLaMA/comments/1ma5duc/it_is_cool_to_see_an_youtuber_using_huggingface/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'WuPpnx0Wgsm7i38gdpWgNwKHsm8MC07n3DKxKYWt0NQ', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/WuPpnx0Wgsm7i38gdpWgNwKHsm8MC07n3DKxKYWt0NQ.jpeg?width=108&crop=smart&auto=webp&s=d8d15f15d2fc66aad94f19ac44b46dd9996813e9', 'width': 108}, {'height': 162, 'url': '... | |
Qwen3 235b 0725 uses a whole lot of tokens | 0 | 2025-07-26T21:51:36 | https://www.reddit.com/r/LocalLLaMA/comments/1ma5dmr/qwen3_235b_0725_uses_a_whole_lot_of_tokens/ | GenLabsAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ma5dmr | false | null | t3_1ma5dmr | /r/LocalLLaMA/comments/1ma5dmr/qwen3_235b_0725_uses_a_whole_lot_of_tokens/ | false | false | 0 | null | ||
new to all this, best local llm for multilingual (dutch) | 2 | I just hosted a Mistral model for the first time. I tried to have it speak Dutch and it hallucinated a lot of words and grammar. What model would be a bit more seamless when instructed to speak other languages, similar to GPT-4o/Claude etc.? | 2025-07-26T21:39:04 | https://www.reddit.com/r/LocalLLaMA/comments/1ma5359/new_to_all_this_best_local_llm_for_multilingual/ | Internal_Patience297 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ma5359 | false | null | t3_1ma5359 | /r/LocalLLaMA/comments/1ma5359/new_to_all_this_best_local_llm_for_multilingual/ | false | false | self | 2 | null |
Found this Context Engineering repository - looking for feedback on the approach | 0 | Came across this repository that's trying to unify different AI context management systems: [https://github.com/pranav-tandon/ContextEngineering](https://github.com/pranav-tandon/ContextEngineering)
From what I understand, it's attempting to bring together:
* RAG (with both vector stores and knowledge graphs)
* Anthr... | 2025-07-26T21:28:39 | https://www.reddit.com/r/LocalLLaMA/comments/1ma4ugf/found_this_context_engineering_repository_looking/ | TadpoleNorth1773 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ma4ugf | false | null | t3_1ma4ugf | /r/LocalLLaMA/comments/1ma4ugf/found_this_context_engineering_repository_looking/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '_qrC1a29yuOA7fG2zQyswOiKauAxVgYm6fvC2WYxHMA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_qrC1a29yuOA7fG2zQyswOiKauAxVgYm6fvC2WYxHMA.png?width=108&crop=smart&auto=webp&s=81ea6df31a3370955768c01ec9ef2c126dcb7250', 'width': 108}, {'height': 108, 'url': 'h... |
How to handle different input types | 0 | I am working on a chatbot system that offers different services & one of the things I am wondering about is how different input file types are handled. For example, I want my agent to handle different kinds of files (docx, pdf, excel, pngs,...) and in different quantities (for example, the user uploads a folder of file... | 2025-07-26T21:21:43 | https://www.reddit.com/r/LocalLLaMA/comments/1ma4oqz/how_to_handle_different_input_types/ | Worldly-Algae7541 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ma4oqz | false | null | t3_1ma4oqz | /r/LocalLLaMA/comments/1ma4oqz/how_to_handle_different_input_types/ | false | false | self | 0 | null |
Beginner suggestions | 2 | I'm a beginner to all this, but I want to practice fine-tuning and gain general knowledge on local AI overall. Does anyone have any suggestions on where to learn? Or if there's someone with experience who's willing to share general insights, it would be greatly appreciated | 2025-07-26T20:54:46 | https://www.reddit.com/r/LocalLLaMA/comments/1ma426d/beginner_suggestions/ | Budget-Management698 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ma426d | false | null | t3_1ma426d | /r/LocalLLaMA/comments/1ma426d/beginner_suggestions/ | false | false | self | 2 | null |
ik_llama.cpp help! | 3 | I'm trying to test out ik_llama.cpp and the new Qwen3 235B non-thinking. I'm using the Unsloth UD-Q4_K_XL quant. My system has 64GB DDR4 RAM and 2x 16GB GPUs. I have previously tested this split gguf with the latest release of koboldcpp. But with ik_llama.cpp, I'm getting a memory allocation failure.
Basically I'm using mmap as I don't have... | 2025-07-26T20:54:27 | https://www.reddit.com/r/LocalLLaMA/comments/1ma41wu/ik_llamacpp_help/ | lacerating_aura | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ma41wu | false | null | t3_1ma41wu | /r/LocalLLaMA/comments/1ma41wu/ik_llamacpp_help/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'd-Wkvq34ZRzACVPwHO_DAPDTtuOd0WoROIS_xIxGbqQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/d-Wkvq34ZRzACVPwHO_DAPDTtuOd0WoROIS_xIxGbqQ.png?width=108&crop=smart&auto=webp&s=2f9eee7355afb465bfdb476b246b097f0a0153b4', 'width': 108}, {'height': 108, 'url': 'h... |
Strategy for patching llama.cpp webui - and keeping it patched? | 10 | First of all, the webui of llama.cpp has improved - thank you to all the web wizards doing this!
However, there are a few annoyances I want to change. For example, the chat window has a limited width, meaning long generated code is wrapped and hard to read. Ok, I found in index.scss:
.chat-screen {
max-wid... | 2025-07-26T20:50:38 | https://www.reddit.com/r/LocalLLaMA/comments/1ma3yps/strategy_for_patching_llamacpp_webui_and_keeping/ | a_postgres_situation | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ma3yps | false | null | t3_1ma3yps | /r/LocalLLaMA/comments/1ma3yps/strategy_for_patching_llamacpp_webui_and_keeping/ | false | false | self | 10 | null |
In Tribute to the Prince of Darkness: I Benchmarked 19 LLMs on Retrieving "Bark at the Moon" Lyrics | 24 | Hey everyone,
With the recent, heartbreaking news of Ozzy Osbourne's passing, I wanted to share a small project I did that, in its own way, pays tribute to his massive legacy.\[[1](https://www.google.com/url?sa=E&q=https%3A%2F%2Fvertexaisearch.cloud.google.com%2Fgrounding-api-redirect%2FAUZIYQGYe9VhHbzGzzG80-4PJjqJNy2... | 2025-07-26T20:47:04 | https://www.reddit.com/r/LocalLLaMA/comments/1ma3vpa/in_tribute_to_the_prince_of_darkness_i/ | celsowm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ma3vpa | false | null | t3_1ma3vpa | /r/LocalLLaMA/comments/1ma3vpa/in_tribute_to_the_prince_of_darkness_i/ | false | false | 24 | null | |
Tool calling support in Llama 3 8b | 1 | Hello guys,
So I have been developing a NL to SQL multi agent system using langgraph and llama 3:8b.
Lately I read in some places and in the official docs that the 8b version is not capable of maintaining regular conversations with tool calling.
I need some suggestions on if I should use any other version of llama which ... | 2025-07-26T20:30:32 | https://www.reddit.com/r/LocalLLaMA/comments/1ma3hmd/tool_calling_support_in_llama_3_8b/ | codingpinscher | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ma3hmd | false | null | t3_1ma3hmd | /r/LocalLLaMA/comments/1ma3hmd/tool_calling_support_in_llama_3_8b/ | false | false | self | 1 | null |
Local dual 5060 ti, qwen 3 30b full context of 40k, >60t/s | 11 | Hello all
I wanted to do a write up of my setup for anyone considering a similar choice. I know that it is not actually that cheap, but I think I get a good performance benefit. I live near a microcenter so a lot of this was purchased there.
I got the 7600x3d deal they have, but with the boost to 64 GB of RAM. Then I... | 2025-07-26T20:25:50 | https://www.reddit.com/r/LocalLLaMA/comments/1ma3dpd/local_dual_5060_ti_qwen_3_30b_full_context_of_40k/ | see_spot_ruminate | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ma3dpd | false | null | t3_1ma3dpd | /r/LocalLLaMA/comments/1ma3dpd/local_dual_5060_ti_qwen_3_30b_full_context_of_40k/ | false | false | self | 11 | null |
Task for python dev | 0 | Hello 🤗 friends!
I have a rig with 1TB RAM and one A100 80 GB. What task would you assign to a couple of Python programmers who don't have any idea about ML/LLMs, for 2 weeks to complete or to gain new skills/knowledge? | 2025-07-26T20:04:13 | https://www.reddit.com/r/LocalLLaMA/comments/1ma2vjq/task_for_python_dev/ | GoldCompetition7722 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ma2vjq | false | null | t3_1ma2vjq | /r/LocalLLaMA/comments/1ma2vjq/task_for_python_dev/ | false | false | self | 0 | null |
Now you can pull LLM models directly from the browser using XandAI extension | 4 | I've been working on an extension that allows you to use your LLM from any page in the browser; now I've added the capability of pulling and deleting models directly from the browser
If you want to help me or star my project here is the link (100% open-source):
[https://github.com/Aletech-Solutions/XandAI-Extension](htt... | 2025-07-26T19:59:11 | https://www.reddit.com/r/LocalLLaMA/comments/1ma2r4d/now_you_can_pull_llm_models_directly_from_the/ | Sea-Reception-2697 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ma2r4d | false | null | t3_1ma2r4d | /r/LocalLLaMA/comments/1ma2r4d/now_you_can_pull_llm_models_directly_from_the/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'yp9kcsLE0b5eEfAA34Kw6OoIldEMhA7bxI4wXZrUv18', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/yp9kcsLE0b5eEfAA34Kw6OoIldEMhA7bxI4wXZrUv18.png?width=108&crop=smart&auto=webp&s=3307ff783a901de9a927798ebf0e50d1d99c17fd', 'width': 108}, {'height': 108, 'url': 'h... |
Anyone else starting to feel this way when a new model 'breaks the charts' but need like 15k thinking tokens to do it? | 238 | 2025-07-26T19:49:41 | ForsookComparison | c.tenor.com | 1970-01-01T00:00:00 | 0 | {} | 1ma2j62 | false | null | t3_1ma2j62 | /r/LocalLLaMA/comments/1ma2j62/anyone_else_starting_to_feel_this_way_when_a_new/ | false | false | 238 | {'enabled': True, 'images': [{'id': 'FA18ZBqfDq7vWHLha20MJmWGIJCU3yPV68Gbmg7jV7s', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/FA18ZBqfDq7vWHLha20MJmWGIJCU3yPV68Gbmg7jV7s.gif?width=108&crop=smart&format=png8&s=c2a369b7162c594e361e845f2fc32276e74a79c4', 'width': 108}, {'height': 216, 'url': ... | |||
Claude Code Full System prompt | 125 | Someone hacked our Portkey, and Okay, this is wild: our Portkey logs just coughed up the entire system prompt + live session history for Claude Code 🤯 | 2025-07-26T19:40:04 | https://github.com/kn1026/cc/blob/main/claudecode.md | Haunting_Forever_243 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1ma2ayu | false | null | t3_1ma2ayu | /r/LocalLLaMA/comments/1ma2ayu/claude_code_full_system_prompt/ | false | false | 125 | {'enabled': False, 'images': [{'id': 'pu2JHfBlmjPIdxzwsvEnAyvx8pP2RonQunTcKJ28dB8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pu2JHfBlmjPIdxzwsvEnAyvx8pP2RonQunTcKJ28dB8.png?width=108&crop=smart&auto=webp&s=7af339a9d5dde7cda6aed95a03bb236b15425697', 'width': 108}, {'height': 108, 'url': 'h... | |
Claude Code system prompt | 3 | 2025-07-26T19:24:19 | Haunting_Forever_243 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ma1xjl | false | null | t3_1ma1xjl | /r/LocalLLaMA/comments/1ma1xjl/claude_code_system_prompt/ | false | false | default | 3 | {'enabled': True, 'images': [{'id': 'sp96nbcrr9ff1', 'resolutions': [{'height': 91, 'url': 'https://preview.redd.it/sp96nbcrr9ff1.png?width=108&crop=smart&auto=webp&s=5e1fa86579e16a44f38575309e28ed25e76fda6a', 'width': 108}, {'height': 183, 'url': 'https://preview.redd.it/sp96nbcrr9ff1.png?width=216&crop=smart&auto=web... | ||
Any new OpenSource LLM apps or websites? Such as Qwen or Deepseek? | 5 | I think I'm missing some, thanks | 2025-07-26T18:58:04 | https://www.reddit.com/r/LocalLLaMA/comments/1ma1amq/any_new_opensource_llm_apps_or_websites_such_as/ | Own-Potential-2308 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ma1amq | false | null | t3_1ma1amq | /r/LocalLLaMA/comments/1ma1amq/any_new_opensource_llm_apps_or_websites_such_as/ | false | false | self | 5 | null |
Would you kindly help | 0 | I am not a programmer and have zero coding knowledge; I only build stuff using YouTube and coding helpers like google studio and cursor.
I don't know exactly what to search to find video tutorial about this simple idea:
An AI chat like ChatGPT, Gemini, etc. that only answers from my PDF file, and I want to deploy it on my website.
Please... | 2025-07-26T18:48:48 | https://www.reddit.com/r/LocalLLaMA/comments/1ma12o2/would_you_kindly_help/ | fasto14 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ma12o2 | false | null | t3_1ma12o2 | /r/LocalLLaMA/comments/1ma12o2/would_you_kindly_help/ | false | false | self | 0 | null |
Appreciation Post - Thank you unsloth team, and thank you bartowski | 641 | Thank you so much for getting ggufs baked and delivered. It must have been a busy few days. How is it looking behind the scenes? | 2025-07-26T18:14:13 | https://www.reddit.com/r/LocalLLaMA/comments/1ma08e0/appreciation_post_thank_you_unsloth_team_and/ | fuutott | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ma08e0 | false | null | t3_1ma08e0 | /r/LocalLLaMA/comments/1ma08e0/appreciation_post_thank_you_unsloth_team_and/ | false | false | self | 641 | null |
Would this B760M motherboard support dual 2-slot GPUs? | 5 | 2025-07-26T17:55:09 | legit_split_ | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m9zrmo | false | null | t3_1m9zrmo | /r/LocalLLaMA/comments/1m9zrmo/would_this_b760m_motherboard_support_dual_2slot/ | false | false | default | 5 | {'enabled': True, 'images': [{'id': '4p4vl0xub9ff1', 'resolutions': [{'height': 132, 'url': 'https://preview.redd.it/4p4vl0xub9ff1.png?width=108&crop=smart&auto=webp&s=e758624e57207437dd6e47fee037786d773d7623', 'width': 108}, {'height': 264, 'url': 'https://preview.redd.it/4p4vl0xub9ff1.png?width=216&crop=smart&auto=we... | ||
Qwen/Alibaba Paper - Group Sequence Policy Optimization | 75 | * This paper introduces Group Sequence Policy Optimization (GSPO), our stable, efficient, and performant reinforcement learning algorithm for training large language models. Unlike previous algorithms that adopt token-level importance ratios, GSPO defines the importance ratio based on sequence likelihood and performs s... | 2025-07-26T17:19:35 | https://arxiv.org/abs/2507.18071 | Thrumpwart | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1m9ywng | false | null | t3_1m9ywng | /r/LocalLLaMA/comments/1m9ywng/qwenalibaba_paper_group_sequence_policy/ | false | false | default | 75 | null |
Free Qwen Code to speedup local work | 0 | So this is pretty neat. You can get Qwen code for free (the qwen version of claude code).
Install it, then point it at OpenRouter's free version of Qwen Coder; completely free, you get 50 requests a day. If you have $10 with them, you get 1000 free requests a day.
I've been able to troubleshoot local LLM setup stuf... | 2025-07-26T17:02:41 | https://www.reddit.com/r/LocalLLaMA/comments/1m9yhcd/free_qwen_code_to_speedup_local_work/ | I-cant_even | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9yhcd | false | null | t3_1m9yhcd | /r/LocalLLaMA/comments/1m9yhcd/free_qwen_code_to_speedup_local_work/ | false | false | self | 0 | null |
Databricks | 0 | I was reading the Databricks article on function calling ([https://docs.databricks.com/aws/en/machine-learning/model-serving/function-calling#limitations](https://docs.databricks.com/aws/en/machine-learning/model-serving/function-calling#limitations)) and noticed two main limitations:
* Multi-turn function calling is ... | 2025-07-26T16:55:18 | https://www.reddit.com/r/LocalLLaMA/comments/1m9yaku/databricks/ | Grand_Internet7254 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9yaku | false | null | t3_1m9yaku | /r/LocalLLaMA/comments/1m9yaku/databricks/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'o43qrM26RouAumS7K9OJYHQ1dEcdi9Zde9EzVhtRJew', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/o43qrM26RouAumS7K9OJYHQ1dEcdi9Zde9EzVhtRJew.png?width=108&crop=smart&auto=webp&s=8d49de4c7f610703ab1e3a00e5257abfa9646201', 'width': 108}, {'height': 113, 'url': 'h... |
I built a local-first transcribing + summarizing tool that's FREE FOREVER | 61 | Hey all,
I built a macOS app called [Hyprnote](https://hyprnote.com/) - it’s an AI-powered notepad that listens during meetings and turns your rough notes into clean, structured summaries. Everything runs locally on your Mac, so no data ever leaves your device. We even trained our own LLM for this.
We used to manual... | 2025-07-26T16:49:15 | beerbellyman4vr | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m9y5cd | false | null | t3_1m9y5cd | /r/LocalLLaMA/comments/1m9y5cd/i_built_a_localfirst_transcribing_summarizing/ | false | false | default | 61 | {'enabled': True, 'images': [{'id': '8e5rt1f209ff1', 'resolutions': [{'height': 69, 'url': 'https://preview.redd.it/8e5rt1f209ff1.jpeg?width=108&crop=smart&auto=webp&s=4874127c0e1630d96aac6797050fca479bc1ad87', 'width': 108}, {'height': 138, 'url': 'https://preview.redd.it/8e5rt1f209ff1.jpeg?width=216&crop=smart&auto=w... | |
inclusionAI/Ling-lite-1.5-2506 (16.8B total, 2.75B active, MIT license) | 104 | From the Readme: “We are excited to introduce Ling-lite-1.5-2506, the updated version of our highly capable Ling-lite-1.5 model.
Ling-lite-1.5-2506 boasts 16.8 billion parameters with 2.75 billion activated parameters, building upon its predecessor with significant advancements across the board, featuring the followin... | 2025-07-26T16:48:55 | https://huggingface.co/inclusionAI/Ling-lite-1.5-2506 | Balance- | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1m9y506 | false | null | t3_1m9y506 | /r/LocalLLaMA/comments/1m9y506/inclusionailinglite152506_168b_total_275b_active/ | false | false | default | 104 | {'enabled': False, 'images': [{'id': 'e7ctnhD9fGClQAWGTRPiUR684S9oQO734fubNQzMy7w', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/e7ctnhD9fGClQAWGTRPiUR684S9oQO734fubNQzMy7w.png?width=108&crop=smart&auto=webp&s=8296a7257d0017a5ea7dbf418ca4a2ddfb9e318d', 'width': 108}, {'height': 116, 'url': 'h... |
I built a local-first transcribing + summarizing tool that's FREE FOREVER | 1 | [deleted] | 2025-07-26T16:48:14 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1m9y4ey | false | null | t3_1m9y4ey | /r/LocalLLaMA/comments/1m9y4ey/i_built_a_localfirst_transcribing_summarizing/ | false | false | default | 1 | null | ||
Chatterbox multi hour generator | 19 |
I created an audiobook generator https://github.com/Jeremy-Harper/chatterboxPro
I’m at the point I’ve started to wire in the llama calls to start making the system smarter. I’m thinking being able to flag chapters without having them need to be in a “chapter #” format, being able to rewrite failed attempts so that it... | 2025-07-26T16:40:17 | Upbeat5840 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m9xx6w | false | null | t3_1m9xx6w | /r/LocalLLaMA/comments/1m9xx6w/chatterbox_multi_hour_generator/ | false | false | default | 19 | {'enabled': True, 'images': [{'id': '4itbo3xjy8ff1', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/4itbo3xjy8ff1.jpeg?width=108&crop=smart&auto=webp&s=8c92003d58027207c4a0ec4ff020bf077fab3271', 'width': 108}, {'height': 127, 'url': 'https://preview.redd.it/4itbo3xjy8ff1.jpeg?width=216&crop=smart&auto=w... | |
For MCP is LMstudio or Ollama better? | 1 | Or do both of them work great with all mcp servers? I have only really used mcp with claude desktop, and I especially like the knowledge graph memory server | 2025-07-26T16:39:42 | https://www.reddit.com/r/LocalLLaMA/comments/1m9xwo5/for_mcp_is_lmstudio_or_ollama_better/ | thebadslime | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9xwo5 | false | null | t3_1m9xwo5 | /r/LocalLLaMA/comments/1m9xwo5/for_mcp_is_lmstudio_or_ollama_better/ | false | false | self | 1 | null |
Crediting Chinese makers by name | 350 | I often see products put out by makers in China posted here as "China does X", either with or sometimes even without the maker being mentioned. Some examples:
* [https://www.reddit.com/r/LocalLLaMA/comments/1m9tyg9/is\_china\_the\_only\_hope\_for\_factual\_models/](https://www.reddit.com/r/LocalLLaMA/comments/1m9tyg9/... | 2025-07-26T16:39:06 | https://www.reddit.com/r/LocalLLaMA/comments/1m9xw4c/crediting_chinese_makers_by_name/ | entsnack | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9xw4c | false | null | t3_1m9xw4c | /r/LocalLLaMA/comments/1m9xw4c/crediting_chinese_makers_by_name/ | false | false | self | 350 | null |
Implemented Test-Time Diffusion Deep Researcher (TTD-DR) - Turn any local LLM into a powerful research agent with real web sources | 38 | Hey r/LocalLLaMA !
I wanted to share our implementation of TTD-DR (Test-Time Diffusion Deep Researcher) in OptILLM. This is particularly exciting for the local LLM community because it works with ANY OpenAI-compatible model - including your local llama.cpp, Ollama, or vLLM setups!
# What is TTD-DR?
TTD-DR is a cleve... | 2025-07-26T16:23:10 | https://www.reddit.com/r/LocalLLaMA/comments/1m9xi84/implemented_testtime_diffusion_deep_researcher/ | asankhs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9xi84 | false | null | t3_1m9xi84 | /r/LocalLLaMA/comments/1m9xi84/implemented_testtime_diffusion_deep_researcher/ | false | false | self | 38 | null |
one week ago I thought pip was the name of my laptop | 0 | I ran phi 2 on my i3-N305 with the help of chat gpt guiding me through what the code means and stuff, I then found a better laptop and upgraded it and I will be moving everything over to the better one, i dont code, the most nerdy techy stuff ive done before this is install ios on a galaxy s3 back in the day, i am a st... | 2025-07-26T16:05:21 | https://www.reddit.com/gallery/1m9x2j6 | ciwwa | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1m9x2j6 | false | null | t3_1m9x2j6 | /r/LocalLLaMA/comments/1m9x2j6/one_week_ago_i_thought_pip_was_the_name_of_my/ | false | false | 0 | null | |
Tencent launched AI Coder IDE CodeBuddy | 30 | 2025-07-26T16:00:21 | https://www.codebuddy.ai/ | Fun-Doctor6855 | codebuddy.ai | 1970-01-01T00:00:00 | 0 | {} | 1m9wxow | false | null | t3_1m9wxow | /r/LocalLLaMA/comments/1m9wxow/tencent_launched_ai_coder_ide_codebuddy/ | false | false | default | 30 | {'enabled': False, 'images': [{'id': '7OUcTculgBAh3sNdUJnpue5Qh2CrA7qBTTYva5FlcJE', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/7OUcTculgBAh3sNdUJnpue5Qh2CrA7qBTTYva5FlcJE.png?width=108&crop=smart&auto=webp&s=ac1aaec185a5b9968e98d912f7b126df608f0c43', 'width': 108}, {'height': 216, 'url': '... | |
My Attempt to Understand local LLM Landscape (Survey Results) | 4 | Two weeks ago, I shared a 23-question survey with my online community. With all the buzz around new model announcements and the "AGI is just around the corner" hype, I wanted to hear directly from people in the field to understand the real picture of Large Language Models (LLMs).
I'm grateful to all **26 p... | 2025-07-26T15:50:20 | https://www.reddit.com/r/LocalLLaMA/comments/1m9woxb/my_attempt_to_understand_local_llm_landscape/ | kidupstart | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9woxb | false | null | t3_1m9woxb | /r/LocalLLaMA/comments/1m9woxb/my_attempt_to_understand_local_llm_landscape/ | false | false | 4 | null | |
Access Llama in CLI with sexy UI ? | 1 | Hello, I use Gemini-CLI in the terminal and I love it.
BUT I would like to use it with my local Llama, so I'm searching for an alternative to use Llama in the CLI with a beautiful UI. Do you know a tool to do this? (I already have OpenWebUI for my wife)
Thanks | 2025-07-26T15:46:27 | https://www.reddit.com/r/LocalLLaMA/comments/1m9wlhw/access_llama_in_cli_with_sexy_ui/ | ZobiLeFourbe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9wlhw | false | null | t3_1m9wlhw | /r/LocalLLaMA/comments/1m9wlhw/access_llama_in_cli_with_sexy_ui/ | false | false | self | 1 | null |
Does anyone know how to decrease the speaking rate in ChatterboxTTs-Extented? | 1 | I see CFG/Pace, but it didn't seem to reduce the speaking rate by that much. The audio always seems to go way too quickly for me. Is there a certain syntax I can type in the dialogue box that will signify pauses? | 2025-07-26T15:38:02 | https://www.reddit.com/r/LocalLLaMA/comments/1m9wdxb/does_anyone_know_how_to_decrease_the_speaking/ | Mahtlahtli | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9wdxb | false | null | t3_1m9wdxb | /r/LocalLLaMA/comments/1m9wdxb/does_anyone_know_how_to_decrease_the_speaking/ | false | false | self | 1 | null |
Best way (if there is one) to run GLM-4.1V-9B-Thinking with vision on Windows? | 3 | - llama.cpp (and this koboldcpp, ollama, lmstudio, etc) only support text at the moment
- vLLM does not support Windows, and I'm not keen on trying my luck with WSL2
- Reference implementation is based on Transformers, so it's probably slow and without OpenAI compatible API, plus I'm not a fan of having to install al... | 2025-07-26T15:36:38 | https://www.reddit.com/r/LocalLLaMA/comments/1m9wcmx/best_way_if_there_is_one_to_run_glm41v9bthinking/ | nmkd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9wcmx | false | null | t3_1m9wcmx | /r/LocalLLaMA/comments/1m9wcmx/best_way_if_there_is_one_to_run_glm41v9bthinking/ | false | false | self | 3 | null |
HP Zbook Ultra G1A pp512/tg128 scores for unsloth/Qwen3-235B-A22B-Thinking-2507-GGUF 128gb unified RAM | 38 | I know there's people evaluating these unified memory laptops with strix halo, and thought i'd share this score of one of the most powerful recent models I've been able to fully run on this in it's GPU memory. | 2025-07-26T15:36:20 | richardanaya | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m9wcdc | false | null | t3_1m9wcdc | /r/LocalLLaMA/comments/1m9wcdc/hp_zbook_ultra_g1a_pp512tg128_scores_for/ | false | false | 38 | {'enabled': True, 'images': [{'id': 'hTI02BtcwQ3ImvqcxKtO1KJp0NgCbHF7uxJBlwhB9lM', 'resolutions': [{'height': 16, 'url': 'https://preview.redd.it/civzaw3fm8ff1.png?width=108&crop=smart&auto=webp&s=ab7f0602ba5ff034d00d05ebfec9dd9a987d804c', 'width': 108}, {'height': 33, 'url': 'https://preview.redd.it/civzaw3fm8ff1.png?... | ||
What are you using to access your local LLMs when you're out and about? | 1 | [removed] | 2025-07-26T15:36:08 | https://www.reddit.com/r/LocalLLaMA/comments/1m9wc6h/what_are_you_using_to_access_your_local_llms_when/ | anderspitman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9wc6h | false | null | t3_1m9wc6h | /r/LocalLLaMA/comments/1m9wc6h/what_are_you_using_to_access_your_local_llms_when/ | false | false | self | 1 | null |
Open source AI presentation generator with custom layouts support for custom presentation design | 20 | Presenton is an open source AI presentation generator that can run locally over Ollama.
Presenton now supports custom AI layouts. Create custom templates with HTML, Tailwind, and Zod for the schema, then use them to create presentations with AI.
We've added a lot more improvements with this release on Presenton:
* Stunning... | 2025-07-26T15:22:48 | goodboydhrn | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m9w0k8 | false | null | t3_1m9w0k8 | /r/LocalLLaMA/comments/1m9w0k8/open_source_ai_presentation_generator_with_custom/ | false | false | default | 20 | {'enabled': True, 'images': [{'id': '64z0fr7dj8ff1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/64z0fr7dj8ff1.gif?width=108&crop=smart&format=png8&s=cf16b61187ee06e473e1e9ff022978c1717ff4a8', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/64z0fr7dj8ff1.gif?width=216&crop=smart&format... | |
Does anyone know how to get an Nvidia RTX 4090 (gigabyte) to work in a headless server | 0 | System:
AMD Epyc 9354p cpu
Supermicro H13SSL-N motherboard
Headless server
Fans spin upon turning on. The GPU is recognized with Nouveau, but fails to work with Nouveau blacklisted and the NVIDIA drivers installed.
Happy to have Cursor Claude provide more details if needed since it couldn’t figure it out yet. | 2025-07-26T15:12:10 | https://www.reddit.com/r/LocalLLaMA/comments/1m9vr84/does_anyone_know_how_to_get_an_nvidia_rtx_4090/ | Fearless-Image-1421 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9vr84 | false | null | t3_1m9vr84 | /r/LocalLLaMA/comments/1m9vr84/does_anyone_know_how_to_get_an_nvidia_rtx_4090/ | false | false | self | 0 | null |
BEST AND FREE AI AGENT [FREE] | 0 | BEST AI FOR CODERS
USE THE LINK BELOW TO GET 500 FREE CREDITS
LINK https://manus.im/invitation/BHUMII6RKMMR | 2025-07-26T15:07:30 | https://www.reddit.com/r/LocalLLaMA/comments/1m9vn5j/best_and_free_ai_agent_free/ | CompetitiveMaize5127 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9vn5j | false | null | t3_1m9vn5j | /r/LocalLLaMA/comments/1m9vn5j/best_and_free_ai_agent_free/ | false | false | self | 0 | null |
The few guessers still believe DeepSeek will trump Qwen | 0 | [https://manifold.markets/Sss19971997/best-open-weight-llm-by-eoy-2025-by?r=TWFydGluVmxhY2g](https://manifold.markets/Sss19971997/best-open-weight-llm-by-eoy-2025-by?r=TWFydGluVmxhY2g) , adding signal welcome. | 2025-07-26T15:06:37 | https://www.reddit.com/r/LocalLLaMA/comments/1m9vmdh/the_few_guessers_still_believe_deepseek_will/ | uhuge | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9vmdh | false | null | t3_1m9vmdh | /r/LocalLLaMA/comments/1m9vmdh/the_few_guessers_still_believe_deepseek_will/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'WzweJxnI6C6xkv61kPyDoXiohqlAEC2GHEMNOtNIm1I', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/WzweJxnI6C6xkv61kPyDoXiohqlAEC2GHEMNOtNIm1I.png?width=108&crop=smart&auto=webp&s=9e2a97b64fd6b8a0e983e44e412e4b97d557f957', 'width': 108}, {'height': 113, 'url': 'h... |
BEST AI FOR CODERS | 0 | BEST AI FOR CODERS
USE THE LINK BELOW TO GET 500 FREE CREDITS
LINK https://manus.im/invitation/BHUMII6RKMMR | 2025-07-26T15:02:49 | https://www.reddit.com/r/LocalLLaMA/comments/1m9vj11/best_ai_for_coders/ | CompetitiveMaize5127 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9vj11 | false | null | t3_1m9vj11 | /r/LocalLLaMA/comments/1m9vj11/best_ai_for_coders/ | false | false | self | 0 | null |
LLM Agents - A different example | 0 | Kind of tired with get-weather-api and travel booking example for LLM agents. So wrote this example. Let me know what you guys think. Thanks!! | 2025-07-26T14:54:51 | https://transformersandtheiravatars.substack.com/p/llm-agents-a-different-example | ZucchiniCalm4617 | transformersandtheiravatars.substack.com | 1970-01-01T00:00:00 | 0 | {} | 1m9vbzf | false | null | t3_1m9vbzf | /r/LocalLLaMA/comments/1m9vbzf/llm_agents_a_different_example/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'tqf02y70XtEyyuO1cPsSLIGkOsYhmiLJA6AVH5it3fg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tqf02y70XtEyyuO1cPsSLIGkOsYhmiLJA6AVH5it3fg.jpeg?width=108&crop=smart&auto=webp&s=d98fa4a73d2060d66c20e6f75c798bd3c4e5147d', 'width': 108}, {'height': 108, 'url': '... | |
Quad 4090 48GB + 768GB DDR5 in Jonsbo N5 case | 524 | My own personal desktop workstation.
Specs:
1. GPUs -- Quad 4090 48GB (Roughly 3200 USD each, 450 watts max energy use)
2. CPUs -- Intel 6530 32 Cores Emerald Rapids (1350 USD)
3. Motherboard -- Tyan S5652-2T (836 USD)
4. RAM -- eight sticks of M321RYGA0PB0-CWMKH 96GB (768GB total, 470 USD per stick)
5. Case -- Jonsb... | 2025-07-26T14:37:40 | https://www.reddit.com/gallery/1m9uwxg | 44seconds | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1m9uwxg | false | null | t3_1m9uwxg | /r/LocalLLaMA/comments/1m9uwxg/quad_4090_48gb_768gb_ddr5_in_jonsbo_n5_case/ | false | false | 524 | null | |
What's the fastest backend for local long context (100k+)? | 6 | Been out of the scene for the past few months.
Should I use lmstudio? ollama? llamacpp?
Or ik\_llama? vllm? lmdeploy?
I have a 4090 + 96 GB of ram and Ryzen 9 7900 and my goal is to hit 100k context with pp times <5 seconds and models 7B to 32B. Possible? | 2025-07-26T14:27:25 | https://www.reddit.com/r/LocalLLaMA/comments/1m9uoa7/whats_the_fastest_backend_for_local_long_context/ | trithilon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9uoa7 | false | null | t3_1m9uoa7 | /r/LocalLLaMA/comments/1m9uoa7/whats_the_fastest_backend_for_local_long_context/ | false | false | self | 6 | null |
Best non-thinking model which can be a long context personal assistant? | 13 | Been using GPT-4o for most of my daily queries - my main usecase is to map my thoughts, some of this stuff is sensitive so I need a local solution.
I REALLY like the tone of GPT-4o (yeah, I am a sucker for glazing!)
What would be the best model to use for this usecase?
I am thinking 13-32B models which are un... | 2025-07-26T14:22:57 | https://www.reddit.com/r/LocalLLaMA/comments/1m9ukpw/best_nonthinking_model_which_can_be_a_long/ | trithilon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9ukpw | false | null | t3_1m9ukpw | /r/LocalLLaMA/comments/1m9ukpw/best_nonthinking_model_which_can_be_a_long/ | false | false | self | 13 | null |
Anything as fast as Qwen A3B? | 7 | I run an LLM for home use, like sorting big text files. Nothing fancy, just more or less boring administrative stuff. I use Qwen3-30B-A3B-128K-UD-Q6_K_XL for this (by Unsloth) in a CPU-only environment (Mini PC with Ryzen and 64GB RAM). I can load and use about 55GB of RAM, so e.g. a 45GB LLM + 8GB for data aka context... | 2025-07-26T14:21:57 | https://www.reddit.com/r/LocalLLaMA/comments/1m9ujwe/anything_as_fast_as_qwen_a3b/ | Pogo4Fufu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9ujwe | false | null | t3_1m9ujwe | /r/LocalLLaMA/comments/1m9ujwe/anything_as_fast_as_qwen_a3b/ | false | false | self | 7 | null
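The reason an A3B model is so fast on CPU: token generation is roughly memory-bandwidth-bound, and a MoE only reads its active experts (~3B parameters for A3B) per token rather than all 30B. A rough estimate, where the 60 GB/s effective dual-channel DDR5 bandwidth is an assumption:

```python
# Rough CPU decode-speed estimate: bandwidth / bytes read per token.
bandwidth_bps = 60e9          # assumed effective memory bandwidth (bytes/s)
bytes_per_weight = 6.56 / 8   # ~Q6_K average of 6.56 bits per weight

active_params = 3e9           # Qwen3-30B-A3B: ~3B active params per token
dense_params = 30e9           # a dense 30B would read everything

tok_s_moe = bandwidth_bps / (active_params * bytes_per_weight)
tok_s_dense = bandwidth_bps / (dense_params * bytes_per_weight)
print(round(tok_s_moe, 1), round(tok_s_dense, 1))  # ~24 vs ~2 tokens/s
```

This is why a dense model of comparable quality in the same RAM budget would feel roughly an order of magnitude slower on this hardware.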
Study reports AI Coding Tools Underperform | 57 | These results resonate with my experience. Sometimes AI is really helpful, sometimes it feels like fixing the code produced by AI and instructing it to do what I want takes more time than doing it without AI. What’s your experience? | 2025-07-26T13:57:39 | https://www.infoq.com/news/2025/07/ai-productivity/ | Additional_Cellist46 | infoq.com | 1970-01-01T00:00:00 | 0 | {} | 1m9tzxx | false | null | t3_1m9tzxx | /r/LocalLLaMA/comments/1m9tzxx/study_reports_ai_coding_tools_underperform/ | false | false | default | 57 | {'enabled': False, 'images': [{'id': 'FTknjH5zNA3fuRilaClS5Q8z5FrmNEZ7xvtzwWRBrdk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/FTknjH5zNA3fuRilaClS5Q8z5FrmNEZ7xvtzwWRBrdk.jpeg?width=108&crop=smart&auto=webp&s=9975393d84c12f218747a76b7fcbd57fed6b6f85', 'width': 108}, {'height': 113, 'url': '...
Is China the only hope for factual models? | 30 | I am wondering everyone's opinions on truth-seeking, accurate models that we could have that actually won't self-censor somehow. We know that the **Chinese Models** are very, very good at not saying anything against the Chinese Government but work great when talking about anything else in Western civilization. We also know... | 2025-07-26T13:55:46 | https://www.reddit.com/r/LocalLLaMA/comments/1m9tyg9/is_china_the_only_hope_for_factual_models/ | Meme_Lord_Musk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9tyg9 | false | null | t3_1m9tyg9 | /r/LocalLLaMA/comments/1m9tyg9/is_china_the_only_hope_for_factual_models/ | false | false | self | 30 | null
Honest release notes from non-proprietary model developer | 0 | ”Hey, so I developed/forked this new AI model/llm/image/video gen. It’s open source and open weight with a hundred trillion parameters, so you only need like 500xH100 80 GB to run inference, but it’s 100% free, open source and open weight!
It’s also available on hugging face for FREE with a 24h queue time if it works ... | 2025-07-26T13:51:33 | https://www.reddit.com/r/LocalLLaMA/comments/1m9tv80/honest_release_notes_from_nonproprietary_model/ | AI-On-A-Dime | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9tv80 | false | null | t3_1m9tv80 | /r/LocalLLaMA/comments/1m9tv80/honest_release_notes_from_nonproprietary_model/ | false | false | self | 0 | null |
Phi-4-mini-reasoning: An example of "overfitting to think" | 12 | Sometimes, you can overfit a model to think *too* deeply. There seems to be a balance required for a model to break a problem down step-by-step, but not overthink it. I find that Phi-4 is good at problem solving and thinking analytically, but doesn't understand when something *isn't a problem*. Not everything is a prob... | 2025-07-26T13:48:52 | https://www.reddit.com/r/LocalLLaMA/comments/1m9tt3o/phi4minireasoning_an_example_of_overfitting_to/ | a-c-19-23 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9tt3o | false | null | t3_1m9tt3o | /r/LocalLLaMA/comments/1m9tt3o/phi4minireasoning_an_example_of_overfitting_to/ | false | false | self | 12 | null |
Scaling Inference To Billions of Users And Agents | 7 | Hey folks,
Just published a deep dive on the full infrastructure stack required to scale LLM inference to billions of users and agents. It goes beyond a single engine and looks at the entire system.
Highlights:
* GKE Inference Gateway: How it cuts tail latency by 60% & boosts throughput 40% with model-aware routing ... | 2025-07-26T13:41:47 | https://www.reddit.com/r/LocalLLaMA/comments/1m9tnj5/scaling_inference_to_billions_of_users_and_agents/ | m4r1k_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9tnj5 | false | null | t3_1m9tnj5 | /r/LocalLLaMA/comments/1m9tnj5/scaling_inference_to_billions_of_users_and_agents/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': '7JS3Ya3b6rSOwFZeMG-XxVZkJJv_W5OFh2T4WbpzeG0', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/7JS3Ya3b6rSOwFZeMG-XxVZkJJv_W5OFh2T4WbpzeG0.png?width=108&crop=smart&auto=webp&s=58f812f9bd6c9bebf70c1af318f89680a57d02cc', 'width': 108}, {'height': 216, 'url': '... |
Local Machine setup | 2 | Hello all!
I'm comparatively new to local AI, but I'm interested in a project of mine that would require a locally hosted AI for inference based on a lot of files with RAG (or at least that's how I envision it at the moment).
The use case would be to automatically create "summaries" based on the files in RAG. So no chat and... | 2025-07-26T13:34:19 | https://www.reddit.com/r/LocalLLaMA/comments/1m9thq6/local_machine_setup/ | Bloodorem | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9thq6 | false | null | t3_1m9thq6 | /r/LocalLLaMA/comments/1m9thq6/local_machine_setup/ | false | false | self | 2 | null
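The chat-free pipeline described here is essentially: embed the files, retrieve the chunks relevant to a topic, and hand them to the model for a summary. A toy sketch of the retrieval step using bag-of-words cosine similarity (a real setup would swap in a proper sentence-embedding model; the documents and query below are made up):

```python
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words "embedding"; a real RAG pipeline would use a
    # sentence-embedding model here instead.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "invoice for office chairs",
    "meeting notes about hiring",
    "invoice for laptops",
]
index = [(doc, embed(doc)) for doc in docs]

def retrieve(query, k=2):
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

# The retrieved chunks would then go to the local model for summarization.
print(retrieve("invoice laptops"))
```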
I get "No LLMS yet" error even tho I have an LLM in LM Studio | 0 | Hello, the problem is like I said in the title.
I downloaded DeepSeek R1, specifically this: deepseek/deepseek-r1-0528-qwen3-8b
Then I tried to load it, but the app says "No LLMs yet" and asks me to download one, even though I already downloaded DeepSeek. I checked the files and it's there. I also checked the "My M... | 2025-07-26T13:29:18 | https://www.reddit.com/r/LocalLLaMA/comments/1m9tdts/i_get_no_llms_yet_error_even_tho_i_have_an_llm_in/ | aldebaran38 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9tdts | false | null | t3_1m9tdts | /r/LocalLLaMA/comments/1m9tdts/i_get_no_llms_yet_error_even_tho_i_have_an_llm_in/ | false | false | self | 0 | null
Multimodal RAG | 1 | So what I got from it is that multimodal RAG always needs an associated caption for each image or group of images, and the similarity search will always run on these image captions, not on the images themselves.
Please correct me if I am wrong. | 2025-07-26T13:17:19 | https://www.reddit.com/r/LocalLLaMA/comments/1m9t4ek/multimodal_rag/ | IndependentTough5729 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9t4ek | false | null | t3_1m9t4ek | /r/LocalLLaMA/comments/1m9t4ek/multimodal_rag/ | false | false | self | 1 | null |
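One correction to the premise, since the post asks for one: caption-based retrieval is only one option. CLIP-style models embed images and text into a shared vector space, so a text query can be matched against image embeddings directly, with no captions involved. A sketch with made-up embedding vectors standing in for real CLIP outputs:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Made-up 4-d "CLIP" embeddings: images and text live in one shared space,
# so a text query ranks images directly; no captions required.
image_embs = {
    "cat.jpg": [0.9, 0.1, 0.0, 0.1],
    "dog.jpg": [0.1, 0.9, 0.1, 0.0],
}
query_emb = [0.8, 0.2, 0.1, 0.0]  # pretend this encodes "a photo of a cat"

best = max(image_embs, key=lambda name: cosine(query_emb, image_embs[name]))
print(best)
```

Captions (or VLM-generated descriptions) are still useful when you want a text-only LLM downstream, but the similarity search itself does not require them.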