title (string) | score (int64) | selftext (string) | created (timestamp[ns]) | url (string) | author (string) | domain (string) | edited (timestamp[ns]) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int64) | preview (string) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
How does Mistral Medium 3.1 fare? | 7 | Anyone have a chance to try? Curious how it compares to Qwen 3s. | 2025-08-14T07:31:30 | https://www.reddit.com/r/LocalLLaMA/comments/1mptv5t/how_does_mistral_medium_31_fare/ | Chance-Studio-8242 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mptv5t | false | null | t3_1mptv5t | /r/LocalLLaMA/comments/1mptv5t/how_does_mistral_medium_31_fare/ | false | false | self | 7 | null |
Hardware suggestion for a local LLM machine | 1 | After having played around with Ollama and OpenWebUI a bit, I was thinking of getting a somewhat beefier setup to speed things up and run larger models.
If you want to reasonably run DeepSeek R1 at 70B what kind of hardware would you need?
Thanks in advance for your replies. | 2025-08-14T07:30:45 | https://www.reddit.com/r/LocalLLaMA/comments/1mptuq7/hardware_suggestion_for_a_local_llm_machine/ | MatthKarl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mptuq7 | false | null | t3_1mptuq7 | /r/LocalLLaMA/comments/1mptuq7/hardware_suggestion_for_a_local_llm_machine/ | false | false | self | 1 | null |
2x 5090 or 4x? vLLM PCIe enough? | 2 | Hi,
Is anyone running 2x or more 5090s in tensor parallel 2 or 4 with PCIe 5.0 x16?
Need to know: will the PCIe bandwidth be a bottleneck? | 2025-08-14T07:12:21 | https://www.reddit.com/r/LocalLLaMA/comments/1mptjpq/2x_5090_or_4x_vllm_pcie_enough/ | Rich_Artist_8327 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mptjpq | false | null | t3_1mptjpq | /r/LocalLLaMA/comments/1mptjpq/2x_5090_or_4x_vllm_pcie_enough/ | false | false | self | 2 | null |
Building a demo agent w tech like Frigade AI — need advice on the best approach | 1 | I’ve been looking into [Frigade AI](https://frigade.ai/) — they basically crawl SaaS products for days using LLMs, mapping out the semantics and steps of workflows so they can “understand” the product like a human would. After this training, a user can ask for help and the system can walk them through tasks in the prod... | 2025-08-14T07:06:24 | https://www.reddit.com/r/LocalLLaMA/comments/1mptg67/building_a_demo_agent_w_tech_like_frigade_ai_need/ | BulkyAd7044 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mptg67 | false | null | t3_1mptg67 | /r/LocalLLaMA/comments/1mptg67/building_a_demo_agent_w_tech_like_frigade_ai_need/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'KWI5Hp5bGqiwwcR257LxLrPTaoiAXY3cDEuTNTu4ZU8', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/KWI5Hp5bGqiwwcR257LxLrPTaoiAXY3cDEuTNTu4ZU8.png?width=108&crop=smart&auto=webp&s=aa24f284691a318cd6b7bc5ce078f8de1fead6ef', 'width': 108}, {'height': 113, 'url': 'h... |
LLM Memory and AI Agent Memory conflicts on naming | 0 | I made a post earlier discussing AI Agent Memory but got negative responses about the definition of "memory". It's really just a naming conflict between the two areas (LLM vs AI Agent).
AI Agent Memory is also widely adopted as mem0, the YC startup has 30k+ stars with such naming [https://github.com/mem0ai/mem0](https... | 2025-08-14T06:59:19 | https://www.reddit.com/r/LocalLLaMA/comments/1mptbpo/llm_memory_and_ai_agent_memory_conflicts_on_naming/ | JadedBlackberry1804 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mptbpo | false | null | t3_1mptbpo | /r/LocalLLaMA/comments/1mptbpo/llm_memory_and_ai_agent_memory_conflicts_on_naming/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'om8WeuQT5CQaxNQywnMXcH7ze_d6QiAbg062bXlQY0c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/om8WeuQT5CQaxNQywnMXcH7ze_d6QiAbg062bXlQY0c.png?width=108&crop=smart&auto=webp&s=7dd3136b0588347db7dd97828fb6704877d43667', 'width': 108}, {'height': 108, 'url': 'h... |
Can I do training on DGX Spark? | 1 | Our lab is trying to get some new GPUs. Should we get a DGX Spark? I heard that it is made for inference and not training. Or should we get the RTX 6000? | 2025-08-14T06:40:43 | https://www.reddit.com/r/LocalLLaMA/comments/1mpt04x/can_i_do_training_on_dgx_spark/ | Striking-Warning9533 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpt04x | false | null | t3_1mpt04x | /r/LocalLLaMA/comments/1mpt04x/can_i_do_training_on_dgx_spark/ | false | false | self | 1 | null |
The guy getting 15+ downvotes for posting on Vercel | 0 | I am sorry about not reading Vercel's rules carefully, but I really want to let people know that I created a pure LLM memory package for managing conversations. Hope you find it useful!
[https://github.com/GeLi2001/Memoer](https://github.com/GeLi2001/Memoer) | 2025-08-14T06:26:55 | https://www.reddit.com/r/LocalLLaMA/comments/1mpsriw/the_guy_getting_15_downvote_for_posting_on_vercel/ | JadedBlackberry1804 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpsriw | false | null | t3_1mpsriw | /r/LocalLLaMA/comments/1mpsriw/the_guy_getting_15_downvote_for_posting_on_vercel/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'IA74lxjGNNpbytH5f-4i7dRC-P85SujzUn8GMbDKv0I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/IA74lxjGNNpbytH5f-4i7dRC-P85SujzUn8GMbDKv0I.png?width=108&crop=smart&auto=webp&s=a6273c7150e2b4d6552638c78ef4f26ee11b43eb', 'width': 108}, {'height': 108, 'url': 'h... |
Made a 64GB VRAM machine, what models should I run? | 5 | I recently got all the parts and built a PC with a Ryzen 5 3600, 2x 32 GB of DDR4-3600 RAM, and 2 MI50 32GBs.
I'm still looking into how to build llama.cpp from source and use Ubuntu, so right now I'm just running Windows 11 with LM Studio. ROCm doesn't seem to work with my current vBIOS, so there are still quite a f...
Hey, a newbie here. Can anyone suggest a voice-cloning TTS which can give voice in real time? | 0 | I'm trying a project where I need to integrate a voice-cloning TTS, but whichever ones I have used till now have given me voices, either bad or good, only after something like a 10-15 minute buffer. I need something faster. Please help! | 2025-08-14T06:03:40 | https://www.reddit.com/r/LocalLLaMA/comments/1mpscw8/hey_a_newbie_here_can_anyone_suggest_me_a_voice/ | 09Concept09 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpscw8 | false | null | t3_1mpscw8 | /r/LocalLLaMA/comments/1mpscw8/hey_a_newbie_here_can_anyone_suggest_me_a_voice/ | false | false | self | 0 | null |
lm-studio RAG only does 3 citations? | 2 | I’ve tried lm-studio with a few models and they all only have 3 citations. Is it a setting I need to change? | 2025-08-14T06:03:36 | https://www.reddit.com/r/LocalLLaMA/comments/1mpscup/lmstudio_rag_only_does_3_citations/ | InsideYork | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpscup | false | null | t3_1mpscup | /r/LocalLLaMA/comments/1mpscup/lmstudio_rag_only_does_3_citations/ | false | false | self | 2 | null |
What model would you guys recommend for a text extraction task? | 0 | I'm undertaking a project at work that uses an OCR tool to convert PDFs (with either handwritten or typed text) into JSON or Markdown. We then need to extract specific details from the OCR output and further transform that into JSON or append it to a CSV.
The OCR component is already done.
We now need a local model ... | 2025-08-14T05:54:18 | https://www.reddit.com/r/LocalLLaMA/comments/1mps6vz/what_model_would_you_guys_recommend_for_a_text/ | anirakdream | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mps6vz | false | null | t3_1mps6vz | /r/LocalLLaMA/comments/1mps6vz/what_model_would_you_guys_recommend_for_a_text/ | false | false | self | 0 | null |
I created a script that lets models create slideshow presentations and turn it into a video. Works with any openai compatible endpoint (that includes local) | 5 | The above example was created by Qwen 3 Coder. I find GLM 4.5 is really good at this, on par with GPT-5 and Claude Sonnet.
[https://github.com/Rolandjg/GPT-presentation](https://github.com/Rolandjg/GPT-presentation) | 2025-08-14T05:52:55 | https://v.redd.it/sh7eh4f5cxif1 | Orb58 | /r/LocalLLaMA/comments/1mps60q/i_created_a_script_that_lets_models_create/ | 1970-01-01T00:00:00 | 0 | {} | 1mps60q | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/sh7eh4f5cxif1/DASHPlaylist.mpd?a=1757872381%2CMDZhMTRhNWIwMDZhNGZjZTNhMDdhNzQ1NzI5Yjk5ZWQ0YTc5ZjNmZjZiMjBkNDQ2YzFhMjBhMWUwOWEyYTU2NA%3D%3D&v=1&f=sd', 'duration': 276, 'fallback_url': 'https://v.redd.it/sh7eh4f5cxif1/DASH_1080.mp4?source=fallback', '... | t3_1mps60q | /r/LocalLLaMA/comments/1mps60q/i_created_a_script_that_lets_models_create/ | false | false | 5 | {'enabled': False, 'images': [{'id': 'MGZ2Z3k0ZjVjeGlmMSzJ9NWc4C-Fz-TqYJyRWMLFOKvzkJ4EDNPsCGIb3GOu', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MGZ2Z3k0ZjVjeGlmMSzJ9NWc4C-Fz-TqYJyRWMLFOKvzkJ4EDNPsCGIb3GOu.png?width=108&crop=smart&format=pjpg&auto=webp&s=7c6b067f73a821de75d4cfb50511815a61c68... | |
I want to start with fine-tuning LLMs, but I am not sure what to fine-tune them on. Any fun ideas to get me hooked? | 1 | I am into fiction reading. Should I fine-tune one to write uncensored fiction? Or anything else? There are already uncensored models though... | 2025-08-14T05:41:18 | https://www.reddit.com/r/LocalLLaMA/comments/1mprygt/i_want_to_start_with_finetuning_llms_but_i_am_not/ | Fast-Cauliflower-331 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mprygt | false | null | t3_1mprygt | /r/LocalLLaMA/comments/1mprygt/i_want_to_start_with_finetuning_llms_but_i_am_not/ | false | false | self | 1 | null |
Complete Data Science Roadmap 2025 (Step-by-Step Guide) | 0 | From my own journey breaking into Data Science, I compiled everything I’ve learned into a structured roadmap — covering the essential skills from core Python to ML to advanced Deep Learning, NLP, GenAI, and more.
🔗 [**Data Science Roadmap 2025 🔥 | Step-by-Step Guide to Become a Data Scientist (Beginner to Pr**o)](ht... | 2025-08-14T05:40:58 | https://www.reddit.com/r/LocalLLaMA/comments/1mpry9t/complete_data_science_roadmap_2025_stepbystep/ | SKD_Sumit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpry9t | false | null | t3_1mpry9t | /r/LocalLLaMA/comments/1mpry9t/complete_data_science_roadmap_2025_stepbystep/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'hN80amFl5325f4BooLG8vF3hyrlSuyxeP6gE5inPSyA', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/hN80amFl5325f4BooLG8vF3hyrlSuyxeP6gE5inPSyA.jpeg?width=108&crop=smart&auto=webp&s=2613b9459d2236dc0bb1f4fa754f15e4790a460d', 'width': 108}, {'height': 162, 'url': '... |
Since Design Arena was released 6 weeks ago, DeepSeek R1-0528 has had a foothold in the top 5 despite being open source. How good do you think R2 will be? | 71 | There's rumors that R2 is coming up sometime in the next month. It does feel that the release of the recent proprietary models have been a bit disappointing, given the marginal gains (e.g. on my [frontend benchmark](https://www.designarena.ai/), GPT-5, Opus 4, and 4.1 are basically equivalent though there's a small sam... | 2025-08-14T05:38:43 | Accomplished-Copy332 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mprwv9 | false | null | t3_1mprwv9 | /r/LocalLLaMA/comments/1mprwv9/since_design_arena_was_released_6_weeks_ago/ | false | false | default | 71 | {'enabled': True, 'images': [{'id': 'ydbnycjn8xif1', 'resolutions': [{'height': 66, 'url': 'https://preview.redd.it/ydbnycjn8xif1.png?width=108&crop=smart&auto=webp&s=06c96debb7d301c940e660652eefcab1c63c810f', 'width': 108}, {'height': 133, 'url': 'https://preview.redd.it/ydbnycjn8xif1.png?width=216&crop=smart&auto=web... | |
Devs: Devstral VS Qwen3-30b/GPT-OSS? | 12 | I'm just reaching out for anyone with first-hand experience in real-world coding tasks comparing the dense Devstral Small and the light MoE.
I know there are benchmarks, but real-world experience tends to be better. If you've played with both, what's your advice? Mainly Python and some JS stuff.
Tooling support would be c... | 2025-08-14T05:04:57 | https://www.reddit.com/r/LocalLLaMA/comments/1mprb4d/devs_devstral_vs_qwen330bgptoss/ | mitchins-au | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mprb4d | false | null | t3_1mprb4d | /r/LocalLLaMA/comments/1mprb4d/devs_devstral_vs_qwen330bgptoss/ | false | false | self | 12 | null |
Who are the 57 million people who downloaded bert last month? | 366 | 2025-08-14T04:49:28 | Pro-editor-1105 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mpr0nc | false | null | t3_1mpr0nc | /r/LocalLLaMA/comments/1mpr0nc/who_are_the_57_million_people_who_downloaded_bert/ | false | false | 366 | {'enabled': True, 'images': [{'id': '_U43MD8GaN9kCCs7ELOUu8InUOTf-0YEyRSrzAs6wKw', 'resolutions': [{'height': 21, 'url': 'https://preview.redd.it/vk2njmk01xif1.png?width=108&crop=smart&auto=webp&s=b8916f223ee84f1e5bfe062fb8e189f176e2f662', 'width': 108}, {'height': 43, 'url': 'https://preview.redd.it/vk2njmk01xif1.png?... | |||
Models tested on the RX 7900 XT in LM Studio | 5 | |Model Name|Prompt|tok/sec|Token Count|Time to First Token|Reasoning Effort|Quantization|Model Type|
|:-|:-|:-|:-|:-|:-|:-|:-|
|Qwen3 4B|Tell me a story|161.85|952|0.01s|None|Q4\_K\_M|Dense|
|GPT-OSS 20B|//|106.84|855|0.10s|Low|MXFP4|MoE (8 Experts)|
|GPT-OSS 20B|//|104.32|1678|0.10s|Medium|MXFP4|MoE (8 Experts)|
|GP... | 2025-08-14T04:42:42 | https://www.reddit.com/r/LocalLLaMA/comments/1mpqw3n/models_tested_on_the_rx_7900_xt_in_lm_studio/ | Reaper_9382 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpqw3n | false | null | t3_1mpqw3n | /r/LocalLLaMA/comments/1mpqw3n/models_tested_on_the_rx_7900_xt_in_lm_studio/ | false | false | self | 5 | null |
ROCM vs VULKAN FOR AMD GPU (RX7800XT) | 1 | I have been using **lm studio** in **ubuntu 24.04.2 desktop** with my **RX7800XT** GPU (16GB VRAM).
I found that the **llama.cpp vulkan** runtime gives me better inference speed.
I tried with **llama.cpp rocm** runtime and only got better speed on IBM's "**granite 4.0 tiny preview**" than vulkan.
Are you using vulka... | 2025-08-14T04:39:28 | https://www.reddit.com/r/LocalLLaMA/comments/1mpqtyo/rocm_vs_vulkan_for_amd_gpu_rx7800xt/ | Grouchy-Drag-2281 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpqtyo | false | null | t3_1mpqtyo | /r/LocalLLaMA/comments/1mpqtyo/rocm_vs_vulkan_for_amd_gpu_rx7800xt/ | false | false | self | 1 | null |
Pruned GPT-OSS 6.0B kinda works | 62 | 2025-08-14T04:16:29 | https://huggingface.co/AmanPriyanshu/gpt-oss-6.0b-specialized-all-pruned-moe-only-7-experts | Quiet-Engineer110 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1mpqew3 | false | null | t3_1mpqew3 | /r/LocalLLaMA/comments/1mpqew3/pruned_gptoss_60b_kinda_works/ | false | false | 62 | {'enabled': False, 'images': [{'id': 'aaoKLInTgXWvAC3h_YKai0S41TEi4sEQ5dlZR6riJuY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/aaoKLInTgXWvAC3h_YKai0S41TEi4sEQ5dlZR6riJuY.png?width=108&crop=smart&auto=webp&s=70967b0a200469ed0bf4c0eceb61ca19684540b9', 'width': 108}, {'height': 116, 'url': 'h... | ||
[PSA] Scaling to TP=8 gives 400% MBU (470% faster tg) on midrange GPUs (2080ti 616.0 GB/s -> 2660.0 GB/s) | 0 |
@2:27 The video makes an important point, with the right amount of compute power, memory bandwidth, interconnection bandwidth, and software, you have 400% faster single-stream token generation compared against your one gpu's ideal max bandwidth utilization. **Consider this when weighing a purchase of a number identic... | 2025-08-14T04:16:07 | https://v.redd.it/yq8fv0yerwif1 | Aaaaaaaaaeeeee | /r/LocalLLaMA/comments/1mpqemy/psa_scaling_to_tp8_gives_400_mbu_470_faster_tg_on/ | 1970-01-01T00:00:00 | 0 | {} | 1mpqemy | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/yq8fv0yerwif1/DASHPlaylist.mpd?a=1757866575%2CMzRhMmI3YTJkYzFmODBiMWI1YTkxM2QzNzhjNzY5NTg4NzNhNjlhNTMyMzVmOTNkNjJiNWU1ZTcwZDA3MGQ4OQ%3D%3D&v=1&f=sd', 'duration': 237, 'fallback_url': 'https://v.redd.it/yq8fv0yerwif1/DASH_720.mp4?source=fallback', 'h... | t3_1mpqemy | /r/LocalLLaMA/comments/1mpqemy/psa_scaling_to_tp8_gives_400_mbu_470_faster_tg_on/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'c2lncnQ4eGVyd2lmMa3_YFq0fsUcLJjW-cXATziVsktKQ4ou0HgNQmcEURku', 'resolutions': [{'height': 73, 'url': 'https://external-preview.redd.it/c2lncnQ4eGVyd2lmMa3_YFq0fsUcLJjW-cXATziVsktKQ4ou0HgNQmcEURku.png?width=108&crop=smart&format=pjpg&auto=webp&s=a4cd9627d73785ad4f41b2fbc5f61c097728f... | |
Did the community move past the AI waifu phase? I remember it used to be the done-to-death joke about this sub | 0 | So maybe, what's the latest waifu model? | 2025-08-14T04:04:53 | https://www.reddit.com/r/LocalLLaMA/comments/1mpq76f/did_the_community_move_past_the_ai_waifu_phase_i/ | HOLUPREDICTIONS | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpq76f | false | null | t3_1mpq76f | /r/LocalLLaMA/comments/1mpq76f/did_the_community_move_past_the_ai_waifu_phase_i/ | false | false | self | 0 | null |
YAMS: Yet Another Memory System for LLM's | 22 | Built this for my LLM workflows - needed searchable, persistent memory that wouldn't blow up storage costs. I also wanted to use it locally for my research. It's a content-addressed storage system with block-level deduplication (saves 30-40% on typical codebases). I have integrated the CLI tool into most of my w... | 2025-08-14T03:41:54 | https://github.com/trvon/yams | blkmanta | github.com | 1970-01-01T00:00:00 | 0 | {} | 1mppqtu | false | null | t3_1mppqtu | /r/LocalLLaMA/comments/1mppqtu/yams_yet_another_memory_system_for_llms/ | false | false | default | 22 | {'enabled': False, 'images': [{'id': 'Wdl3okAHvOXaOkYdjPCaNhixUPRpyzUYZxAYiri9Ewg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Wdl3okAHvOXaOkYdjPCaNhixUPRpyzUYZxAYiri9Ewg.png?width=108&crop=smart&auto=webp&s=40de32605a925da581adbe017c3e3c31227b32c0', 'width': 108}, {'height': 108, 'url': 'h...
What's the latest breakthrough in image-to-video AI generation for creators? | 0 | I want to turn my static images into dynamic, animated videos with precise style control. What new features are revolutionizing AI image-to-video creation in 2025? How can enhanced customization options like motion intensity and aspect-ratio adjustment help make my videos stand out?
are there quick and easy tools tha... | 2025-08-14T03:34:47 | https://www.reddit.com/r/LocalLLaMA/comments/1mpplut/whats_the_latest_breakthrough_in_imagetovideo_ai/ | Neat_Chapter_9055 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpplut | false | null | t3_1mpplut | /r/LocalLLaMA/comments/1mpplut/whats_the_latest_breakthrough_in_imagetovideo_ai/ | false | false | self | 0 | null |
Indexing LLM | 0 | What are the best models to index a project folder, from smallest to largest? | 2025-08-14T03:16:17 | https://www.reddit.com/r/LocalLLaMA/comments/1mpp8nu/indexing_llm/ | tarsonis125 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpp8nu | false | null | t3_1mpp8nu | /r/LocalLLaMA/comments/1mpp8nu/indexing_llm/ | false | false | self | 0 | null |
Any no-code method to use local LLM to compare a spec sheet (word doc) to an IEEE standard and output the file with the standard embedded? | 0 | I am a non-programmer dabbling into the world of LLM and wondering if what I wanted to do is possible. Basically I have a bunch of spec sheets that I want to compare to various IEEE standards and use AI to rewrite the spec sheet so it aligns with the standard. How can I go about doing this? | 2025-08-14T03:07:58 | traderjay_toronto | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mpp2rp | false | null | t3_1mpp2rp | /r/LocalLLaMA/comments/1mpp2rp/any_nocode_method_to_use_local_llm_to_compare_a/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'l36xum2miwif1', 'resolutions': [{'height': 55, 'url': 'https://preview.redd.it/l36xum2miwif1.png?width=108&crop=smart&auto=webp&s=15b277d8722a9c9bc6af0367e3cf910c62c9585e', 'width': 108}, {'height': 110, 'url': 'https://preview.redd.it/l36xum2miwif1.png?width=216&crop=smart&auto=web... | |
Best Local Model for Flowchart extraction? | 1 | Hi,
Looking for what local LLM people would suggest to extract information from a PDF or image of a flow chart. The idea would be to translate the flow chart into a Mermaid diagram or something that could then be translated into code | 2025-08-14T02:55:56 | https://www.reddit.com/r/LocalLLaMA/comments/1mpotr8/best_local_model_for_flowchart_extraction/ | Illustrious-Salad-25 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpotr8 | false | null | t3_1mpotr8 | /r/LocalLLaMA/comments/1mpotr8/best_local_model_for_flowchart_extraction/ | false | false | self | 1 | null |
Old server repurposing for LLM code inference | 3 | i have a old server that i would like to repurpose as a local llm server for coding and math
\* i9 9900K
\* z390 Designare ( pcie 3 with x16 single slot or x8 dual slot )
\* 64GB (16x4) 3200MHz DDR4 RAM
\* (2 x 1TB) NVMe and (2 x 1TB) SSDs
\* 5700XT Graphics card 8GB
\* 750W PSU
what options have I got ... | 2025-08-14T02:48:42 | https://www.reddit.com/r/LocalLLaMA/comments/1mpoof6/old_server_repurposing_for_llm_code_inference/ | putrasherni | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpoof6 | false | null | t3_1mpoof6 | /r/LocalLLaMA/comments/1mpoof6/old_server_repurposing_for_llm_code_inference/ | false | false | self | 3 | null |
It looks like Facebook is cleverer than my ex. | 0 | She always wanted to close discussions. | 2025-08-14T02:44:00 | jboulhous | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mpokv7 | false | null | t3_1mpokv7 | /r/LocalLLaMA/comments/1mpokv7/it_looks_like_facebook_is_clever_than_my_ex/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'fb0igx4oewif1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/fb0igx4oewif1.jpeg?width=108&crop=smart&auto=webp&s=85d1e7209b118275a6a634ea0a52d280896d9d69', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/fb0igx4oewif1.jpeg?width=216&crop=smart&auto=...
Test, Compare and Aggregate LLMs | 1 | https://reddit.com/link/1mpofvy/video/qzcqgumddwif1/player
Hey everyone! 👋
Excited to share my first side project - a simple but useful model aggregator web app!
**What it does:**
* Select multiple AI models you want to test
* Send the same prompt to all models OR use different prompts for each
* Compare responses... | 2025-08-14T02:37:18 | https://www.reddit.com/r/LocalLLaMA/comments/1mpofvy/test_compare_and_aggregate_llms/ | hashdrone3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpofvy | false | null | t3_1mpofvy | /r/LocalLLaMA/comments/1mpofvy/test_compare_and_aggregate_llms/ | false | false | self | 1 | null |
LM Studio and AMD AI Max 395 | 2 | Got a new computer. Been trying to get it to work well, and I've been struggling. At this point, I think it may be down to software though.
Using LM Studio with the Vulkan runtime, I can get larger models to load and play with them, but I can't set the context much larger than 10k tokens without getting: `Failed to initi... | 2025-08-14T02:28:36 | https://www.reddit.com/r/LocalLLaMA/comments/1mpo9df/lm_studio_and_amd_ai_max_395/ | cfogrady | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpo9df | false | null | t3_1mpo9df | /r/LocalLLaMA/comments/1mpo9df/lm_studio_and_amd_ai_max_395/ | false | false | self | 2 | null |
Recommended starting setup | 0 | What is the recommended starting setup for a local LLM?
I don't want to break the bank, but I want to better understand AI, and for me the best way to do that is to build out a system. I have an Apple Mac mini M2 which I'm upgrading to the M4, which I heard will work; however, if someone has a better starting point on a build... | 2025-08-14T02:27:38 | https://www.reddit.com/r/LocalLLaMA/comments/1mpo8o1/recommended_starting_setup/ | ProfessionalFun9542 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpo8o1 | false | null | t3_1mpo8o1 | /r/LocalLLaMA/comments/1mpo8o1/recommended_starting_setup/ | false | false | self | 0 | null |
Benchmark of dense NVFP4 LLMs on 5090? [VLLM] | 1 | Has anyone got any single-user speeds for a NVFP4 quantized model? Here's a PR for that feature in VLLM: https://github.com/vllm-project/vllm/pull/21309
This one would be a good size to test: https://huggingface.co/RedHatAI/Qwen3-32B-NVFP4
- What's the token generation speed of a single batch/stream/sequence?
When ... | 2025-08-14T02:10:58 | https://www.reddit.com/r/LocalLLaMA/comments/1mpnvyy/benchmark_of_dense_nvfp4_llms_on_5090_vllm/ | Aaaaaaaaaeeeee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpnvyy | false | null | t3_1mpnvyy | /r/LocalLLaMA/comments/1mpnvyy/benchmark_of_dense_nvfp4_llms_on_5090_vllm/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'G9wTm0uGnsx4l6lCY5rTZZGZMxv6aXMkfDiuz0cyrBE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/G9wTm0uGnsx4l6lCY5rTZZGZMxv6aXMkfDiuz0cyrBE.png?width=108&crop=smart&auto=webp&s=c27f44b6c51d9626493cb0638e0a3cb1182d3af1', 'width': 108}, {'height': 108, 'url': 'h... |
Mini PC - Framework or HP Z2 Mini G1a | 1 | For local LLM use, if you were constrained to a mini PC (in a 10” rack), but budget was flexible, would you choose the Framework or HP Z2 Mini G1a? Any other options worth considering?
| 2025-08-14T02:08:18 | https://www.reddit.com/r/LocalLLaMA/comments/1mpntuj/mini_pc_framework_or_hp_z2_mini_g1a/ | iamwillbar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpntuj | false | null | t3_1mpntuj | /r/LocalLLaMA/comments/1mpntuj/mini_pc_framework_or_hp_z2_mini_g1a/ | false | false | self | 1 | null |
Too many people seem to forget that. | 372 | 2025-08-14T01:50:49 | Final_Wheel_7486 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mpng2r | false | null | t3_1mpng2r | /r/LocalLLaMA/comments/1mpng2r/too_many_people_seem_to_forget_that/ | false | false | default | 372 | {'enabled': True, 'images': [{'id': 'j3xmip845wif1', 'resolutions': [{'height': 135, 'url': 'https://preview.redd.it/j3xmip845wif1.jpeg?width=108&crop=smart&auto=webp&s=16b17ae27963e2ed223f4a494a113361c9499f54', 'width': 108}, {'height': 270, 'url': 'https://preview.redd.it/j3xmip845wif1.jpeg?width=216&crop=smart&auto=... | ||
I'm building Sagitta, a GPU powered semantic code search and knowledge graph MCP tool that can be added to almost any AI coding agent. | 1 | All the info is in the README.md at [Sagitta](https://gitlab.com/adammulvany/sagitta)
Sharing here because the results are best with a high end GPU, and I know most people here probably have one.
I'm using Qwen 3 embed 4b even with large repositories (after initial sync incremental syncs are fast, albeit I have an ... | 2025-08-14T01:48:44 | https://www.reddit.com/r/LocalLLaMA/comments/1mpnegl/im_building_sagitta_a_gpu_powered_semantic_code/ | Business_Fold_8686 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpnegl | false | null | t3_1mpnegl | /r/LocalLLaMA/comments/1mpnegl/im_building_sagitta_a_gpu_powered_semantic_code/ | false | false | self | 1 | null |
Will the new 5070 Ti Super 24GB be local LLM users' new favourite? 🙂 | 16 | I just saw this updated leak on prices, so I'm wondering...
Will the 5070 Ti Super 24GB be local LLM users' new favourite?
https://overclock3d.net/news/gpu-displays/nvidia-geforce-rtx-50-super-pricing-leaks/
Looks on par with a 3090 at a similar price used/new, but the newer tech will offer considerably more performance and future-proofi... | 2025-08-14T01:18:01 | https://www.reddit.com/r/LocalLLaMA/comments/1mpmq7s/will_the_new_5070_ti_super_24gb_be_local_llm_new/ | Redinaj | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpmq7s | false | null | t3_1mpmq7s | /r/LocalLLaMA/comments/1mpmq7s/will_the_new_5070_ti_super_24gb_be_local_llm_new/ | false | false | self | 16 | null |
I built OpenRubricRL - Convert human rubrics into LLM reward functions for RLHF (open source) | 1 | [removed] | 2025-08-14T01:17:56 | https://www.reddit.com/r/LocalLLaMA/comments/1mpmq5i/i_built_openrubricrl_convert_human_rubrics_into/ | Gullible_Pudding_651 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpmq5i | false | null | t3_1mpmq5i | /r/LocalLLaMA/comments/1mpmq5i/i_built_openrubricrl_convert_human_rubrics_into/ | false | false | self | 1 | null |
Docling: Great quality, but painfully slow | 6 | I've been using Docling for a while now, and I have to say the output quality is good. It really nails accuracy and formatting in a way that most other tools don't.
The problem? Speed. It's painfully slow. Even relatively small documents take far longer than I’d expect, and without a GPU, large ones can take over an h... | 2025-08-14T01:07:34 | https://www.reddit.com/r/LocalLLaMA/comments/1mpmhxj/docling_great_quality_but_painfully_slow/ | Cyp9715 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpmhxj | false | null | t3_1mpmhxj | /r/LocalLLaMA/comments/1mpmhxj/docling_great_quality_but_painfully_slow/ | false | false | self | 6 | null |
Jan-v1 trial results follow-up and comparison to Qwen3, Perplexity, Claude | 45 | Following up to [this post](https://www.reddit.com/r/LocalLLaMA/comments/1mov3d9/i_tried_the_janv1_model_released_today_and_here/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button) yesterday, here are the updated results using Q8 of the Jan V1 model with Serper search.
Summarie... | 2025-08-14T01:03:04 | https://www.reddit.com/gallery/1mpmeba | rm-rf-rm | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mpmeba | false | null | t3_1mpmeba | /r/LocalLLaMA/comments/1mpmeba/janv1_trial_results_followup_and_comparison_to/ | false | false | 45 | null | |
ERNIE 4.5 21BA3B appreciation post. | 63 | I think it's the best model of its size, outshining gpt-oss 20B and Qwen3 30B-A3B.
It's not as good at coding, but it runs without error even at decent context. I find the Qwen A3B to be better for code gen, but prefer ERNIE for everything else. | 2025-08-14T00:55:45 | https://www.reddit.com/r/LocalLLaMA/comments/1mpm8kr/ernie_45_21ba3b_appreciation_post/ | thebadslime | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpm8kr | false | null | t3_1mpm8kr | /r/LocalLLaMA/comments/1mpm8kr/ernie_45_21ba3b_appreciation_post/ | false | false | self | 63 | null |
AMD Radeon RX 480 8GB benchmark | 12 | I finally got around to testing my[ RX 480 8GB](https://www.techpowerup.com/gpu-specs/radeon-rx-480.c2848) card with [latest](https://github.com/ggml-org/llama.cpp/releases/tag/b6152) llama.cpp [Vulkan](https://github.com/ggml-org/llama.cpp/releases/download/b6152/llama-b6152-bin-ubuntu-vulkan-x64.zip) on Kubuntu. Jus... | 2025-08-14T00:53:50 | https://www.reddit.com/r/LocalLLaMA/comments/1mpm728/amd_radeon_rx_480_8gb_benchmark/ | tabletuser_blogspot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpm728 | false | null | t3_1mpm728 | /r/LocalLLaMA/comments/1mpm728/amd_radeon_rx_480_8gb_benchmark/ | false | false | self | 12 | {'enabled': False, 'images': [{'id': 'BALwQJYxkz4hh4_6nSlXEUfkKkPqgBECsjJp-rYfxaA', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/BALwQJYxkz4hh4_6nSlXEUfkKkPqgBECsjJp-rYfxaA.jpeg?width=108&crop=smart&auto=webp&s=17d6438cc97820e4645ff10a3403bf076213cfb6', 'width': 108}, {'height': 106, 'url': '... |
What people do with small local models? | 22 | I'm seriously looking for ways to use small models, like Qwen3-4B-Thinking-2507, on my daily job while running on my laptop. Anyone uses that level small models for daily tasks and with what type of tasks do you use them?
It is so exciting for me to see that we have better models than gpt-3.5 within 2 years. I'm hones... | 2025-08-14T00:33:18 | https://www.reddit.com/r/LocalLLaMA/comments/1mplqlx/what_people_do_with_small_local_models/ | piizeus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mplqlx | false | null | t3_1mplqlx | /r/LocalLLaMA/comments/1mplqlx/what_people_do_with_small_local_models/ | false | false | self | 22 | null |
How can you tell which model you’re using | 0 | OpenAI finally caved in and brought back some legacy models for subscribers. But the new ”legacy” 4o is acting a bit different from the old one.
Remember how 4o used to get this wrong:
”Which is bigger, 9.11 or 9.9?”
[It used to answer 9.11 is bigger](https://medium.com/@H2bm/decoding-chatgpt-9-11-is-greater-than-9-... | 2025-08-14T00:21:40 | https://www.reddit.com/r/LocalLLaMA/comments/1mplh5b/how_can_you_tell_which_model_youre_using/ | marrow_monkey | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mplh5b | false | null | t3_1mplh5b | /r/LocalLLaMA/comments/1mplh5b/how_can_you_tell_which_model_youre_using/ | false | false | self | 0 | null |
Watching China and USA release new models when you know you got something très magnifique ready to drop soon. | 91 | 2025-08-14T00:13:53 | Porespellar | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mplas7 | false | null | t3_1mplas7 | /r/LocalLLaMA/comments/1mplas7/watching_china_and_usa_release_new_models_when/ | false | false | default | 91 | {'enabled': True, 'images': [{'id': '02m35s2wnvif1', 'resolutions': [{'height': 162, 'url': 'https://preview.redd.it/02m35s2wnvif1.jpeg?width=108&crop=smart&auto=webp&s=5795ff656e87e59ef559c960b50b30dd154b8283', 'width': 108}, {'height': 324, 'url': 'https://preview.redd.it/02m35s2wnvif1.jpeg?width=216&crop=smart&auto=... | ||
Thumb drive LLM Trouble shoot | 1 | I'm watching this video by Global Science Network about running ollama, dolphin-llama3 from a thumb drive with anythingllm for the ui.
Video:
https://youtu.be/eiMSapoeyaU?si=SPLCiIxJoC3X3cuV
I can download and run the program I can even run it when I move it to the thumb drive however I can’t start the server once I tra... | 2025-08-13T23:54:55 | https://www.reddit.com/r/LocalLLaMA/comments/1mpkuyz/thumb_drive_llm_trouble_shoot/ | SeaSetPoet | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpkuyz | false | null | t3_1mpkuyz | /r/LocalLLaMA/comments/1mpkuyz/thumb_drive_llm_trouble_shoot/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'OMHXtb3J4PYYJ-wlwTaqpb6jBXDBK9SBjXeFzHS4yb8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/OMHXtb3J4PYYJ-wlwTaqpb6jBXDBK9SBjXeFzHS4yb8.jpeg?width=108&crop=smart&auto=webp&s=f871f4e46fc6db9a94ba21d79667d574b66d3d7c', 'width': 108}, {'height': 162, 'url': '... |
Built my own offline AI assistant for Windows — no OpenAI key, runs local, works great even without internet! | 0 | Hey folks,
Been working on this side project to create a personal AI chatbot that runs fully local on Windows. No subscription, no API key, just plug & play.
It uses a quantized LLM, has a GUI, works offline, and I even got it chatting with documents and responding with voice. Fully offloaded onto my GPU for blazin... | 2025-08-13T23:43:27 | https://www.reddit.com/gallery/1mpkle2 | Fair_Indication7324 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mpkle2 | false | null | t3_1mpkle2 | /r/LocalLLaMA/comments/1mpkle2/built_my_own_offline_ai_assistant_for_windows_no/ | false | false | 0 | null | |
2 cards, 1 quant | 0 | >"PCIE speeds don't really matter for inference."
C:\LCP>nvidia-smi -L
GPU 0: NVIDIA GeForce RTX 3090 Ti
GPU 1: NVIDIA GeForce RTX 3090 Ti
C:\LCP>set CUDA_VISIBLE_DEVICES=0
C:\LCP>llama-bench.exe -m zai-org_GLM-4.5-Air-Q5_K_M-00001-of-00003.gguf -ot exps=CPU --flash-attn 1 --threads 12
... | 2025-08-13T23:35:48 | https://www.reddit.com/r/LocalLLaMA/comments/1mpkf8n/2_cards_1_quant/ | Secure_Reflection409 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpkf8n | false | null | t3_1mpkf8n | /r/LocalLLaMA/comments/1mpkf8n/2_cards_1_quant/ | false | false | self | 0 | null |
Can I use a 3090 and a 1080 in the same PC for more VRAM? If so, will it likely be slow? | 1 | Also, is it as simple as putting the 1080 in a free pcie slot? Are there weird issues I have to worry about? Thanks. | 2025-08-13T23:27:35 | https://www.reddit.com/r/LocalLLaMA/comments/1mpk8fh/can_i_use_a_3090_and_a_1080_in_the_same_pc_for/ | Donovanth1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpk8fh | false | null | t3_1mpk8fh | /r/LocalLLaMA/comments/1mpk8fh/can_i_use_a_3090_and_a_1080_in_the_same_pc_for/ | false | false | self | 1 | null |
Testing qwen3-30b-a3b-q8_0 with my RTX Pro 6000 Blackwell MaxQ. Significant speed improvement. Around 120 t/s. | 67 | 2025-08-13T23:27:12 | https://v.redd.it/ycrl1u6pevif1 | swagonflyyyy | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mpk834 | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/ycrl1u6pevif1/DASHPlaylist.mpd?a=1757719646%2COWRiZjg1NjM4MTdhYjUyNjI1YTlmYWNmNzZhOGJkMGU4MjljYWRlZmM4NWM5MDFjZGE1NjM4YmQzZDY1Mjg3Mg%3D%3D&v=1&f=sd', 'duration': 167, 'fallback_url': 'https://v.redd.it/ycrl1u6pevif1/DASH_720.mp4?source=fallback', 'h... | t3_1mpk834 | /r/LocalLLaMA/comments/1mpk834/testing_qwen330ba3bq8_0_with_my_rtx_pro_6000/ | false | false | 67 | {'enabled': False, 'images': [{'id': 'dTd1ZDB2NnBldmlmMavIpB9AHSfqY-PSwwptUZpoVAt7ZMaVO0xQLghP-sG0', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dTd1ZDB2NnBldmlmMavIpB9AHSfqY-PSwwptUZpoVAt7ZMaVO0xQLghP-sG0.png?width=108&crop=smart&format=pjpg&auto=webp&s=477110773f117f81fd3e80aef1ed20559ce25... | ||
Announcing LocalLlama discord server & bot! | 36 | INVITE: https://discord.gg/rC922KfEwj
There used to be one old discord server for the subreddit but it was deleted by the previous mod.
Why?
The subreddit has grown to 500k users - inevitably, some users like a niche community with more technical discussion and fewer memes (even if relevant).
We have a discord bot t... | 2025-08-13T23:21:05 | https://www.reddit.com/gallery/1mpk2va | HOLUPREDICTIONS | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mpk2va | false | null | t3_1mpk2va | /r/LocalLLaMA/comments/1mpk2va/announcing_localllama_discord_server_bot/ | false | true | 36 | null | |
We made it - 4090 X 64gb vram GPT OSS 120b 18t/s achievement | 0 | Good day dear friends. You might already know me from movies:
1- how the hell do I run this?
2- How do I achieve such speed
3- Wait what you did it?
and etc.
So I am here to present you hassle free, one line argument to run and get yourself GPT OSS 120b up and running with amazing 18 t/s on particular hardware... | 2025-08-13T22:57:57 | https://www.reddit.com/r/LocalLLaMA/comments/1mpji6r/we_made_it_4090_x_64gb_vram_gpt_oss_120b_18ts/ | theundertakeer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpji6r | false | null | t3_1mpji6r | /r/LocalLLaMA/comments/1mpji6r/we_made_it_4090_x_64gb_vram_gpt_oss_120b_18ts/ | false | false | self | 0 | null |
ipex-llm performance issues on Linux compared to Windows | 1 | [removed] | 2025-08-13T22:30:34 | https://www.reddit.com/r/LocalLLaMA/comments/1mpitdd/ipexllm_performance_issues_on_linux_compared_to/ | According-Dog6444 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpitdd | false | null | t3_1mpitdd | /r/LocalLLaMA/comments/1mpitdd/ipexllm_performance_issues_on_linux_compared_to/ | false | false | self | 1 | null |
How do I know if my LORA is working or not? | 3 | I recently created my first lora for qwen3-30b but it doesn't seem to be working and I can't tell whether the problem is the LORA itself or the script I'm running.
When I did a full finetune of qwen 0.6b instead of a LORA it worked, but I can't do a full finetune of qwen3-30b because that's going to be way too resour... | 2025-08-13T22:22:43 | https://www.reddit.com/r/LocalLLaMA/comments/1mpimcf/how_do_i_know_if_my_lora_is_working_or_not/ | ThatIsNotIllegal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpimcf | false | null | t3_1mpimcf | /r/LocalLLaMA/comments/1mpimcf/how_do_i_know_if_my_lora_is_working_or_not/ | false | false | self | 3 | null |
Connecting chatgpt to linkedin | 0 | Has anyone been able to build recruitment workflow connecting pitchbook and linkedin recruiter to Chatgpt such that first you find relevant companies from pitchbook and then source profiles from those companies through linkedin in single prompt on chatgpt? | 2025-08-13T21:42:13 | https://www.reddit.com/r/LocalLLaMA/comments/1mphl43/connecting_chatgpt_to_linkedin/ | No-Brother-2237 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mphl43 | false | null | t3_1mphl43 | /r/LocalLLaMA/comments/1mphl43/connecting_chatgpt_to_linkedin/ | false | false | self | 0 | null |
Any local service or proxy that can emulate Ollama specific endpoints for OpenAI compatible servers? | 4 | Unfortunately, for some reason that I don't understand, a lot of OSS authors are hard coding their tools to use Ollama where, most of the tools that are made with Local LLM in mind support Ollama natively using Ollama specific endpoints instead of OpenAI compatible endpoints.
For example: google's langextract, instead... | 2025-08-13T21:06:34 | https://www.reddit.com/r/LocalLLaMA/comments/1mpgo4x/any_local_service_or_proxy_that_can_emulate/ | datanxiete | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpgo4x | false | null | t3_1mpgo4x | /r/LocalLLaMA/comments/1mpgo4x/any_local_service_or_proxy_that_can_emulate/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'Q_FrWyUGEA_hAhB0OU5XjGlJEYvlcCvdUClrXauqYzI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Q_FrWyUGEA_hAhB0OU5XjGlJEYvlcCvdUClrXauqYzI.png?width=108&crop=smart&auto=webp&s=7aaf3fd898fd0294e70c3962c9128ff863ddafbd', 'width': 108}, {'height': 108, 'url': 'h... |
Why it’s a mistake to ask chatbots about their mistakes | 32 | _The tendency to ask AI bots to explain themselves reveals widespread misconceptions about how they work._
https://arstechnica.com/ai/2025/08/why-its-a-mistake-to-ask-chatbots-about-their-mistakes/ | 2025-08-13T21:02:24 | https://www.reddit.com/r/LocalLLaMA/comments/1mpgkap/why_its_a_mistake_to_ask_chatbots_about_their/ | Educational_Sun_8813 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpgkap | false | null | t3_1mpgkap | /r/LocalLLaMA/comments/1mpgkap/why_its_a_mistake_to_ask_chatbots_about_their/ | false | false | self | 32 | {'enabled': False, 'images': [{'id': 'o1IC3AP_tRLHmQGGrKLEuWwOFg8so5y1eZFh2DBaoDE', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/o1IC3AP_tRLHmQGGrKLEuWwOFg8so5y1eZFh2DBaoDE.jpeg?width=108&crop=smart&auto=webp&s=e9d29b9753e6c3851f9cd62d9967f939b3d4a209', 'width': 108}, {'height': 121, 'url': '... |
How can the new qwen3:4b have 256k token context window? | 0 | Although the parameters of the model don't limit the size of the model so in theory a 4b model can have 256k context window, but how can such a small model reason over so much data all at once? Won't it affect its performance and if so what was the point of expanding it? | 2025-08-13T21:01:58 | https://www.reddit.com/r/LocalLLaMA/comments/1mpgjvy/how_can_the_new_qwen34b_have_256k_token_context/ | ILoveMy2Balls | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpgjvy | false | null | t3_1mpgjvy | /r/LocalLLaMA/comments/1mpgjvy/how_can_the_new_qwen34b_have_256k_token_context/ | false | false | self | 0 | null |
Lessons learned while building GPT-OSS from scratch | 8 | 2025-08-13T21:01:17 | https://www.reddit.com/r/LocalLLaMA/comments/1mpgj7u/lessons_learned_while_building_gptoss_from_scratch/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpgj7u | false | null | t3_1mpgj7u | /r/LocalLLaMA/comments/1mpgj7u/lessons_learned_while_building_gptoss_from_scratch/ | false | false | 8 | {'enabled': False, 'images': [{'id': '4KHkrz-_LafEQvK_IohMA4AH-6EOhH3JvMMyq-dX1uM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4KHkrz-_LafEQvK_IohMA4AH-6EOhH3JvMMyq-dX1uM.png?width=108&crop=smart&auto=webp&s=6039c38827b0a1870bf6c0f1bc38acc95ed05a11', 'width': 108}, {'height': 108, 'url': 'h... | ||
Coding Agent | 0 | I have just got to the point where I am using ‘Continue’ to replace VS Code Copilot as my chat, edit and agent LLM. I think I get acceptable performance with agent using qwen3 and qwen3 coder, but I want to make sure these will have all the tools out of the box if I’m using Ollama to spin up the localhost server? Do I ... | 2025-08-13T20:55:29 | https://www.reddit.com/r/LocalLLaMA/comments/1mpgduu/coding_agent/ | MadRelaxationYT | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpgduu | false | null | t3_1mpgduu | /r/LocalLLaMA/comments/1mpgduu/coding_agent/ | false | false | self | 0 | null |
Prose-writing/story-telling on par with o1/o1-pro? | 4 | Hi!
I didn't put it in the title, but… the kind of prose I enjoy is NSFW. So there's also the issue of refusal, but … honestly that feels almost secondary at this stage.
Knowing the Internet, some of you are going to stop reading right here, others are going to actually exert effort to either ask me why I'd want to ... | 2025-08-13T20:54:07 | https://www.reddit.com/r/LocalLLaMA/comments/1mpgchj/prosewritingstorytelling_on_par_with_o1o1pro/ | Sk0nge | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpgchj | false | null | t3_1mpgchj | /r/LocalLLaMA/comments/1mpgchj/prosewritingstorytelling_on_par_with_o1o1pro/ | false | false | self | 4 | null |
Overcoming VRAM limitations | 0 | I got an rtx 3090 24gb and I want to increase it to a 48gb variant for running and testing some models ( some chess, imaging\video generative models) as well as some llm's for home\work assistants.
Tried looking for some tutorials or any posts about making a 3090 with 48gb online, but couldn't quite find a guide on ho... | 2025-08-13T20:44:52 | https://www.reddit.com/r/LocalLLaMA/comments/1mpg3r4/overcoming_vram_limitations/ | Big_black_click | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpg3r4 | false | null | t3_1mpg3r4 | /r/LocalLLaMA/comments/1mpg3r4/overcoming_vram_limitations/ | false | false | self | 0 | null |
Micdrop, an open source lib to bring AI voice conversation to the web | 6 | I developed [micdrop.dev](http://micdrop.dev), first to experiment, then to launch two voice AI products (a SaaS and a recruiting booth) over the past 18 months.
It's "just a wrapper," so I wanted it to be open source.
The library handles all the complexity on the browser and server sides, and provides integrations f... | 2025-08-13T20:36:59 | https://www.reddit.com/r/LocalLLaMA/comments/1mpfw59/micdrop_an_open_source_lib_to_bring_ai_voice/ | GodefroyDC | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpfw59 | false | null | t3_1mpfw59 | /r/LocalLLaMA/comments/1mpfw59/micdrop_an_open_source_lib_to_bring_ai_voice/ | false | false | self | 6 | null |
What is the best inexpensive NSFW Vision model on the market right now? | 1 | [removed] | 2025-08-13T20:32:37 | https://www.reddit.com/r/LocalLLaMA/comments/1mpfrv5/what_is_the_best_inexpensive_nsfw_vision_model_on/ | Which_Reputation_345 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpfrv5 | false | null | t3_1mpfrv5 | /r/LocalLLaMA/comments/1mpfrv5/what_is_the_best_inexpensive_nsfw_vision_model_on/ | false | false | nsfw | 1 | null |
Why are mlx versions larger in size? | 0 | I see options to dowload gguf vs mlx models in LM Studio. I am not sure why MLX versions are almost always double the size of their GGUF counterparts. | 2025-08-13T20:24:49 | https://www.reddit.com/r/LocalLLaMA/comments/1mpfkdg/why_are_mlx_versions_larger_in_size/ | Chance-Studio-8242 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpfkdg | false | null | t3_1mpfkdg | /r/LocalLLaMA/comments/1mpfkdg/why_are_mlx_versions_larger_in_size/ | false | false | self | 0 | null |
Best MoE under 16b total params? | 1 | I want to make custom executables that would run on typical business laptops, relying on local llm for some tasks in the background.
Most have just an integrated gpu and 16gb ram, but quite capable cpus.
I was wondering if there were some MoEs out there that I could use and have a smaller compute footprint than qwe... | 2025-08-13T20:20:16 | https://www.reddit.com/r/LocalLLaMA/comments/1mpfg3z/best_moe_under_16b_total_params/ | GreenTreeAndBlueSky | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpfg3z | false | null | t3_1mpfg3z | /r/LocalLLaMA/comments/1mpfg3z/best_moe_under_16b_total_params/ | false | false | self | 1 | null |
Intriguing Properties of gpt-oss Jailbreaks | 1 | 2025-08-13T20:07:02 | https://www.lesswrong.com/posts/XvpEsjKwQFcWoD89g/intriguing-properties-of-gpt-oss-jailbreaks | emptyplate | lesswrong.com | 1970-01-01T00:00:00 | 0 | {} | 1mpf3gv | false | null | t3_1mpf3gv | /r/LocalLLaMA/comments/1mpf3gv/intriguing_properties_of_gptoss_jailbreaks/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': 'yUHJZKYQFiqc_4C-n27kU-v2ny4Gs5XiuCVrpgGHxUw', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/yUHJZKYQFiqc_4C-n27kU-v2ny4Gs5XiuCVrpgGHxUw.png?width=108&crop=smart&auto=webp&s=4c22ac4767252938afd0843ae282428d0ca329c8', 'width': 108}, {'height': 113, 'url': 'h... | |
Added locally generated dialogue + voice acting to my game! | 171 | 2025-08-13T20:02:24 | https://v.redd.it/t1qgim34euif1 | LandoRingel | /r/LocalLLaMA/comments/1mpez1p/added_locally_generated_dialogue_voice_acting_to/ | 1970-01-01T00:00:00 | 0 | {} | 1mpez1p | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/t1qgim34euif1/DASHPlaylist.mpd?a=1757836952%2CMGZjNzNmNjlmMDEzMzNhMDhkN2RkMzZmZDQ5MWI3MTc0ZmZmYjhmZDZkZDU5M2RiMDNmMjgyNzgxYjRiYmM2YQ%3D%3D&v=1&f=sd', 'duration': 91, 'fallback_url': 'https://v.redd.it/t1qgim34euif1/DASH_1080.mp4?source=fallback', 'h... | t3_1mpez1p | /r/LocalLLaMA/comments/1mpez1p/added_locally_generated_dialogue_voice_acting_to/ | false | false | 171 | {'enabled': False, 'images': [{'id': 'N2szZGNtMzRldWlmMZmQp7O5BpjYg7UqegAgE9IdgP7TYx8Szh9dJVqIheQu', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/N2szZGNtMzRldWlmMZmQp7O5BpjYg7UqegAgE9IdgP7TYx8Szh9dJVqIheQu.png?width=108&crop=smart&format=pjpg&auto=webp&s=fe778f450918fa7bb1e7c6252d7a6e7153b22... | ||
Agent Has No Secret | 0 | 2025-08-13T19:58:14 | https://psiace.me/posts/agent-has-no-secret/ | PsiACE | psiace.me | 1970-01-01T00:00:00 | 0 | {} | 1mpeuyp | false | null | t3_1mpeuyp | /r/LocalLLaMA/comments/1mpeuyp/agent_has_no_secret/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'fOD2fgB8VO9AXNbUa_CTTjVP4C4Cp1fiWTLAAtjDTXs', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/fOD2fgB8VO9AXNbUa_CTTjVP4C4Cp1fiWTLAAtjDTXs.png?width=108&crop=smart&auto=webp&s=99656c83bdfd3b40a907977435316cffcc851c0b', 'width': 108}, {'height': 113, 'url': 'h... | |
Best >8b models for conversation only | 1 | Hi, I have only 8GB of VRAM and I'm mostly interested only on conversation and narratives, no coding.
I would like to know your suggestions for an 8b model or less (even 1b models).
Thanks!! | 2025-08-13T19:57:15 | https://www.reddit.com/r/LocalLLaMA/comments/1mpetva/best_8b_models_for_conversation_only/ | haterloco | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpetva | false | null | t3_1mpetva | /r/LocalLLaMA/comments/1mpetva/best_8b_models_for_conversation_only/ | false | false | self | 1 | null |
Memory Bandwith of GPUs, the only defining factor for speed? | 4 | Was browsing ebay and saw the AI Pro 9700 is actually available for $1400 and was tempted.
However, memory bandwidth of this card is at around 680GB/s, not far from the 616GB/s of the RTX 2080 Ti, a seven year old card.
My knowledge is limited, as to how speed (token generation and prompt processing) is determined and... | 2025-08-13T19:40:51 | https://www.reddit.com/r/LocalLLaMA/comments/1mpee3y/memory_bandwith_of_gpus_the_only_defining_factor/ | Mr_Moonsilver | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpee3y | false | null | t3_1mpee3y | /r/LocalLLaMA/comments/1mpee3y/memory_bandwith_of_gpus_the_only_defining_factor/ | false | false | self | 4 | null |
GPT OSS 120b 34th on Simple bench, roughly on par with Llama 3.3 70b | 51 | 2025-08-13T19:40:46 | https://simple-bench.com/ | and_human | simple-bench.com | 1970-01-01T00:00:00 | 0 | {} | 1mpee0x | false | null | t3_1mpee0x | /r/LocalLLaMA/comments/1mpee0x/gpt_oss_120b_34th_on_simple_bench_roughly_on_par/ | false | false | default | 51 | null | |
GLM 4.5 Air, local setup issues, vllm and llama.cpp | 15 | I seem to be not quite able to match GLM 4.5 Air model output between what's running on [chat.z.ai/bigmodel.cn](http://chat.z.ai/bigmodel.cn) and my local 4x RTX3090 vllm/llama.cpp setup. I tried cpatonn/GLM-4.5-Air-AWQ-4bit, QuantTrio/GLM-4.5-Air-AWQ-FP16Mix, unsloth/GLM-4.5-Air-GGUF (q4\_k\_m, ud-q4\_k\_xl, ud-q5\_k\... | 2025-08-13T19:36:17 | https://www.reddit.com/r/LocalLLaMA/comments/1mpe9p2/glm_45_air_local_setup_issues_vllm_and_llamacpp/ | bfroemel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpe9p2 | false | null | t3_1mpe9p2 | /r/LocalLLaMA/comments/1mpe9p2/glm_45_air_local_setup_issues_vllm_and_llamacpp/ | false | false | self | 15 | null |
Looking for all 1M coders I found only 3 | 0 | So guys I am currently searching/researching for a good coder locally that is trained for 1M in CTX. For the first time that was needed to go over 100k tokens (\~ 10000 code lines) it was a real headache.
The first day using GPT5 it was amazing but then as predicted the quality and service degraded drastically since ... | 2025-08-13T19:35:33 | Trilogix | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mpe8xq | false | null | t3_1mpe8xq | /r/LocalLLaMA/comments/1mpe8xq/looking_for_all_1m_coders_i_found_only_3/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'pb32g8la5uif1', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/pb32g8la5uif1.jpeg?width=108&crop=smart&auto=webp&s=3a76debe2adb7bbdd221eda2514dd6bd1ae60c1d', 'width': 108}, {'height': 115, 'url': 'https://preview.redd.it/pb32g8la5uif1.jpeg?width=216&crop=smart&auto=w... | |
[DEV] Exploring a runtime-oriented LLM weight format with dynamic loading and built-in personalization – early prototype | 5 | I'm working on an experimental weight storage format for LLMs focused on **runtime-level access**, not static loading.
The goal is to move away from monolithic state\_dicts or GGUF-style dumps, and enable **models that dynamically stream only the weights they need**, during inference — unlocking support for **trillion... | 2025-08-13T19:28:24 | https://www.reddit.com/r/LocalLLaMA/comments/1mpe24y/dev_exploring_a_runtimeoriented_llm_weight_format/ | vlad_meason | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpe24y | false | null | t3_1mpe24y | /r/LocalLLaMA/comments/1mpe24y/dev_exploring_a_runtimeoriented_llm_weight_format/ | false | false | self | 5 | null |
Which is the best small model to fine-tune? | 14 | Hey, I have started to learn more about fine-tuning, LLMs and Lora recently, I have already tried to fine-tune some models using unsloth + lora on google colab, mostly Llama 3 8B, and got some OK results, but I feel it could be a lot better.
I have a Nvidia 3060 12gb VRAM available, which is probably better than the c... | 2025-08-13T19:25:33 | https://www.reddit.com/r/LocalLLaMA/comments/1mpdzgz/which_is_the_best_small_model_to_finetune/ | IllustriousSearch411 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpdzgz | false | null | t3_1mpdzgz | /r/LocalLLaMA/comments/1mpdzgz/which_is_the_best_small_model_to_finetune/ | false | false | self | 14 | null |
How to run mlx-optimized models on Apple (gets best tok/sec) | 0 | 2025-08-13T19:22:14 | https://v.redd.it/7s4gk74m7uif1 | ai-christianson | /r/LocalLLaMA/comments/1mpdwcw/how_to_run_mlxoptimized_models_on_apple_gets_best/ | 1970-01-01T00:00:00 | 0 | {} | 1mpdwcw | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/7s4gk74m7uif1/DASHPlaylist.mpd?a=1757834540%2CMWJjMjJjNTRiMGRmYzBiMjFiOWE4MmM3ZmMwNTI4Y2M4YWVjMGFiODQ4MzVmNjc1Njk2YjhjZjQ5NTA3OWRlNA%3D%3D&v=1&f=sd', 'duration': 90, 'fallback_url': 'https://v.redd.it/7s4gk74m7uif1/DASH_1080.mp4?source=fallback', 'h... | t3_1mpdwcw | /r/LocalLLaMA/comments/1mpdwcw/how_to_run_mlxoptimized_models_on_apple_gets_best/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'Y3dvOW43NG03dWlmMYlbI57rGFhOwF6K1rWxP1lylkwi6F_xCizjoUTnnge-', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/Y3dvOW43NG03dWlmMYlbI57rGFhOwF6K1rWxP1lylkwi6F_xCizjoUTnnge-.png?width=108&crop=smart&format=pjpg&auto=webp&s=89059622cb5dfd042a09f606742adf0b3b14... | ||
Best Local LLM for coding rn | 2 | Can anybody guide me choosing a Local LLM for my web app development.
I have a web app development project of mine which needs to be developed based on information from a pdf file. It's a kind of calculation that's available in the pdf, unorganised, and i want ai to develop
I need large context window
My budget is I can hire ... | 2025-08-13T19:15:47 | https://www.reddit.com/r/LocalLLaMA/comments/1mpdq7j/best_local_llm_for_coding_rn/ | CarpenterAny8822 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpdq7j | false | null | t3_1mpdq7j | /r/LocalLLaMA/comments/1mpdq7j/best_local_llm_for_coding_rn/ | false | false | self | 2 | null |
Qwen cli coder diffs unreadable highlight colors | 0 | I’ve started using Qwen CLI for coding in my iterm2 terminal in MacOS (using free Qwen-coder in OpenRouter).
Seems decent.
Problem is the code diffs it shows when it makes changes are unreadable. The highlight color (so bright) is so bad and conflicts with black background and whiteish text colors).
Qwen has no idea... | 2025-08-13T19:15:41 | https://www.reddit.com/r/LocalLLaMA/comments/1mpdq4c/qwen_cli_coder_diffs_unreadable_highlight_colors/ | Puzzleheaded-Fly4322 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpdq4c | false | null | t3_1mpdq4c | /r/LocalLLaMA/comments/1mpdq4c/qwen_cli_coder_diffs_unreadable_highlight_colors/ | false | false | self | 0 | null |
GPT OSS 120B On 4090 x 64gb ram??? | 2 | So I am hearing now 2nd time that a person was able to run this model on 4090 with 64gb ram with 131k context at 22-30 t/s.. I am starting to think they simply lie to get a hype so I am here seeking help and information...
Have you achieved that? If yes share the details EXACTLY how you did it because I barely believe i... | 2025-08-13T19:09:41 | https://www.reddit.com/r/LocalLLaMA/comments/1mpdkhe/gpt_oss_120b_on_4090_x_64gb_ram/ | theundertakeer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpdkhe | false | null | t3_1mpdkhe | /r/LocalLLaMA/comments/1mpdkhe/gpt_oss_120b_on_4090_x_64gb_ram/ | false | false | self | 2 | null |
Case study: hybrid SSM + sparse-attention LM that holds up at 32k ctx (w/ sane throughput) | 17 | Would love to hear your thoughts on this in the comments. Anything I should play around w next?
TLDR: I swapped ~40% of self-attn layers for Mamba-style SSM blocks and added a block-shifted local attention pattern and a bunch of tiny global tokens. On a 1.3B LM trained ~300B tokens, I’m seeing ~1.35× more tokens/s an... | 2025-08-13T19:09:07 | https://www.reddit.com/r/LocalLLaMA/comments/1mpdjx9/case_study_hybrid_ssm_sparseattention_lm_that/ | gpu_mamba | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpdjx9 | false | null | t3_1mpdjx9 | /r/LocalLLaMA/comments/1mpdjx9/case_study_hybrid_ssm_sparseattention_lm_that/ | false | false | self | 17 | null |
more than 131k context on a single GPU - llama.cpp | 3 | This is probably an edge case most people don't care about at the moment, but read on if you have an expensive GPU habit and want to use long context models:
If you have a single GPU with a lot of VRAM like a blackwell 6000 pro AND you are trying to use a model that supports longer than 131k context length AND you... | 2025-08-13T19:02:05 | https://github.com/ggml-org/llama.cpp/pull/15298 | createthiscom | github.com | 1970-01-01T00:00:00 | 0 | {} | 1mpdd6n | false | null | t3_1mpdd6n | /r/LocalLLaMA/comments/1mpdd6n/more_than_131k_context_on_a_single_gpu_llamacpp/ | false | false | default | 3 | null |
AI Character Creation Page with Greetings, Backstories & Prompt Recommendations | 1 | Hey r/LocalLLaMA!
I’ve been solo-developing my own AI chatbot platform, and I just finished a new character creation system I’m really excited about.
(Screenshot is just a draft image for future UI enhancement.)
Here’s what it can do right now:
Multi-language greetings - make your character say hello ... | 2025-08-13T18:59:50 | RIPT1D3_Z | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mpdate | false | null | t3_1mpdate | /r/LocalLLaMA/comments/1mpdate/ai_character_creation_page_with_greetings/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'qyqurhjt3uif1', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/qyqurhjt3uif1.png?width=108&crop=smart&auto=webp&s=27d50d624f37132ddf0ffcff264c3121ace3d1d2', 'width': 108}, {'height': 117, 'url': 'https://preview.redd.it/qyqurhjt3uif1.png?width=216&crop=smart&auto=web... | |
Claude being over agreeable! | 0 | Open Issue link: [https://github.com/anthropics/claude-code/issues/3382](https://github.com/anthropics/claude-code/issues/3382) | 2025-08-13T18:42:38 | Dramatic_Dentist_621 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mpcu1k | false | null | t3_1mpcu1k | /r/LocalLLaMA/comments/1mpcu1k/claude_being_over_agreeable/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'cx3ovdeo0uif1', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/cx3ovdeo0uif1.png?width=108&crop=smart&auto=webp&s=12d99e7d583d6e972cd3d8c8cd5a90df0da447a0', 'width': 108}, {'height': 113, 'url': 'https://preview.redd.it/cx3ovdeo0uif1.png?width=216&crop=smart&auto=web... | |
Claude can be extremely agreeable! | 1 | Open Issue [https://github.com/anthropics/claude-code/issues/3382https://github.com/anthropics/claude-code/issues/3382https://github.com/anthropics/claude-code/issues/3382https://github.com/anthropics/claude-code/issues/3382](https://github.com/anthropics/claude-code/issues/3382https://github.com/anthropics/claude-code... | 2025-08-13T18:41:22 | Dramatic_Dentist_621 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mpcssy | false | null | t3_1mpcssy | /r/LocalLLaMA/comments/1mpcssy/claude_can_be_extremely_agreeable/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'xbwdzm4c0uif1', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/xbwdzm4c0uif1.png?width=108&crop=smart&auto=webp&s=1c544abfd1fe2a7fc3e2e970df91ae5c77fbdd6e', 'width': 108}, {'height': 113, 'url': 'https://preview.redd.it/xbwdzm4c0uif1.png?width=216&crop=smart&auto=web... | |
scandal, the good thing is that we use local, the gpt-5 model is not for everyone | 0 | Thanks to user chetaslua on X we know this:
ChatGPT does not provide a complete model for paying users, for Plus and Team they provide a model that can be said to be medium and the complete model is available in the Pro plan, you look
https://preview.redd.it/3uklgptcztif1.png?width=588&format=png&auto=webp&s=fb54896d... | 2025-08-13T18:34:45 | https://www.reddit.com/r/LocalLLaMA/comments/1mpcmdu/scandal_the_good_thing_is_that_we_use_local_the/ | Illustrious-Swim9663 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpcmdu | false | null | t3_1mpcmdu | /r/LocalLLaMA/comments/1mpcmdu/scandal_the_good_thing_is_that_we_use_local_the/ | false | false | 0 | null | |
self hosted llm chat interface and API | 9 | hopefully useful for some more people - [https://github.com/complexity-science-hub/llm-in-a-box-template/](https://github.com/complexity-science-hub/llm-in-a-box-template/) this is a template I am curating to make a local LLM experience easy; it consists of
A **flexible Chat UI** [OpenWebUI](https://docs.openwebui.... | 2025-08-13T18:24:25 | https://www.reddit.com/r/LocalLLaMA/comments/1mpccet/self_hosted_llm_chat_interface_and_api/ | geoheil | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpccet | false | null | t3_1mpccet | /r/LocalLLaMA/comments/1mpccet/self_hosted_llm_chat_interface_and_api/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'r_-7yLWHprFp3avkIsaFL46HsvD2lhGd1YRkcrtr3ps', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/r_-7yLWHprFp3avkIsaFL46HsvD2lhGd1YRkcrtr3ps.png?width=108&crop=smart&auto=webp&s=ddcf27aa9442b772386baced2a0fc20b436b8d1f', 'width': 108}, {'height': 108, 'url': 'h... |
Native tool calling with gpt-oss? | 5 | Has anyone actually managed to get native tool calling working with gpt-oss? Can’t seem to get it to play nice with llama.cpp or vLLM with either Open WebUI or LibreChat. Thanks in advance! | 2025-08-13T18:15:14 | Tyme4Trouble | /r/LocalLLaMA/comments/1mpc3f6/native_tool_calling_with_gptoss/
Fastest local websearch? | 1 | Hey gang, I was working on cutting some cords and I'm looking for the fastest web search LLM integration you've used. Thinking Jan might be the way to go, but I want to hear opinions. | 2025-08-13T18:05:44 | thebadslime | /r/LocalLLaMA/comments/1mpbu1o/fastest_local_websearch/
AMD Dominates Top 10 in Openbenchmark CPU llama.cpp | 12 | This is definitely worth seeing for people who want to spend money on hardware that is good for AI and general-purpose use. Unfortunately this test is not very popular and only small models were tested, but since it's open source I hope more and more people will contribute to it, adding more models and different ... (benchmark: https://openbenchmarking.org/test/pts/llama-cpp-2.1.1) | 2025-08-13T18:05:33 | yuri_rds | /r/LocalLLaMA/comments/1mpbtvf/amd_dominates_top_10_in_openbenchmark_cpu_llamacpp/
Don't really know why people are complaining about gpt-oss-120b being censored | 0 | (image post via i.redd.it, marked spoiler) | 2025-08-13T18:05:33 | bucolucas | /r/LocalLLaMA/comments/1mpbtv8/dont_really_know_why_people_are_complaining_about/
GPU questions for local llm | 1 | [removed] | 2025-08-13T17:55:06 | Big_black_click | /r/LocalLLaMA/comments/1mpbje6/gpu_questions_for_local_llm/
GPU for AI models | 1 | [removed] | 2025-08-13T17:46:57 | Big_black_click | /r/LocalLLaMA/comments/1mpbbei/gpu_for_ai_models/
Why is censorship a problem on local models? | 0 | So I'll be completely up front: I have zero technical knowledge about any of this beyond what I've read here. All I do is run LM Studio (on my ultrabook's integrated GPU!) and play with models for fun. I really don't understand why people get so worked up about censorship on local models. I doubt I've found some a... | 2025-08-13T17:45:23 | BewareOfHorses | /r/LocalLLaMA/comments/1mpb9q9/why_is_censorship_a_problem_on_local_models/
Advice needed: system only posts with up to 3 cards | 1 | Hi there! I could really use some of your expert opinions on this, as it's driving me crazy. I have some spare parts and cards lying around the house, and I've tried to put them together only to run into a baffling problem. Here's the setup: EVGA Supernova 1600 P+; ASRock Z690 (rebar enabled); Intel Core i7-1270... | 2025-08-13T17:37:42 | NetworkAuditor2 | /r/LocalLLaMA/comments/1mpb23b/advice_needed_system_only_posts_with_up_to_3_cards/
Dual GPU Setup for LLMs – Notes from a Newbie | 24 | Some learnings I made the hard way. These points might be obvious to some, but I wasn't fully aware of them before I built my LLM workstation. Hopefully this helps other newbies like me. **Context**: I was using my AMD RX 6800 mostly for LLM workloads and wanted more VRAM to test larger models. I built a PC to accomm... | 2025-08-13T17:36:06 | DrRamorey | /r/LocalLLaMA/comments/1mpb0mo/dual_gpu_setup_for_llms_notes_from_a_newbie/
Fully local Qwen 2.5 Omni realtime agent (sees + talks).... tested it by cooking dinner | 219 | First off, I love Qwen 2.5 Omni. Pushed it pretty hard to get a fully local AI agent running that can see (webcam) and talk back in real-time. Super impressed with how fast and versatile it is overall. I got a full local pipeline working and had it help me come up with dinner ideas: Input: Webcam feed processed fr... (video: https://v.redd.it/m9ttqovtmtif1) | 2025-08-13T17:34:17 | Weary-Wing-6806 | /r/LocalLLaMA/comments/1mpayu9/fully_local_qwen_25_omni_realtime_agent_sees/
GPU upgrade help | 3 | I am currently looking to upgrade the GPU in my server to better run a local LLM. I currently have a 2060 Super 8GB in the system and am looking at upgrading to an RX 6800 or RX 7600 XT; both used are around the 300 dollar mark. On paper the RX 6800 looks like a better deal, but I don't know if it is better for AI and workload... | 2025-08-13T17:28:59 | Senyin10 | /r/LocalLLaMA/comments/1mpatpt/gpu_upgrade_help/
Matrix-Game 2.0 — first open-source, real-time, long-sequence interactive world model. 25 FPS, minutes-long interaction | 144 | (link post) https://x.com/Skywork_ai/status/1955237399912648842?t=hsxnA2t2FyKxRsSRBCJ1kA&s=19 | 2025-08-13T17:10:32 | abdouhlili | /r/LocalLLaMA/comments/1mpabh1/matrixgame_20_first_opensource_realtime/
I am building a semantic file search engine using Qwen0.6b with recently released LangExtract tool! | 37 | Hey guys, I am a long time lurker and quite active here on LocalLlama. I am starting to build a tool called 'monkeSearch'. Essentially an open source local file search engine where you can type English sentences or even broken keywords related to your file (typing like a "monke", essentially) and you get the closest m... | 2025-08-13T17:00:27 | fuckAIbruhIhateCorps | /r/LocalLLaMA/comments/1mpa1ew/i_am_building_a_semantic_file_search_engine_using/
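The rows above follow the column schema listed at the top of this dump (title, score, selftext, created, author, permalink, ...). As a minimal sketch of how such rows could be filtered once loaded, here is a stdlib-only Python example; the handful of inlined rows are copied from the listing above, and the >= 10 score threshold is an arbitrary illustrative choice:

```python
# A few of the rows above, as dicts keyed by the dataset's column names.
rows = [
    {"title": "Fully local Qwen 2.5 Omni realtime agent (sees + talks)"
              ".... tested it by cooking dinner",
     "score": 219, "author": "Weary-Wing-6806",
     "permalink": "/r/LocalLLaMA/comments/1mpayu9/"
                  "fully_local_qwen_25_omni_realtime_agent_sees/"},
    {"title": "Dual GPU Setup for LLMs – Notes from a Newbie",
     "score": 24, "author": "DrRamorey",
     "permalink": "/r/LocalLLaMA/comments/1mpb0mo/"
                  "dual_gpu_setup_for_llms_notes_from_a_newbie/"},
    {"title": "GPU upgrade help", "score": 3, "author": "Senyin10",
     "permalink": "/r/LocalLLaMA/comments/1mpatpt/gpu_upgrade_help/"},
]

# Keep posts with at least 10 upvotes, highest-scored first.
popular = sorted((r for r in rows if r["score"] >= 10),
                 key=lambda r: r["score"], reverse=True)

for r in popular:
    print(f'{r["score"]:>4}  {r["title"]}')
```

The same filter works unchanged on the full table once each pipe-delimited row is parsed into a dict with these keys.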