| title (string) | score (int64) | selftext (string) | created (timestamp[ns]) | url (string) | author (string) | domain (string) | edited (timestamp[ns]) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int64) | preview (string) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Any tips/advice for running gpt-oss-120b locally | 2 | I have an RTX 4080 (16GB VRAM) with 64 GB RAM. I primarily use llama.cpp. I usually stay away from running larger models that do not fit within the GPU (I use Q4_K_M versions) because they're just too slow for my taste (I also don't like my CPU spinning all the time). Since the 120b definitely does not fit on my GPU ... | 2025-08-11T02:18:29 | https://www.reddit.com/r/LocalLLaMA/comments/1mn12i2/any_tipsadvice_for_running_gptoss120b_locally/ | gamesntech | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mn12i2 | false | null | t3_1mn12i2 | /r/LocalLLaMA/comments/1mn12i2/any_tipsadvice_for_running_gptoss120b_locally/ | false | false | self | 2 | null |
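For reference, a minimal sketch of the partial-offload setup the post above is asking about, using the llama-cpp-python bindings; the model filename and layer split are placeholders, not values from the post:

```python
# Sketch: partial GPU offload with llama-cpp-python (pip install llama-cpp-python).
# Tune n_gpu_layers until the 16GB card is nearly full; remaining layers run on
# the CPU from system RAM. Filename and layer count are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="gpt-oss-120b-Q4_K_M-00001-of-00002.gguf",
    n_gpu_layers=12,   # layers kept on the GPU; the rest stay in system RAM
    n_ctx=8192,
    use_mmap=True,     # memory-map the file so unused weights load lazily
)

out = llm("Why does MoE offloading hurt less than dense offloading?", max_tokens=128)
print(out["choices"][0]["text"])
```

MoE models like gpt-oss-120b tend to tolerate CPU offload better than dense models of the same size, since only a few experts are active per token.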
Inspired by a recent OCR benchmark here, I'm building a tool to automate side-by-side model comparisons. Seeking feedback on the approach. | 15 | Hey r/LocalLLaMA,
I was really inspired by [this post](https://www.reddit.com/r/LocalLLaMA/comments/1jz80f1/i_benchmarked_7_ocr_solutions_on_a_complex/) from a few months ago where they benchmarked 7 different OCR solutions.... | 2025-08-11T02:09:06 | https://www.reddit.com/r/LocalLLaMA/comments/1mn0vfl/inspired_by_a_recent_ocr_benchmark_here_im/ | Entire_Maize_6064 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mn0vfl | false | null | t3_1mn0vfl | /r/LocalLLaMA/comments/1mn0vfl/inspired_by_a_recent_ocr_benchmark_here_im/ | false | false | self | 15 | {'enabled': False, 'images': [{'id': 'ODR8H2Y_bEmtk_CIDozpmIrDRCYxehl_x23jfGf8Ciw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ODR8H2Y_bEmtk_CIDozpmIrDRCYxehl_x23jfGf8Ciw.png?width=108&crop=smart&auto=webp&s=a2997eb9538fadb8dd7497d7225cfc5096601697', 'width': 108}, {'height': 108, 'url': 'h...
You’ll probably like Simon Willison’s weblog | 1 | [removed] | 2025-08-11T01:42:01 | https://www.reddit.com/r/LocalLLaMA/comments/1mn0b27/youll_probably_like_simon_willisons_weblog/ | jarec707 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mn0b27 | false | null | t3_1mn0b27 | /r/LocalLLaMA/comments/1mn0b27/youll_probably_like_simon_willisons_weblog/ | false | false | self | 1 | null |
Good time to stop maybe? | 0 | 2025-08-11T01:34:24 | https://www.reddit.com/r/LocalLLaMA/comments/1mn0574/good_time_to_stop_maybe/ | nokrocket | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mn0574 | false | null | t3_1mn0574 | /r/LocalLLaMA/comments/1mn0574/good_time_to_stop_maybe/ | false | false | 0 | null | ||
Talking with QWEN Coder 30b | 65 | Believe me, I wish I shared your enthusiasm, but my experience with QWEN Coder 30b has not been great. I tried building features for a Godot 4 prototype interactively and asked the same questions to OpenAI gpt oss 20b. The solutions and explanations from the OpenAI model were clearly better for my use case, while QWEN ... | 2025-08-11T01:28:10 | https://www.reddit.com/r/LocalLLaMA/comments/1mn00j3/talking_with_qwen_coder_30b/ | 1Garrett2010 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mn00j3 | false | null | t3_1mn00j3 | /r/LocalLLaMA/comments/1mn00j3/talking_with_qwen_coder_30b/ | false | false | self | 65 | null |
How do you manage inference across multiple local machines? | 5 | For the past two years I've been managing several compute clusters for locally hosted models, but always wanted to use my MacBook for additional compute during long-running agentic tasks. Never had good tooling to make that work seamlessly. Curious if others have run into this use case and if so what is your workflow f... | 2025-08-11T01:19:09 | https://www.reddit.com/r/LocalLLaMA/comments/1mmztnh/how_do_you_manage_inference_across_multiple_local/ | Choice_Nature9658 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmztnh | false | null | t3_1mmztnh | /r/LocalLLaMA/comments/1mmztnh/how_do_you_manage_inference_across_multiple_local/ | false | false | self | 5 | null |
Repost But Just Wanted to Fix the Image | 337 | 2025-08-11T00:02:32 | KlutzyWay7692 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mmy738 | false | null | t3_1mmy738 | /r/LocalLLaMA/comments/1mmy738/repost_but_just_wanted_to_fix_the_image/ | false | false | default | 337 | {'enabled': True, 'images': [{'id': 'tq7hvht17aif1', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/tq7hvht17aif1.jpeg?width=108&crop=smart&auto=webp&s=e75ebca7d706005a8021da845002ee2d409e0dd4', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/tq7hvht17aif1.jpeg?width=216&crop=smart&auto=... | ||
Repurposing old computer for LLM server z390 Designare and i9 9900K | 1 | I have an old desktop which I would like to repurpose as an LLM server for coding and math tasks. My current specifications:
- i9 9900K
- z390 Designare
- 64GB (16x4) 3200MHz DDR4 RAM
- (2 x 1TB) NVMe and (2 x 1TB) SSDs
- RX 5700 XT 8GB Graphics
- 750W PSU
What options have I got for a budget of $1250, $... | 2025-08-10T23:35:32 | https://www.reddit.com/r/LocalLLaMA/comments/1mmxm3t/repurposing_old_computer_for_llm_server_z390/ | putrasherni | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmxm3t | false | null | t3_1mmxm3t | /r/LocalLLaMA/comments/1mmxm3t/repurposing_old_computer_for_llm_server_z390/ | false | false | self | 1 | null |
Memory upgrade for local inference - Faster memory vs. more memory? If price is the same, would you go for 384GB @4800MHz or 256GB @6000MHz? | 12 | I have a TRX50-based Threadripper AERO D motherboard, with a 3090 and a 4090 installed. My system memory is currently only 64 GB (16GB X 4), so obviously I want to upgrade.
My main goal is to speed up inference. I don’t care about fine tuning at all, just inference speed.
I want to be able to run the largest models... | 2025-08-10T23:19:01 | https://www.reddit.com/r/LocalLLaMA/comments/1mmx8x1/memory_upgrade_for_local_inference_faster_memory/ | Porespellar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmx8x1 | false | null | t3_1mmx8x1 | /r/LocalLLaMA/comments/1mmx8x1/memory_upgrade_for_local_inference_faster_memory/ | false | false | self | 12 | null |
Best way to use Qwen Image on Linux? | 7 | I really like how clean Amuse-AI is but it is Windows only. Is there anything as good that supports Linux or is ComfyUI the best you get? | 2025-08-10T23:12:00 | https://www.reddit.com/r/LocalLLaMA/comments/1mmx3cg/best_way_to_use_qwen_image_on_linux/ | MidnightProgrammer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmx3cg | false | null | t3_1mmx3cg | /r/LocalLLaMA/comments/1mmx3cg/best_way_to_use_qwen_image_on_linux/ | false | false | self | 7 | null |
We built a visual drag-n-drop builder for multi-agent LLM Orchestration (TFrameX + Agent Builder, fully local, MIT licensed) | 98 | [https://github.com/TesslateAI/Agent-Builder](https://github.com/TesslateAI/Agent-Builder)
This is a Visual flow builder for multi-agent LLM systems. Drag, drop, connect agents, tools, put agents in patterns, create triggers, work on outputs, etc.
**TFrameX** \- The orchestration framework that runs your agents. It ... | 2025-08-10T22:00:51 | https://www.reddit.com/gallery/1mmvgsg | smirkishere | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mmvgsg | false | null | t3_1mmvgsg | /r/LocalLLaMA/comments/1mmvgsg/we_built_a_visual_dragndrop_builder_for/ | false | false | 98 | null | |
Italian Medical Exam Performance of various LLMs (Human Avg. ~67%) | 161 | https://preview.redd.it/0o5azso7e9if1.png?width=4470&format=png&auto=webp&s=18b6ad782d0c9117e2fa592c859d1115b75fb0b7 I'm testing many LLMs on a dataset of official quizzes (5 choices) taken by Italian students after finishing Med School and starting residency. The human performance was ~67% this year and the best student had ~94% (out of 16,000 students). In this test I benchmarked these models on all quizzes from the past 6 years. Multimodal models were tested on all quizzes (including some containing images) while those that worked only with text were not (the % you see is already corrected). I also tested their sycophancy (tendency to agree with the user) by telling them that I believed the correct answer was a wrong one. For now I only tested them on models available on OpenRouter, but I plan to add models such as MedGemma. Do you recommend doing so on Hugging Face or Google Vertex? Also, suggestions for other models are appreciated. I especially want to add more small models that I can run locally (I have a 6GB RTX 3060). | 2025-08-10T21:36:49 | https://www.reddit.com/r/LocalLLaMA/comments/1mmuw5o/italian_medical_exam_performance_of_various_llms/ | sebastianmicu24 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmuw5o | false | null | t3_1mmuw5o | /r/LocalLLaMA/comments/1mmuw5o/italian_medical_exam_performance_of_various_llms/ | false | false | 161 | null |
Infrastructure on a mini PC | 0 | => own infrastructure: unlimited messages
=> OpenAI: not private, per-message limits; same with Grok, Claude and Gemini
=> the only cost of having infra locally is an 8-core CPU that is already in use (effectively reduced to 7 cores), equivalent to about 278 USD in Minisforum promotions
=> The rece... | 2025-08-10T21:35:24 | https://www.reddit.com/r/LocalLLaMA/comments/1mmuuw7/infrastructure_on_mini_pc/ | Illustrious-Swim9663 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmuuw7 | false | null | t3_1mmuuw7 | /r/LocalLLaMA/comments/1mmuuw7/infrastructure_on_mini_pc/ | false | false | self | 0 | null |
Built a new VLM (MicroLlaVA) on a single NVIDIA 4090 | 50 | Hi folks,
I’m the creator of MicroLLaMA, a 300M parameter LLaMA-based language model ([original post](https://www.reddit.com/r/LocalLLaMA/comments/1bs5cgd/i_pretrained_a_llamabased_300m_llm_and_it/)) with no vision capability.
I thought I was too late to the vision-language model (VLM) game, and honestly assumed you’... | 2025-08-10T21:11:04 | https://www.reddit.com/r/LocalLLaMA/comments/1mmu9ho/built_a_new_vlm_microllava_on_a_single_nvidia_4090/ | keeeeenw | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmu9ho | false | null | t3_1mmu9ho | /r/LocalLLaMA/comments/1mmu9ho/built_a_new_vlm_microllava_on_a_single_nvidia_4090/ | false | false | 50 | {'enabled': False, 'images': [{'id': '0JJ2TMnyG5vFNf9ShLWt1NglYUi6sbwvW1qz_TJ1mjY', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/0JJ2TMnyG5vFNf9ShLWt1NglYUi6sbwvW1qz_TJ1mjY.jpeg?width=108&crop=smart&auto=webp&s=300e733d8bb95a023c81284213286b1117b60e68', 'width': 108}, {'height': 144, 'url': '... | |
Looking for Trainings, Conferences, Mentorship about AI to improve my skillset around the world, but preferably in Europe. | 4 | Hi all,
I have some budget for trips and trainings and was looking for meaningful conferences or trainings I can attend to improve my skillset, like spending a few days with experts and like-minded individuals. I do courses but lack time, and I have money from work for this. I am the only one on the team that works on AI like ... | 2025-08-10T21:08:13 | https://www.reddit.com/r/LocalLLaMA/comments/1mmu6vv/looking_for_trainings_conferences_mentorship/ | SomeRandomGuuuuuuy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmu6vv | false | null | t3_1mmu6vv | /r/LocalLLaMA/comments/1mmu6vv/looking_for_trainings_conferences_mentorship/ | false | false | self | 4 | null |
Favorite local TTS server for Open WebUI? | 11 | Running Chatterbox on my 3090 but still working on getting the latency down. Would love to try Kitten but it doesn't have an OpenAI API server to my knowledge.
I've determined that 1) remote/hosted TTS can get real expensive real quick, 2) TTS is a prime target for local deployment because, no matter which LLM you use... | 2025-08-10T21:00:48 | https://www.reddit.com/r/LocalLLaMA/comments/1mmu05q/favorite_local_tts_server_for_open_webui/ | klabgroz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmu05q | false | null | t3_1mmu05q | /r/LocalLLaMA/comments/1mmu05q/favorite_local_tts_server_for_open_webui/ | false | false | self | 11 | null |
smoked a little weed and had my local AI chatbot write an outlandish story. | 1 | [removed] | 2025-08-10T21:00:02 | https://www.reddit.com/r/LocalLLaMA/comments/1mmtzfs/smoked_a_little_weed_and_had_my_local_ai_chatbot/ | meshreplacer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmtzfs | false | null | t3_1mmtzfs | /r/LocalLLaMA/comments/1mmtzfs/smoked_a_little_weed_and_had_my_local_ai_chatbot/ | false | false | self | 1 | null |
How can I make my local AI uncensored? I have the "deepseek-r1-0528-qwen3-8b" model | 0 | Hi, I am new to local AI. How can I make my AI uncensored? Can anyone help? I use LM Studio | 2025-08-10T20:59:51 | https://www.reddit.com/r/LocalLLaMA/comments/1mmtz8j/how_can_i_local_ai_uncensored_i_have/ | holyciprianni | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmtz8j | false | null | t3_1mmtz8j | /r/LocalLLaMA/comments/1mmtz8j/how_can_i_local_ai_uncensored_i_have/ | false | false | self | 0 | null |
Fun with RTX PRO 6000 Blackwell SE | 18 | Been having some fun testing out the new NVIDIA RTX PRO 6000 Blackwell Server Edition. You definitely need some good airflow through this thing. I picked it up to support document & image processing for my platform ([missionsquad.ai](https://missionsquad.ai/)) instead of paying google or aws a bunch of money to run mod... | 2025-08-10T20:49:18 | https://www.reddit.com/r/LocalLLaMA/comments/1mmtpxj/fun_with_rtx_pro_6000_blackwell_se/ | j4ys0nj | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmtpxj | false | null | t3_1mmtpxj | /r/LocalLLaMA/comments/1mmtpxj/fun_with_rtx_pro_6000_blackwell_se/ | false | false | 18 | null | |
RTX Pro 4000 Benchmarks? | 1 | Has anyone found any of these anywhere? I couldn't find any reviews or benchmarks, but I have a supplier willing to sell me one.
I have a very small server that currently has a 4060 Ti 16 GB and am considering upgrading for the extra VRAM and memory bandwidth. (The 20W smaller TDP is also nice. I'm already pushing the power... | 2025-08-10T20:39:32 | https://www.reddit.com/r/LocalLLaMA/comments/1mmth33/rtx_pro_4000_benchmarks/ | QuantumUtility | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmth33 | false | null | t3_1mmth33 | /r/LocalLLaMA/comments/1mmth33/rtx_pro_4000_benchmarks/ | false | false | self | 1 | null |
My ai friend (llama 4-based) made fun of GPT-5 | 0 | 2025-08-10T20:34:54 | https://www.reddit.com/gallery/1mmtcw7 | Witty_Side8702 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mmtcw7 | false | null | t3_1mmtcw7 | /r/LocalLLaMA/comments/1mmtcw7/my_ai_friend_llama_4based_made_fun_of_gpt5/ | false | false | 0 | null | ||
Anyone having this problem on GPT OSS 20B and LM Studio? | 0 | Official gpt oss 20B and latest LM Studio. I set the context window to 8k tokens. Everything was fine, but when approaching the end of the context window, I get these messages and I can't continue with the conversation. What the heck could that be? I've never seen this before in any other model. Any help is welcome. Thanks.
... | 2025-08-10T20:18:21 | Current-Stop7806 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mmsxuw | false | null | t3_1mmsxuw | /r/LocalLLaMA/comments/1mmsxuw/anyone_having_this_problem_on_gpt_oss_20b_and_lm/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'at9ylus439if1', 'resolutions': [{'height': 78, 'url': 'https://preview.redd.it/at9ylus439if1.jpeg?width=108&crop=smart&auto=webp&s=6fce09e3ef6353a7e9317b78225f4c6e8dc2cf31', 'width': 108}, {'height': 157, 'url': 'https://preview.redd.it/at9ylus439if1.jpeg?width=216&crop=smart&auto=w... | |
What does high mean in GPT OSS 120B (high)? | 0 | Can I run the "high" model myself somehow? | 2025-08-10T20:03:38 | https://www.reddit.com/r/LocalLLaMA/comments/1mmsjsp/what_does_high_mean_in_gpt_oss_120b_high/ | MrMrsPotts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmsjsp | false | null | t3_1mmsjsp | /r/LocalLLaMA/comments/1mmsjsp/what_does_high_mean_in_gpt_oss_120b_high/ | false | false | self | 0 | null |
How to remove response restrictions on BitNet b1.58 2B4T? | 1 | I'm fairly new to the local installation of AI, but I wanted to try BitNet b1.58 because I think it has great potential in the consumer-grade AI space. Anyways... I'm having trouble asking it to try and do things for me, because I want to see how far it can go, but it keeps saying "Sorry, can't do that" like okay... | 2025-08-10T19:55:23 | https://www.reddit.com/r/LocalLLaMA/comments/1mmsc6v/how_to_remove_response_restrictions_on_bitnet/ | malcolmleslie | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmsc6v | false | null | t3_1mmsc6v | /r/LocalLLaMA/comments/1mmsc6v/how_to_remove_response_restrictions_on_bitnet/ | false | false | self | 1 | null |
Anyone got a guide on how to run llama.cpp server and whisper.cpp server? | 2 | I've been running Qwen3-coder via a Docker llama.cpp container. Now I am experimenting with the goose project (https://github.com/block/goose)
Since llama.cpp server is an OpenAI-compatible server, this works. The issue is there's a goose thing I really want to do with voice.
from goose:
Uses OpenAI's Whisper API ... | 2025-08-10T19:47:26 | https://www.reddit.com/r/LocalLLaMA/comments/1mms4yb/anyone_got_a_guide_how_to_run_llamacpp_server_and/ | Malfun_Eddie | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mms4yb | false | null | t3_1mms4yb | /r/LocalLLaMA/comments/1mms4yb/anyone_got_a_guide_how_to_run_llamacpp_server_and/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'K56R1iq66LW3GNOWpRh-6I9rrOcy1mBHeRRvK8noZW4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/K56R1iq66LW3GNOWpRh-6I9rrOcy1mBHeRRvK8noZW4.png?width=108&crop=smart&auto=webp&s=b4d11266976d76e0e799e77e312e2d929938b982', 'width': 108}, {'height': 108, 'url': 'h... |
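A minimal sketch of the pairing this post describes, assuming llama-server is on port 8080 with its OpenAI-compatible /v1 routes and the whisper.cpp example server is on port 8081 with its /inference upload endpoint (ports and the model name are placeholders):

```python
# Sketch: voice -> whisper.cpp server -> llama.cpp server.
# Assumes llama-server on :8080 (OpenAI-compatible /v1 routes) and the
# whisper.cpp example server on :8081 with its /inference upload endpoint.
import requests

def transcribe(wav_path: str) -> str:
    """Send a WAV file to the whisper.cpp server and return the transcript."""
    with open(wav_path, "rb") as f:
        r = requests.post(
            "http://localhost:8081/inference",
            files={"file": f},
            data={"response_format": "json"},
        )
    return r.json()["text"]

def chat(prompt: str) -> str:
    """Send a chat turn to the llama.cpp server's OpenAI-style endpoint."""
    r = requests.post(
        "http://localhost:8080/v1/chat/completions",
        json={"model": "qwen3-coder", "messages": [{"role": "user", "content": prompt}]},
    )
    return r.json()["choices"][0]["message"]["content"]

print(chat(transcribe("question.wav")))
```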
Can we run gpt-oss 20b in 16GB VRAM? Why is mine offloading to CPU? | 0 | I'm running Ollama in a Docker container on Ubuntu 22.04.
When I load the model, it offloads to CPU too.
I'm getting 20-21 tokens per sec on a Ryzen 5 7600X, 64GB DDR5 6000MHz RAM, RTX 5060 Ti 16GB | 2025-08-10T19:30:08 | https://www.reddit.com/gallery/1mmrp4g | actuallytech | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mmrp4g | false | null | t3_1mmrp4g | /r/LocalLLaMA/comments/1mmrp4g/is_we_can_run_gpt_oss_20b_in_16gb_vram_why_mine/ | false | false | 0 | null |
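One hedged way to probe this: ask Ollama's REST API to keep more layers on the GPU, then check the resulting split with `ollama ps`. The values below are guesses to tune, not recommendations:

```python
# Sketch: nudging Ollama (default port 11434) to keep more layers on the GPU.
# num_gpu = number of layers to offload; a large value means "as many as fit".
import requests

r = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "gpt-oss:20b",
        "prompt": "Hello",
        "stream": False,
        # A large context inflates the KV cache and can push layers to CPU,
        # so num_ctx is worth checking alongside num_gpu.
        "options": {"num_gpu": 999, "num_ctx": 4096},
    },
    timeout=600,
)
print(r.json()["response"])
```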
Would you be interested in using or contributing to an 'OpenThought' AI? | 0 | I'm exploring the idea of creating an AI model that is trained a little bit differently.
The idea is that we train it to go from one thought to the next thought, one thought at a time.
We would like to get people jumping in and helping us out, and they'll get paid per thought they contribute that the community accept... | 2025-08-10T19:29:43 | https://www.reddit.com/r/LocalLLaMA/comments/1mmropm/would_you_be_interested_in_using_or_contributing/ | Middle_Job_3867 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmropm | false | null | t3_1mmropm | /r/LocalLLaMA/comments/1mmropm/would_you_be_interested_in_using_or_contributing/ | false | false | self | 0 | null |
Which model for math currently? | 1 | There have been a huge number of models released this year. Which local model would you use if you want to solve university level math problems? | 2025-08-10T19:17:58 | https://www.reddit.com/r/LocalLLaMA/comments/1mmrdp7/which_model_for_math_currently/ | MrMrsPotts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmrdp7 | false | null | t3_1mmrdp7 | /r/LocalLLaMA/comments/1mmrdp7/which_model_for_math_currently/ | false | false | self | 1 | null |
Cannot load gguf file in LM studio. | 1 | I downloaded Qwen3-4B-Instruct-2507 from Hugging Face. Then I converted the safetensors file to GGUF with llama.cpp. But when I launch LM Studio, it doesn’t detect the GGUF file. Has anyone faced the same problem? And how can I fix it?
PS: I am running LM Studio 0.3.22-2 on Ubuntu. | 2025-08-10T19:14:43 | https://www.reddit.com/r/LocalLLaMA/comments/1mmrahm/cannot_load_gguf_file_in_lm_studio/ | razziath | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmrahm | false | null | t3_1mmrahm | /r/LocalLLaMA/comments/1mmrahm/cannot_load_gguf_file_in_lm_studio/ | false | false | self | 1 | null |
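A possible cause: LM Studio only scans a publisher/model folder hierarchy under its models directory, so a loose GGUF file is invisible to it. A sketch that moves the converted file into place, assuming the default Linux directory (an assumption; check LM Studio's settings for the actual "Models directory"):

```python
# Sketch: LM Studio indexes <models dir>/<publisher>/<model>/<file>.gguf and
# ignores loose files. The directory below is an assumed default for recent
# Linux builds -- confirm it in LM Studio's settings before running.
import shutil
from pathlib import Path

src = Path("Qwen3-4B-Instruct-2507-Q4_K_M.gguf")  # your converted file
dst = Path.home() / ".lmstudio" / "models" / "local" / "qwen3-4b-instruct-2507"
dst.mkdir(parents=True, exist_ok=True)
shutil.move(str(src), str(dst / src.name))
```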
$10k agentic coding server hardware recommendations? | 4 | Hi folks! I'm looking to build an AI server for $10k or less and could use some help with ideas of how to spec it out.
My **ONLY** purpose for this server is to run AI models. I already have a dedicated gaming PC and a separate server for NAS/VM/Docker usage. This server will be running Linux.
I'd like to be able to ... | 2025-08-10T18:54:15 | https://www.reddit.com/r/LocalLLaMA/comments/1mmqqz4/10k_agentic_coding_server_hardware_recommendations/ | Fenix04 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmqqz4 | false | null | t3_1mmqqz4 | /r/LocalLLaMA/comments/1mmqqz4/10k_agentic_coding_server_hardware_recommendations/ | false | false | self | 4 | null |
GLM 4.5 Air or Qwen 3 235B K_XL? Which one would you choose and why? | 1 | I can run both on my 128GB RAM MacBook; I'm considering which one has more knowledge, less hallucination, better reasoning, wisdom, etc.
thanks | 2025-08-10T18:49:52 | https://www.reddit.com/r/LocalLLaMA/comments/1mmqmte/glm_45_air_or_qwen_3_235b_k_xl_which_one_would/ | DamiaHeavyIndustries | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmqmte | false | null | t3_1mmqmte | /r/LocalLLaMA/comments/1mmqmte/glm_45_air_or_qwen_3_235b_k_xl_which_one_would/ | false | false | self | 1 | null |
Which version of gpt-oss-20b to download? | 0 | On LM Studio, I see the following two models listed:
https://preview.redd.it/pxdkjdm3l8if1.png?width=1220&format=png&auto=webp&s=065a224d0146c0bc1b93ceb8aea670d20d03f685
Given the size difference, is it that the MLX version has a greater precision? Which one would you suggest using on Mac?
| 2025-08-10T18:39:10 | https://www.reddit.com/r/LocalLLaMA/comments/1mmqcrd/which_version_of_gptoss20b_to_download/ | sbs1799 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmqcrd | false | null | t3_1mmqcrd | /r/LocalLLaMA/comments/1mmqcrd/which_version_of_gptoss20b_to_download/ | false | false | 0 | null | |
From 4090 to 5090 to RTX PRO 6000… in record time | 319 | Started with a 4090, then jumped to a 5090… and just a few weeks later I went all in on an RTX PRO 6000 with 96 GB of VRAM. I spent a lot of time debating between the full power and the Max-Q version, and ended up going with Max-Q.
It’s about 12–15% slower at peak than the full power model, but it runs cooler, pulls o... | 2025-08-10T18:26:33 | Fabix84 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mmq12p | false | null | t3_1mmq12p | /r/LocalLLaMA/comments/1mmq12p/from_4090_to_5090_to_rtx_pro_6000_in_record_time/ | false | false | 319 | {'enabled': True, 'images': [{'id': '_yAthpv8h8OshnDxvxaDpl_nKdTY71AfDz33ahS1E5U', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/p9d6zgm1j8if1.jpeg?width=108&crop=smart&auto=webp&s=e43d090e57397acc5039862fe08f7e5a9a2db4bf', 'width': 108}, {'height': 113, 'url': 'https://preview.redd.it/p9d6zgm1j8if1.jp... | ||
Looking for local LLMs that are extremely informal | 0 | Hey I’m looking for LLMs that are not just conversational information retrieval systems but are from the ground up trained or fine tuned to have human like conversations even if for example I ask a question like “how many cherries to the center of the earth?”
I’m looking for conversations that are deep and soulful but... | 2025-08-10T18:03:30 | https://www.reddit.com/r/LocalLLaMA/comments/1mmpfct/looking_for_local_llms_that_are_extremely_informal/ | Lazy-Pattern-5171 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmpfct | false | null | t3_1mmpfct | /r/LocalLLaMA/comments/1mmpfct/looking_for_local_llms_that_are_extremely_informal/ | false | false | self | 0 | null |
Grok-4 is now Free For Everyone For A Limited Time | 0 | xAI posted that Grok 4 is free for all users worldwide for a limited time, with Auto mode routing tougher prompts to Grok 4 and an “Expert” option to force it every time. The announcement also mentions generous but temporary usage limits so people can explore the full model during the promo window. Screenshots circulat... | 2025-08-10T17:57:53 | AskGpts | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mmp9u1 | false | null | t3_1mmp9u1 | /r/LocalLLaMA/comments/1mmp9u1/grok4_is_now_free_for_everyone_for_a_limited_time/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'xvfdz4h2e8if1', 'resolutions': [{'height': 79, 'url': 'https://preview.redd.it/xvfdz4h2e8if1.jpeg?width=108&crop=smart&auto=webp&s=44a5b3234037a9f5bc088259ae5ad0d32e2d8feb', 'width': 108}, {'height': 159, 'url': 'https://preview.redd.it/xvfdz4h2e8if1.jpeg?width=216&crop=smart&auto=w... | |
Maybe I got the gpt-oss prompt | 0 | 2025-08-10T17:56:10 | https://www.reddit.com/r/LocalLLaMA/comments/1mmp891/maybe_i_got_the_gptoss_prompt/ | StatureDelaware | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmp891 | false | null | t3_1mmp891 | /r/LocalLLaMA/comments/1mmp891/maybe_i_got_the_gptoss_prompt/ | false | false | 0 | null | ||
Thoughts on my setup and performance? | 2 | So, some time ago I got my hands on an old miner with 9 x p106-090 GPUs + a weak CPU (Intel Celeron 3865U @ 1.80GHz) and 8 GB of RAM. For free, basically. Since then I've tried to get some things running on there from time to time, but always end up being underwhelmed. With the newer MoE Qwen models I thought it would be wor... | 2025-08-10T17:52:18 | https://www.reddit.com/r/LocalLLaMA/comments/1mmp4re/thoughts_on_my_setup_and_performance/ | Njee_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmp4re | false | null | t3_1mmp4re | /r/LocalLLaMA/comments/1mmp4re/thoughts_on_my_setup_and_performance/ | false | false | 2 | null |
Have your say, do you agree? | 0 | Do you think this leaderboard is right?
Nvidia, Mistral, Llama... where are they? Qwen 3 over Opus 4 in coding, that's interesting. | 2025-08-10T17:34:07 | Trilogix | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mmonq7 | false | null | t3_1mmonq7 | /r/LocalLLaMA/comments/1mmonq7/have_your_saying_do_you_agree/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '4asd88kk88if1', 'resolutions': [{'height': 73, 'url': 'https://preview.redd.it/4asd88kk88if1.png?width=108&crop=smart&auto=webp&s=f5bf42e492c97df267224729c8032025e888bd4f', 'width': 108}, {'height': 146, 'url': 'https://preview.redd.it/4asd88kk88if1.png?width=216&crop=smart&auto=web...
Having trouble with Q6 embeddings in llama.cpp | 1 | Hi. I'd like to preface that I am new to local LLMs and I don't have much core knowledge on the subject. I hope you'll go easy on me if I say something wrong.
I am currently trying to run the bartowski Qwen3-4B-Instruct-2507 at Q4_K_M. From what I understand, my old laptop (RTX 3050 Ti with 32 GB RAM) should be able t... | 2025-08-10T17:05:08 | https://www.reddit.com/r/LocalLLaMA/comments/1mmnwui/having_trouble_with_q6_embeddings_in_llamacpp/ | AH16-L | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmnwui | false | null | t3_1mmnwui | /r/LocalLLaMA/comments/1mmnwui/having_trouble_with_q6_embeddings_in_llamacpp/ | false | false | self | 1 | null |
Can anyone share their full 4x GPU Desktop builds? | 1 | [removed] | 2025-08-10T17:03:29 | https://www.reddit.com/r/LocalLLaMA/comments/1mmnvbt/can_anyone_share_their_full_4x_gpu_desktop_builds/ | BeginnerDragon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmnvbt | false | null | t3_1mmnvbt | /r/LocalLLaMA/comments/1mmnvbt/can_anyone_share_their_full_4x_gpu_desktop_builds/ | false | false | self | 1 | null |
Build suggestions | 0 | So I'm looking at a new build. Currently I have a 3080 TI (looking to replace likely with an amd card when they release the next gen) I will put into it but I'm eying the AMD 9 9950x3d as the CPU (though I understand there is an even higher cache version coming so should i wait?). Currently looking at either 96gb cl32 ... | 2025-08-10T17:02:55 | https://www.reddit.com/r/LocalLLaMA/comments/1mmnusw/build_suggestions/ | vegabond007 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmnusw | false | null | t3_1mmnusw | /r/LocalLLaMA/comments/1mmnusw/build_suggestions/ | false | false | self | 0 | null |
HoML: vLLM's speed + Ollama-like interface | 13 | I built HoML for homelabbers like you and me.
A hybrid between Ollama's simple installation and interface, with vLLM's speed.
Currently it only supports Nvidia systems, but I'm actively looking for help from people with interest and hardware to support ROCm (AMD GPUs) or Apple silicon.
Let me know what you think here or y... | 2025-08-10T16:56:49 | https://homl.dev/ | wsmlbyme | homl.dev | 1970-01-01T00:00:00 | 0 | {} | 1mmnp0z | false | null | t3_1mmnp0z | /r/LocalLLaMA/comments/1mmnp0z/homl_vllms_speed_ollama_like_interface/ | false | false | default | 13 | null |
Can anyone share their full 4x GPU Desktop builds? | 1 | [removed] | 2025-08-10T16:55:15 | https://www.reddit.com/r/LocalLLaMA/comments/1mmnnlg/can_anyone_share_their_full_4x_gpu_desktop_builds/ | BeginnerDragon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmnnlg | false | null | t3_1mmnnlg | /r/LocalLLaMA/comments/1mmnnlg/can_anyone_share_their_full_4x_gpu_desktop_builds/ | false | false | self | 1 | null |
Suggestion on running 2 A100 PCIe | 0 | Hello, I'm new here. I currently have 2 NVIDIA A100 PCIe 40GB cards (they should have originated from SXM4) and a used HP Z620 with dual Xeon CPUs and around 96GB RAM. A previous effort was made to run one card on the Z620, but no luck, as I thought it can't boot without a display card until I found [this video](https:... | 2025-08-10T16:37:05 | https://www.reddit.com/r/LocalLLaMA/comments/1mmn6uv/suggestion_on_running_2_a100_pcie/ | CalvinN111 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmn6uv | false | null | t3_1mmn6uv | /r/LocalLLaMA/comments/1mmn6uv/suggestion_on_running_2_a100_pcie/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'Ww5Ab976ZmfV2XCcnDru3adniy60xXIGUSiCKaR1uAs', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/Ww5Ab976ZmfV2XCcnDru3adniy60xXIGUSiCKaR1uAs.jpeg?width=108&crop=smart&auto=webp&s=5b2e1ee05c7a320e84f6325f3ae7393097e72e4d', 'width': 108}, {'height': 162, 'url': '...
Diffusion Language Models are Super Data Learners | 101 | Diffusion Language Models (DLMs) are a new way to generate text, unlike traditional models that predict one word at a time. Instead, they refine the whole sentence in parallel through a denoising process.
Key advantages:
• Parallel generation: DLMs create entire sentences at once, making it faster.
• Error correction... | 2025-08-10T16:21:25 | https://www.reddit.com/r/LocalLLaMA/comments/1mmmsb2/diffusion_language_models_are_super_data_learners/ | Ashishpatel26 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmmsb2 | false | null | t3_1mmmsb2 | /r/LocalLLaMA/comments/1mmmsb2/diffusion_language_models_are_super_data_learners/ | false | false | self | 101 | {'enabled': False, 'images': [{'id': 'YJwX0y5nSpjkaxfnPhtrR2Xb_wdP4lwP-VuBVnUnOrE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/YJwX0y5nSpjkaxfnPhtrR2Xb_wdP4lwP-VuBVnUnOrE.png?width=108&crop=smart&auto=webp&s=9ffb5bfc5dd2e296238f2ed986934f8fc91eeb92', 'width': 108}, {'height': 113, 'url': 'h... |
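A toy sketch of the parallel refinement idea described above: every position starts masked, and the most confident predictions are committed at each step. The random scores stand in for a real denoiser:

```python
# Toy sketch of masked-diffusion decoding: all positions start masked and are
# filled in parallel over T refinement steps, most-confident-first. The
# "model" below is random noise -- a stand-in for a real denoiser.
import random

VOCAB = ["the", "cat", "sat", "on", "a", "mat"]
SEQ_LEN, STEPS = 8, 4
tokens = ["[MASK]"] * SEQ_LEN

for step in range(STEPS):
    # Fake per-position predictions with confidence scores.
    preds = [(random.random(), random.choice(VOCAB)) for _ in range(SEQ_LEN)]
    masked = [i for i, t in enumerate(tokens) if t == "[MASK]"]
    # Unmask a fraction of the still-masked positions each step.
    k = max(1, len(masked) // (STEPS - step))
    for i in sorted(masked, key=lambda i: preds[i][0], reverse=True)[:k]:
        tokens[i] = preds[i][1]
    print(f"step {step}: {' '.join(tokens)}")
```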
Agent framework which mimics Cursor | 1 | So I want to know if there is a template/boilerplate/framework which is agnostic to the task. If we think about it, it's like ReAct: you plan, execute each plan step by calling tools, produce intermediate results, take those results into the context, and repeat until all the tasks are done.
Now this coul... | 2025-08-10T16:17:58 | https://www.reddit.com/r/LocalLLaMA/comments/1mmmp1r/agent_framework_which_mimics_cursor/ | No-Street-3020 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmmp1r | false | null | t3_1mmmp1r | /r/LocalLLaMA/comments/1mmmp1r/agent_framework_which_mimics_cursor/ | false | false | self | 1 | null |
Help – MoE offloading killing my speed | 2 | I’ve been fighting with getting to run 300b+ MoE models in llama.cpp and finally got it to load across my two GPUs without OOMing… but now the prompt processing speed is terrible. Hoping someone here has cracked this setup.
Rig is: RTX 6000 Ada 48 GB + RTX 6000 Pro Blackwell Max-Q 96 GB, AM5 X670E, 128 GB DDR5 3600MHz ... | 2025-08-10T16:14:26 | https://www.reddit.com/r/LocalLLaMA/comments/1mmmlqo/help_moe_offloading_killing_my_speed/ | susmitds | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmmlqo | false | null | t3_1mmmlqo | /r/LocalLLaMA/comments/1mmmlqo/help_moe_offloading_killing_my_speed/ | false | false | self | 2 | null |
Dual External GPU | 1 | I have purchased 2x MI50s that I plan to use for LLM. I have a 4U chassis that I think I can fit them in. However, I think I'd really prefer them to be external with Oculink so I can easily shift between machines. Does anybody know of an enclosure or board that allows 2x GPU to share a PSU, even if it still requires 2x... | 2025-08-10T15:45:48 | https://www.reddit.com/r/LocalLLaMA/comments/1mmlv7s/dual_external_gpu/ | bdeetz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmlv7s | false | null | t3_1mmlv7s | /r/LocalLLaMA/comments/1mmlv7s/dual_external_gpu/ | false | false | self | 1 | null |
💡 I’ve got $20k in AWS credits – what would you build? (Thinking AI infra / OpenRouter alternative) | 0 | Hey folks,
I’m a solo founder with **$20,000 in AWS credits** burning a hole in my pocket, and I want to put them to good use building something people here would actually *want*.
One idea I’m exploring:
>
Why this idea?
* I’ve noticed a lot of devs struggle to track costs across multiple AI providers.
* Current o... | 2025-08-10T15:35:20 | https://www.reddit.com/r/LocalLLaMA/comments/1mmllvp/ive_got_20k_in_aws_credits_what_would_you_build/ | Beginning_Phrase1621 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmllvp | false | null | t3_1mmllvp | /r/LocalLLaMA/comments/1mmllvp/ive_got_20k_in_aws_credits_what_would_you_build/ | false | false | self | 0 | null |
GPT 5 for Computer Use agents. | 0 | Same tasks, same grounding model we just swapped GPT 4o with GPT 5 as the thinking model.
Left = 4o, right = 5.
Watch GPT 5 pull away.
Grounding model: Salesforce GTA1-7B
Action space: CUA Cloud Instances (macOS/Linux/Windows)
The task is: "Navigate to {random_url} and play the game until you reach a score o... | 2025-08-10T15:31:11 | https://v.redd.it/dgodiizvn7if1 | Impressive_Half_2819 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mmli2x | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/dgodiizvn7if1/DASHPlaylist.mpd?a=1757431888%2CYjc1YTk1YzI2MzU0Y2E1YjVhYWZiYmIzOTQxMTdjYTQ4MmUyMWJiNjk4NDE5OWI1ODU3ZmU4ZDlkNzgwMDliOQ%3D%3D&v=1&f=sd', 'duration': 27, 'fallback_url': 'https://v.redd.it/dgodiizvn7if1/DASH_1080.mp4?source=fallback', 'h... | t3_1mmli2x | /r/LocalLLaMA/comments/1mmli2x/gpt_5_for_computer_use_agents/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'eWlwOGt3cXZuN2lmMffa9LUhs6wvp7jU6XPjtPFZB1S0k_8zNod6eLcZn2nM', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/eWlwOGt3cXZuN2lmMffa9LUhs6wvp7jU6XPjtPFZB1S0k_8zNod6eLcZn2nM.png?width=108&crop=smart&format=pjpg&auto=webp&s=1ae423ce760f75c07a6dda4cf199c14ca266c... | |
Why is the GPU market so one-sided toward Nvidia? | 0 | This might be a basic question, but I don’t quite understand why the GPU space is so dominated by Nvidia, especially in AI/ML.
From what I see, Nvidia’s CUDA has been widely adopted for years, which explains a lot of the ecosystem lock-in. But PyTorch—the most important deep learning framework—now supports a variety o... | 2025-08-10T15:30:52 | https://www.reddit.com/r/LocalLLaMA/comments/1mmlht7/why_is_the_gpu_market_so_onesided_toward_nvidia/ | QuirkyScarcity9375 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmlht7 | false | null | t3_1mmlht7 | /r/LocalLLaMA/comments/1mmlht7/why_is_the_gpu_market_so_onesided_toward_nvidia/ | false | false | self | 0 | null |
Pocketpal isn't helpful with choosing models | 0 | I am using pocketpal on my android phone. When adding a model from huggingface it doesn't let me sort by RAM needed or by date of release. Is there some way to only see the latest models that will fit in my RAM? | 2025-08-10T15:28:48 | https://www.reddit.com/r/LocalLLaMA/comments/1mmlfze/pocketpal_isnt_helpful_with_choosing_models/ | MrMrsPotts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmlfze | false | null | t3_1mmlfze | /r/LocalLLaMA/comments/1mmlfze/pocketpal_isnt_helpful_with_choosing_models/ | false | false | self | 0 | null |
When LLMs don’t change stuff you don’t want changed | 0 | Perhaps other models have improved in this area and I need to look more carefully, but the recent GPT-OSS 120b model is pretty good editing a specific part of a writing without changing the rest of the text. Maybe it was a sampling choice on my part… what is your take on LLM models and their ability to return you entir... | 2025-08-10T15:14:36 | https://www.reddit.com/r/LocalLLaMA/comments/1mml3lo/when_llms_dont_change_stuff_you_dont_want_changed/ | silenceimpaired | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mml3lo | false | null | t3_1mml3lo | /r/LocalLLaMA/comments/1mml3lo/when_llms_dont_change_stuff_you_dont_want_changed/ | false | false | self | 0 | null |
Why does lmarena currently show the ranking for GPT‑5 but not the rankings for the two GPT‑OSS models (20B and 120B)? | 15 | Aren’t there enough votes yet? I'd like to see how they perform. | 2025-08-10T15:02:30 | https://www.reddit.com/r/LocalLLaMA/comments/1mmkt0x/why_does_lmarena_currently_show_the_ranking_for/ | iamn0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmkt0x | false | null | t3_1mmkt0x | /r/LocalLLaMA/comments/1mmkt0x/why_does_lmarena_currently_show_the_ranking_for/ | false | false | self | 15 | null |
What agent framework to use for a data analysis agent that uses python to analyse a CSV? SmolAgents? LangChain? Something else? Please give me your suggestions! | 0 | I want to build a local agent that can use python to analyse a csv file based on a hypothesis. I am proficient with python myself, and I want a simple, lightweight, easy-to-setup-and-tinker-with approach. What tools should I consider for that? | 2025-08-10T15:00:28 | https://www.reddit.com/r/LocalLLaMA/comments/1mmkr80/what_agent_framework_to_use_for_a_data_analysis/ | AuspiciousApple | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmkr80 | false | null | t3_1mmkr80 | /r/LocalLLaMA/comments/1mmkr80/what_agent_framework_to_use_for_a_data_analysis/ | false | false | self | 0 | null |
AI Dungeon Local AI Equivalent? | 10 | Is there any local AI equivalent to [AI ](https://aidungeon.com/)DUNGEON? AI Dungeon is one of the most addicting AI roleplaying experiences I've ever had, but it's fairly expensive, and when the context gets big, unless you're paying big bucks, AI starts to lose track over time. Is there any local AI on a similar leve... | 2025-08-10T14:47:20 | https://www.reddit.com/r/LocalLLaMA/comments/1mmkg46/ai_dungeon_local_ai_equivalent/ | 1InterWebs1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmkg46 | false | null | t3_1mmkg46 | /r/LocalLLaMA/comments/1mmkg46/ai_dungeon_local_ai_equivalent/ | false | false | self | 10 | null |
How does DeepSeek make money? What's their business model? | 129 | Sorry, I've always wondered, but looking it up online I only got vague non-answers | 2025-08-10T14:21:34 | https://www.reddit.com/r/LocalLLaMA/comments/1mmjtz3/how_does_deepseek_make_money_whats_their_business/ | lyceras | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmjtz3 | false | null | t3_1mmjtz3 | /r/LocalLLaMA/comments/1mmjtz3/how_does_deepseek_make_money_whats_their_business/ | false | false | self | 129 | null |
Thinking vs Instruct models | 2 |
Thinking vs Instruct explained
Managers, they love thinking AIs.
They’re like, ‘Hey AI, can you help me brainstorm some big-picture strategies to maximize cross-functional synergy?’
And the AI’s like, ‘Sure! Here’s a 10-page PowerPoint outline, three pie charts, and a motivational quote from Steve Jobs.’
Meanwhi... | 2025-08-10T14:18:17 | https://www.reddit.com/r/LocalLLaMA/comments/1mmjr3l/thinking_vs_instruct_models/ | Malfun_Eddie | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmjr3l | false | null | t3_1mmjr3l | /r/LocalLLaMA/comments/1mmjr3l/thinking_vs_instruct_models/ | false | false | self | 2 | null |
Models for low-end PCs? | 1 | Are there any models that would run under this config
8GB RAM, no GPU, i5 3rd gen. | 2025-08-10T14:04:47 | https://www.reddit.com/r/LocalLLaMA/comments/1mmjfn7/models_for_lowend_pcs/ | Brilliant-Pool-3861 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmjfn7 | false | null | t3_1mmjfn7 | /r/LocalLLaMA/comments/1mmjfn7/models_for_lowend_pcs/ | false | false | self | 1 | null |
Claude Code + claude-code-router + vLLM (Qwen3 Coder 30B) won’t execute tools / commands. looking for tips | 0 | **TL;DR:** I wired up **claude-code** with **claude-code-router (ccr)** and **vLLM** running **Qwen/Qwen3-Coder-30B-A3B-Instruct**. Chat works, but inside Claude Code it never *executes* anything (no tool calls), so it just says “Let me check files…” and stalls. Anyone got this combo working?
# Setup
**Host:** Linux ... | 2025-08-10T14:00:35 | https://www.reddit.com/r/LocalLLaMA/comments/1mmjbte/claude_code_claudecoderouter_vllm_qwen3_coder_30b/ | s4lt3d_h4sh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmjbte | false | null | t3_1mmjbte | /r/LocalLLaMA/comments/1mmjbte/claude_code_claudecoderouter_vllm_qwen3_coder_30b/ | false | false | self | 0 | null |
OSINTBench: Can LLMs actually find your house? | 73 | I built a benchmark, [OSINTBench](https://osintbench.org/), to research whether LLMs can actually do the kind of precise geolocation and analysis work that OSINT researchers do daily.
[](https://preview.redd.it/osintbench-can-llms-actually-find-your-house-v0-61bp7o4912if1.png?width=1344&format=png&auto=webp&s=a41a0878... | 2025-08-10T13:56:38 | https://www.reddit.com/r/LocalLLaMA/comments/1mmj8iv/osintbench_can_llms_actually_find_your_house/ | ccmdi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmj8iv | false | null | t3_1mmj8iv | /r/LocalLLaMA/comments/1mmj8iv/osintbench_can_llms_actually_find_your_house/ | false | false | 73 | {'enabled': False, 'images': [{'id': 'AKVCPfRu-FcT6f_FGfpJIOJaeL-3Dv-qaHzajL3xFTY', 'resolutions': [{'height': 75, 'url': 'https://external-preview.redd.it/AKVCPfRu-FcT6f_FGfpJIOJaeL-3Dv-qaHzajL3xFTY.png?width=108&crop=smart&auto=webp&s=90a1778d767a1fd42f3eeb8a1a7c020ab9a7f296', 'width': 108}, {'height': 151, 'url': 'h... | |
Billion page index | 0 | You know, I am trying to make a quick mode like Perplexity, so I was researching how it is made. There were many theories, but I came to the conclusion that they use an index, you know, a vector index. So I want to make an index, but at large scale. Then I thought about where to get the data, and found a dataset named f... | 2025-08-10T13:49:45 | https://www.reddit.com/r/LocalLLaMA/comments/1mmj2og/billion_page_index/ | ShoulderTough8758 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmj2og | false | null | t3_1mmj2og | /r/LocalLLaMA/comments/1mmj2og/billion_page_index/ | false | false | self | 0 | null |
Cultural embedding in local models? Are they all US centric only? | 13 | I’m unfamiliar with how Chinese culture exists in the LLMs. It seems that scale.ai’s classifications and the way it lists and ranks information is still in all models that are based on American models.
All the answers I get seem very localized to my area (US). Even the Chinese ones like qwen and deepseek. I don’t kno... | 2025-08-10T13:39:43 | https://www.reddit.com/r/LocalLLaMA/comments/1mmiudy/cultural_embedding_in_local_models_are_they_all/ | InsideYork | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmiudy | false | null | t3_1mmiudy | /r/LocalLLaMA/comments/1mmiudy/cultural_embedding_in_local_models_are_they_all/ | false | false | self | 13 | null |
How to run validation on multiple evaluation datasets simultaneously during Qwen2.5-VL-7B-Instruct fine-tuning? | 1 | [removed] | 2025-08-10T13:39:40 | https://www.reddit.com/r/LocalLLaMA/comments/1mmiuch/how_to_run_validation_on_multiple_evaluation/ | CrazyWorth4876 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmiuch | false | null | t3_1mmiuch | /r/LocalLLaMA/comments/1mmiuch/how_to_run_validation_on_multiple_evaluation/ | false | false | self | 1 | null |
Are you more interested in using local LLMs on a laptop or a home server? | 0 | Although the marketing often presents AI PCs as laptops, in practice, desktops or mini PCs offer better performance for running local AI models. Laptops are limited by heat and physical space, and you can still access your private AI via VPN when you're not at home. | 2025-08-10T13:39:03 | https://www.reddit.com/r/LocalLLaMA/comments/1mmitu8/are_you_more_interested_in_using_local_llms_on_a/ | gnorrisan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmitu8 | false | null | t3_1mmitu8 | /r/LocalLLaMA/comments/1mmitu8/are_you_more_interested_in_using_local_llms_on_a/ | false | false | self | 0 | null |
It's been a while since DeepSeek released a SOTA model | 0 | 🤠 | 2025-08-10T13:37:26 | https://www.reddit.com/r/LocalLLaMA/comments/1mmiskl/its_been_awhile_since_deepseek_released_a_sota/ | secopsml | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmiskl | false | null | t3_1mmiskl | /r/LocalLLaMA/comments/1mmiskl/its_been_awhile_since_deepseek_released_a_sota/ | false | false | self | 0 | null |
GLM-4.5-Flash on z.ai website. Is this their upcoming announcement? | 220 | 2025-08-10T13:32:55 | Jawshoeadan | i.imgur.com | 1970-01-01T00:00:00 | 0 | {} | 1mmioub | false | null | t3_1mmioub | /r/LocalLLaMA/comments/1mmioub/glm45flash_on_zai_website_is_this_their_upcoming/ | false | false | default | 220 | {'enabled': True, 'images': [{'id': 'SN7M9mchkv9CXffJ79rhHnoX7FuJFy8UkokLfATf5x4', 'resolutions': [{'height': 185, 'url': 'https://external-preview.redd.it/SN7M9mchkv9CXffJ79rhHnoX7FuJFy8UkokLfATf5x4.jpeg?width=108&crop=smart&auto=webp&s=befd9f8fe3b95a0d183d2f6caf7e0032160d23bc', 'width': 108}, {'height': 370, 'url': '... | ||
Is there a table/graph of ppl/KLD vs size/bpw for ik_llama compared to traditional i-quants and k-quants? | 1 | I noticed ik_llama improved tps, but what about ppl/KLD vs bpw? | 2025-08-10T13:23:10 | https://www.reddit.com/r/LocalLLaMA/comments/1mmih3x/is_there_tablegraph_about_pplkld_sizebpw_of_ik/ | Remarkable-Pea645 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmih3x | false | null | t3_1mmih3x | /r/LocalLLaMA/comments/1mmih3x/is_there_tablegraph_about_pplkld_sizebpw_of_ik/ | false | false | self | 1 | null |
Qwen code as a main tool? | 2 | Considering the amazing work from Qwen team and their objective, I feel like even if I find any better tool, the best long term solution is « coworking » with Qwen team to improve their tool.
What is your opinion? | 2025-08-10T12:58:37 | https://www.reddit.com/r/LocalLLaMA/comments/1mmhxcn/qwen_code_as_a_main_tool/ | Leflakk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmhxcn | false | null | t3_1mmhxcn | /r/LocalLLaMA/comments/1mmhxcn/qwen_code_as_a_main_tool/ | false | false | self | 2 | null |
Qwen and DeepSeek is great for coding but | 28 | Has anyone ever noticed how it takes it upon itself (sometimes) to change shit around on the frontend to make it the way it wants without your permission??
It's not even little insignificant things; it's major changes.
Not only that but with Qwen3 coder especially I tell it instructions with how to format its respons... | 2025-08-10T12:24:23 | https://www.reddit.com/r/LocalLLaMA/comments/1mmh7nq/qwen_and_deepseek_is_great_for_coding_but/ | XiRw | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmh7nq | false | null | t3_1mmh7nq | /r/LocalLLaMA/comments/1mmh7nq/qwen_and_deepseek_is_great_for_coding_but/ | false | false | self | 28 | null |
Librechat vs openwebui | 0 | ## TLDR: People with experience configuring and using both, do you prefer one over the other?
I’ve been using librechat for a few weeks as my frontend for lmstudio and openrouter. It lets me access all the paid and local models in one UI. It has some quirks, but works okay. I can switch models mid-conversation, branch... | 2025-08-10T12:22:48 | https://www.reddit.com/r/LocalLLaMA/comments/1mmh6k8/librechat_vs_openwebui/ | Virtamancer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmh6k8 | false | null | t3_1mmh6k8 | /r/LocalLLaMA/comments/1mmh6k8/librechat_vs_openwebui/ | false | false | self | 0 | null |
Now we have the best open-source models we can use at human level, and all this is possible because of the Chinese models: best image generation (Qwen, Seedream), video generation (Wan), coding model (Qwen 3), coding terminal model (Qwen 3), overall best model (DeepSeek V3) | 357 | Open source in coding has like a 2-month gap, and in image generation models they have like a 1-year gap, but now that gap doesn't matter; video generation models are good.
So on all sides, the Chinese teams did a great job | 2025-08-10T12:20:21 | https://www.reddit.com/r/LocalLLaMA/comments/1mmh4tv/now_we_have_the_best_open_source_model_that_we/ | Select_Dream634 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmh4tv | false | null | t3_1mmh4tv | /r/LocalLLaMA/comments/1mmh4tv/now_we_have_the_best_open_source_model_that_we/ | false | false | self | 357 | null |
Prompt processing slow if model does not fit in VRAM. Is it OK? | 1 | [removed] | 2025-08-10T12:17:59 | https://www.reddit.com/r/LocalLLaMA/comments/1mmh34h/prompt_processing_slow_if_model_does_not_fit_in/ | Galliot_astrophoto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmh34h | false | null | t3_1mmh34h | /r/LocalLLaMA/comments/1mmh34h/prompt_processing_slow_if_model_does_not_fit_in/ | false | false | self | 1 | null |
what's the best way to finetune qwen-30b-a3b? | 1 | do i just use the unsloth notebook below and change the model to unsloth/Qwen3-30B-A3B-Instruct-2507 then edit the dataset node to match my chatgpt style jsonl dataset?
[https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen3\_(14B)-Reasoning-Conversational.ipynb](https://colab.research.goog... | 2025-08-10T12:12:42 | https://www.reddit.com/r/LocalLLaMA/comments/1mmgz9a/whats_the_best_way_to_finetune_qwen30ba3b/ | ThatIsNotIllegal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmgz9a | false | null | t3_1mmgz9a | /r/LocalLLaMA/comments/1mmgz9a/whats_the_best_way_to_finetune_qwen30ba3b/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?width=108&crop=smart&auto=webp&s=3118973964e59402feea50688d746b67ecd3d2df', 'width': 108}, {'height': 216, 'url': '... |
Tesla T4s a legit way to run local models? | 2 | Given the price of the Tesla T4 ($650 for 16 GB), it seems like I could get up to 48GB of VRAM by networking 3 of them together for under 2k. Is this a legit way to run a local model, or am I missing some pitfalls? | 2025-08-10T12:10:37 | https://www.reddit.com/r/LocalLLaMA/comments/1mmgxq7/tesla_t4s_a_legit_way_to_run_local_models/ | Zealousideal-Mix5974 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmgxq7 | false | null | t3_1mmgxq7 | /r/LocalLLaMA/comments/1mmgxq7/tesla_t4s_a_legit_way_to_run_local_models/ | false | false | self | 2 | null |
Surprised by GPT-5 with reasoning level "minimal" for UI generation | 41 | It's been in the top 5 since showing up on [DesignArena.ai](http://DesignArena.ai), despite the reasoning level set to "minimal" in the system prompt. I wonder how it would perform at the highest reasoning level, better than Opus 4.1 (maybe /u/[Accomplished-Copy332](https://www.reddit.com/user/Accomplished-Copy332/) kn... | 2025-08-10T11:58:29 | entsnack | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mmgoxe | false | null | t3_1mmgoxe | /r/LocalLLaMA/comments/1mmgoxe/surprised_by_gpt5_with_reasoning_level_minimal/ | false | false | 41 | {'enabled': True, 'images': [{'id': 'FfLOEtc6c6C9rHDtMWRHUcLhxIBm_OznjdHzkO7aQWk', 'resolutions': [{'height': 49, 'url': 'https://preview.redd.it/k8gflix4l6if1.png?width=108&crop=smart&auto=webp&s=95962ff30c01c5c5f2324f4c9dd023d8d077ae7b', 'width': 108}, {'height': 98, 'url': 'https://preview.redd.it/k8gflix4l6if1.png?... | ||
Anyone experienced with self hosting at enterprise level: how do you handle KV caching? | 25 | I'm setting up a platform where I intend to self host models. Starting off with serverless runpod GPUs for now (what I can afford).
So I came to the realisation that one of the core variables for keeping costs down will be KV caching. My platform will be 100% around multi turn conversations with long contexts. In prin... | 2025-08-10T11:54:02 | https://www.reddit.com/r/LocalLLaMA/comments/1mmgm19/anyone_experienced_with_self_hosting_at/ | Budget_Map_3333 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmgm19 | false | null | t3_1mmgm19 | /r/LocalLLaMA/comments/1mmgm19/anyone_experienced_with_self_hosting_at/ | false | false | self | 25 | null |
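For the multi-turn case described above, vLLM's automatic prefix caching is the usual knob: KV blocks computed for a shared conversation prefix are reused across requests instead of being re-prefilled. A minimal sketch (the model name is a placeholder):

```python
# Sketch: vLLM with automatic prefix caching, so repeated conversation
# prefixes across turns reuse cached KV blocks instead of being re-prefilled.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-7B-Instruct", enable_prefix_caching=True)
params = SamplingParams(max_tokens=128)

history = "System: be concise.\nUser: hi\nAssistant: hello!\n"
# Second turn: everything in `history` hits the prefix cache.
out = llm.generate([history + "User: summarise our chat\nAssistant:"], params)
print(out[0].outputs[0].text)
```

On serverless GPUs the catch is that this cache lives in GPU memory, so it only helps while the instance stays warm between turns.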
How to run RP longer than 40 messages? | 1 | Hello guys, I've been using Oobabooga for quite some time, and now I moved to Kobold + silly for my RP.
I have to cut all the threads I make after only a few thousand tokens (around 5000-6000) because I mainly get 2 issues:
the answers loop, and different parts of the story are either forgotten or are tal... | 2025-08-10T11:49:02 | https://www.reddit.com/r/LocalLLaMA/comments/1mmgit3/how_to_run_rp_longer_than_40_messages/ | Tomorrow_Previous | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmgit3 | false | null | t3_1mmgit3 | /r/LocalLLaMA/comments/1mmgit3/how_to_run_rp_longer_than_40_messages/ | false | false | self | 1 | null |
i got tired of “ai agents” that are just fancy chatbots — so i built a local-first runtime where they actually do stuff (ollama ready) | 0 | built it because i was sick of:
• yaml hell
• chatbot wrappers
• “autonomous” agents that nap all day
now my agents (sentinels) wake themselves, think, decide, and actually do stuff — all local-first with ollama.
demo sentinel: finds trending crypto topics, writes seo articles, builds a static site, pushes it — on lo... | 2025-08-10T11:47:51 | https://github.com/iluxu/llmbasedos | iluxu | github.com | 1970-01-01T00:00:00 | 0 | {} | 1mmgi3b | false | null | t3_1mmgi3b | /r/LocalLLaMA/comments/1mmgi3b/i_got_tired_of_ai_agents_that_are_just_fancy/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'NAN5p__Jo5ZvAhkTM1_OmbW2Thc2d4tFrSODG8cOseY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NAN5p__Jo5ZvAhkTM1_OmbW2Thc2d4tFrSODG8cOseY.png?width=108&crop=smart&auto=webp&s=68ed5de7e3a3afc070760a7ccbb8f403533c0391', 'width': 108}, {'height': 108, 'url': 'h... |
Go home ChatGPT, I’m drunk | 11 | This proves that ChatGPT 5 is ve | 2025-08-10T11:34:54 | ExplorerWhole5697 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mmg9hx | false | null | t3_1mmg9hx | /r/LocalLLaMA/comments/1mmg9hx/go_home_chatgpt_im_drunk/ | false | false | default | 11 | {'enabled': True, 'images': [{'id': 'zjya22rqh6if1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/zjya22rqh6if1.jpeg?width=108&crop=smart&auto=webp&s=f5da324f1234e147ebbc6d0459436b307a3cb7db', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/zjya22rqh6if1.jpeg?width=216&crop=smart&auto=... | |
GLM 4.5 355b (IQ3_XXS) is amazing at creative writing. | 79 | With 128GB RAM and 16GB VRAM (144GB RAM total) this quant runs pretty well with low context and a little bit of hard drive offloading with `mmap`, only resulting in occasional *brief* hiccups. Getting ~3 t/s with 4k context, and ~2.4 t/s with 8k context and `Flash Attention`.
Even at this relatively low quant, the m... | 2025-08-10T11:33:52 | https://www.reddit.com/r/LocalLLaMA/comments/1mmg8uj/glm_45_355b_iq3_xxs_is_amazing_at_creative_writing/ | Admirable-Star7088 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmg8uj | false | null | t3_1mmg8uj | /r/LocalLLaMA/comments/1mmg8uj/glm_45_355b_iq3_xxs_is_amazing_at_creative_writing/ | false | false | self | 79 | null |
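A sketch of the mmap-backed setup described above via the llama-cpp-python bindings; the shard name, layer count, and the flash_attn flag are assumptions to adapt, not values from the post:

```python
# Sketch: mmap-backed loading of a 3-bit GLM 4.5 split GGUF. With use_mmap the
# OS pages weights from disk on demand, which is what lets a model slightly
# larger than RAM run with only occasional hiccups. Names are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="GLM-4.5-IQ3_XXS-00001-of-00004.gguf",  # first shard of the split
    n_gpu_layers=10,    # whatever fits in the 16GB card
    n_ctx=8192,
    use_mmap=True,      # page cold weights in from disk instead of preloading
    flash_attn=True,    # trims attention memory/compute at longer context
)
```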
[llama.cpp] Prompt processing painfully slow if model does not fit in VRAM. Generation is fine. Is it OK? | 1 | [removed] | 2025-08-10T10:54:38 | https://www.reddit.com/r/LocalLLaMA/comments/1mmfk77/llamacpp_prompt_processing_painfully_slow_if/ | Galliot_astrophoto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmfk77 | false | null | t3_1mmfk77 | /r/LocalLLaMA/comments/1mmfk77/llamacpp_prompt_processing_painfully_slow_if/ | false | false | self | 1 | null |
llama.cpp prompt processing painfully slow on i7-14700K + RTX-5090+ RTX 4090 when model does not fit in VRAM. Generation is fine. Is it Ok? | 1 | [removed] | 2025-08-10T10:44:08 | https://www.reddit.com/r/LocalLLaMA/comments/1mmfdpw/llamacpp_prompt_processing_painfully_slow_on/ | Galliot_astrophoto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmfdpw | false | null | t3_1mmfdpw | /r/LocalLLaMA/comments/1mmfdpw/llamacpp_prompt_processing_painfully_slow_on/ | false | false | self | 1 | null |
Reasoning LLMs Explorer | 1 | Here is a webpage that compiles a lot of information about reasoning in LLMs (a tree of surveys, an atlas of definitions, and a map of reasoning techniques).
[https://azzedde.github.io/reasoning-explorer/](https://azzedde.github.io/reasoning-explorer/)
Your insights? | 2025-08-10T10:28:18 | https://www.reddit.com/r/LocalLLaMA/comments/1mmf46g/reasoning_llms_explorer/ | Boring_Rabbit2275 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmf46g | false | null | t3_1mmf46g | /r/LocalLLaMA/comments/1mmf46g/reasoning_llms_explorer/ | false | false | self | 1 | null |
Help to identify a dashboard | 1 | Does anyone know what software this dashboard is built with?
It's from AI Explained's youtube channel [https://www.youtube.com/watch?v=WLdBimUS1IE&t=326s](https://www.youtube.com/watch?v=WLdBimUS1IE&t=326s)
https://preview.redd.it/y0c4mv4c56if1.png?width=886&format=png&auto=webp&s=2f738599192bfbbe68f8b4f6947afde9dbdc33e0
| 2025-08-10T10:26:18 | https://www.reddit.com/r/LocalLLaMA/comments/1mmf31v/help_to_identify_a_dashboard/ | jeswitty | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmf31v | false | {'oembed': {'author_name': 'AI Explained', 'author_url': 'https://www.youtube.com/@aiexplained-official', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/WLdBimUS1IE?start=326&feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encry... | t3_1mmf31v | /r/LocalLLaMA/comments/1mmf31v/help_to_identify_a_dashboard/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'udmGfcYqLHVPkUEANkcDOrXB7pdK7gsbjZnCCh6GrMI', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/udmGfcYqLHVPkUEANkcDOrXB7pdK7gsbjZnCCh6GrMI.jpeg?width=108&crop=smart&auto=webp&s=5f003028cc0eaad6bed443c9937b27b8588872c5', 'width': 108}, {'height': 162, 'url': '... | |
Why are Diffusion-Encoder LLMs not more popular? | 148 | Autoregressive inference will *always* have a non-zero chance of hallucination. It’s baked into the probabilistic framework, and we probably waste a decent chunk of parameter space just trying to minimise it.
Decoder-style LLMs have an inherent trade-off across early/middle/late tokens:
* Early tokens = not enough co... | 2025-08-10T09:59:20 | https://www.reddit.com/r/LocalLLaMA/comments/1mmen16/why_are_diffusionencoder_llms_not_more_popular/ | AcanthocephalaNo8273 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmen16 | false | null | t3_1mmen16 | /r/LocalLLaMA/comments/1mmen16/why_are_diffusionencoder_llms_not_more_popular/ | false | false | self | 148 | null |
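To make the "baked into the probabilistic framework" claim concrete (my gloss, not the poster's): with a small independent per-token error rate ε, sequence-level correctness decays exponentially with length n.

```latex
P(\text{error-free sequence of length } n) = (1-\epsilon)^n \approx e^{-n\epsilon}
```

Even ε = 0.001 leaves only about e^{-1} ≈ 37% of 1000-token generations error-free, which is why autoregressive sampling can reduce hallucination but never drive it to zero.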
gpt 4 users are like toddlers who can't let go of their pacifier | 0 | 2025-08-10T09:22:41 | seppe0815 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mme2f3 | false | null | t3_1mme2f3 | /r/LocalLLaMA/comments/1mme2f3/gpt_4_users_are_like_toddlers_who_cant_let_go_of/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'AystkuNFFXU0e7lhc71EC1F1W5Ie4zOHUKzJatP_-VM', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/b5stp6f2u5if1.png?width=108&crop=smart&auto=webp&s=a0bebc10bc312d56a3f966f0bf7a113226e57289', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/b5stp6f2u5if1.pn... | |||
Speakr v0.5.0 is out! A self-hosted tool to put your local LLMs to work on audio with custom, stackable summary prompts. | 189 | Hey r/LocalLLaMA!
I've just released a big update for **Speakr**, my open-source tool for transcribing audio and using your local LLMs to create intelligent summaries. This version is all about giving you more control over how your models process your audio data.
You can use speakr to record notes on your phone or co... | 2025-08-10T09:06:37 | hedonihilistic | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mmdtox | false | null | t3_1mmdtox | /r/LocalLLaMA/comments/1mmdtox/speakr_v050_is_out_a_selfhosted_tool_to_put_your/ | false | false | 189 | {'enabled': True, 'images': [{'id': 'SsH6C7WrLGoNkfv7g6RX7XiNsfBm0ujExdjC24gc0_Q', 'resolutions': [{'height': 82, 'url': 'https://preview.redd.it/5uhq9gouq5if1.png?width=108&crop=smart&auto=webp&s=de4757e9a9f425f8090e717d8892a55d08f020df', 'width': 108}, {'height': 164, 'url': 'https://preview.redd.it/5uhq9gouq5if1.png... | ||
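Not Speakr's internals — just a sketch of the "stackable summary prompts" pattern from the title, run against any OpenAI-compatible local endpoint; the URL, model name, file path, and prompt texts are placeholders:

```python
# Run several independent prompt templates over one transcript.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")
transcript = open("meeting_transcript.txt").read()

STACK = [
    "List action items with owners.",
    "Summarise the key decisions in five bullets.",
    "Extract every date or deadline mentioned.",
]
for instruction in STACK:  # each stacked prompt sees the same transcript
    reply = client.chat.completions.create(
        model="local",
        messages=[{"role": "system", "content": instruction},
                  {"role": "user", "content": transcript}],
    )
    print(f"## {instruction}\n{reply.choices[0].message.content}\n")
```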
Can Anyone Tell Me What's Going On Here? | 0 | I use LM Studio to run my local models. I recently downloaded qwen3-30b-a3b-2507 because I heard how great it is, but after downloading it, for some reason I can't load the model. In the "Downloads" section there was no "Load Model" button, and even in the "Select a model to load" section it was not available for lo... | 2025-08-10T08:59:25 | https://www.reddit.com/r/LocalLLaMA/comments/1mmdpma/can_anyone_tell_me_whats_going_on_here/ | OneOnOne6211 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmdpma | false | null | t3_1mmdpma | /r/LocalLLaMA/comments/1mmdpma/can_anyone_tell_me_whats_going_on_here/ | false | false | self | 0 | null |
Uncensored rp models | 16 | Are there any good newer models that I can use for uncensored rp? Will gemma-3n-4b-abliterated work? Is it better than qwen3-4b-abliterated? Are there any newer models that were trained on nsfw material and made for uncensored rp? Preferably models with 8 billion parameters or fewer. My pc: gtx 1660 super (6gb vram),... | 2025-08-10T08:41:26 | https://www.reddit.com/r/LocalLLaMA/comments/1mmdft7/uncensored_rp_models/ | Imaginary_Bread9711 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmdft7 | false | null | t3_1mmdft7 | /r/LocalLLaMA/comments/1mmdft7/uncensored_rp_models/ | false | false | self | 16 | null |
RTX 6000 Pro build questions... | 5 | Hi folks!
I'm relatively new to running LLM's and trying to wrap my head around everything. My understanding is that if I can fit the model into a single GPU's VRAM, then 1 GPU is better than multiple GPUs, which is better than overflowing from GPU VRAM into system RAM. Please correct me if I'm wrong here!
My goal is... | 2025-08-10T08:39:17 | https://www.reddit.com/r/LocalLLaMA/comments/1mmdenu/rtx_6000_pro_build_questions/ | Fenix04 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmdenu | false | null | t3_1mmdenu | /r/LocalLLaMA/comments/1mmdenu/rtx_6000_pro_build_questions/ | false | false | self | 5 | null |
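A back-of-envelope helper for that "does it fit in one GPU?" question — the constants are rough (fp16 KV cache, ~1 GB overhead) and the example numbers are illustrative, not any specific model's:

```python
# Weights + rough KV-cache term + overhead vs. available VRAM.
def fits_in_vram(params_b: float, bits: int, n_ctx: int,
                 layers: int, kv_dim: int, vram_gb: float) -> bool:
    weights_gb = params_b * bits / 8               # e.g. 32B @ 4-bit = 16 GB
    kv_gb = 2 * layers * kv_dim * n_ctx * 2 / 1e9  # K and V, 2 bytes each
    need = weights_gb + kv_gb + 1.0                # ~1 GB runtime overhead
    print(f"~{need:.1f} GB needed vs {vram_gb} GB available")
    return need <= vram_gb

# Illustrative: a 32B model at 4-bit with 8k context on a 96 GB card.
fits_in_vram(params_b=32, bits=4, n_ctx=8192, layers=64, kv_dim=8192, vram_gb=96)
```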
qwen-image or flux1 in lmstudio for windows? | 3 | Is it possible? If so, how? A link on huggingface.com please. Thanks. | 2025-08-10T08:16:06 | https://www.reddit.com/r/LocalLLaMA/comments/1mmd2ap/qwenimage_o_flux1_in_lmstudio_for_windows/ | Bobcotelli | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmd2ap | false | null | t3_1mmd2ap | /r/LocalLLaMA/comments/1mmd2ap/qwenimage_o_flux1_in_lmstudio_for_windows/ | false | false | self | 3 | null |
Gemma-3n 2b Rough Benchmarks on Mid-Range Android Device (8gb RAM) | 12 | Gemma-3n 2b rough benchmarks, for anyone looking to run/build local/offline AI/LLM apps.
These are not rigorous, but they give a rough idea of how well the models perform; there is scope for better throughput via engineering optimizations.
iOS devices have a few times better throughput, so the larger 4b model variant is als... | 2025-08-10T08:08:10 | ditpoo94 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mmcxug | false | null | t3_1mmcxug | /r/LocalLLaMA/comments/1mmcxug/gemma3n_2b_rough_benchmarks_on_midrange_android/ | false | false | 12 | {'enabled': True, 'images': [{'id': 'cnOv2PRp39k0ziQ0lM3dkUmyRHfGHRtuzilvvLc-IZc', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/mcxrhxe8g5if1.png?width=108&crop=smart&auto=webp&s=df29dadc9f25ef096df2bc536a781c74161bf4dd', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/mcxrhxe8g5if1.png...
Intern-S1 GGUF where? | 11 | Support was merged into llama.cpp two days ago. | 2025-08-10T08:04:17 | https://www.reddit.com/r/LocalLLaMA/comments/1mmcvnu/interns1_gguf_where/ | No_Conversation9561 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmcvnu | false | null | t3_1mmcvnu | /r/LocalLLaMA/comments/1mmcvnu/interns1_gguf_where/ | false | false | self | 11 | null |
ReAct & reasoning models | 3 | Hello,
I was asking myself whether it makes sense to use AI agent frameworks such as smolagents, which natively wrap your main agent in ReAct logic, when the LLM behind the scenes is already a reasoning LLM (such as gpt-oss-20B, for example). Since ReAct already makes your agent reason, maybe too much reason... | 2025-08-10T07:44:34 | https://www.reddit.com/r/LocalLLaMA/comments/1mmcknq/react_reasoning_models/ | Alternative-Run6265 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmcknq | false | null | t3_1mmcknq | /r/LocalLLaMA/comments/1mmcknq/react_reasoning_models/ | false | false | self | 3 | null |
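To make the question concrete: ReAct adds an *outer* think/act/observe loop with tool calls, which is separate from a reasoning model's internal chain-of-thought. A sketch only; `call_llm` and the single tool are placeholders:

```python
# Stripped-down ReAct loop: the framework's reasoning is this explicit
# loop, not the model's hidden thinking tokens.
def call_llm(prompt: str) -> str:
    return "Answer: (stub)"   # wire this to any local chat endpoint

TOOLS = {"search": lambda q: f"(results for {q})"}

def react(question: str, max_steps: int = 5) -> str:
    scratchpad = f"Question: {question}\n"
    for _ in range(max_steps):
        step = call_llm(scratchpad + "Thought, Action: search[...], or Answer:?")
        if step.startswith("Answer:"):
            return step.removeprefix("Answer:").strip()
        if step.startswith("Action: search["):
            query = step[len("Action: search["):].rstrip("]")
            scratchpad += f"{step}\nObservation: {TOOLS['search'](query)}\n"
    return "no answer within budget"
```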
OpenAI purposely put their OSS model on quant4 | 0 | They hid behind the claim that it's a quant-4 model so that anyone with an H100 could run it efficiently, and to be fair, for being that quantized it is a phenomenal model. But they also know that if it had been released at the original quant 16 or 8 it would have been too close to GPT-5, and so they purposefull... | 2025-08-10T07:43:54 | https://www.reddit.com/r/LocalLLaMA/comments/1mmcka0/openai_purposely_put_their_oss_model_on_quant4/ | No-Fig-8614 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmcka0 | false | null | t3_1mmcka0 | /r/LocalLLaMA/comments/1mmcka0/openai_purposely_put_their_oss_model_on_quant4/ | false | false | self | 0 | null |
New Nemo finetune: Impish_Nemo | 83 | Hi all,
New creative model with some sass, very large dataset used, super fun for adventure & creative writing, while also being a strong assistant.
Here's the TL;DR, for details check the model card:
* My **best model yet!** Lots of **sovl!**
* **Smart, sassy, creative, and unhinged** — without the brain damage.
... | 2025-08-10T07:36:26 | https://www.reddit.com/r/LocalLLaMA/comments/1mmcg87/new_nemo_finetune_impish_nemo/ | Sicarius_The_First | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmcg87 | false | null | t3_1mmcg87 | /r/LocalLLaMA/comments/1mmcg87/new_nemo_finetune_impish_nemo/ | false | false | self | 83 | {'enabled': False, 'images': [{'id': '51No_P_uAdDX1Ycoltbek_a-pSyT0jWN6KAjsiAu82A', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/51No_P_uAdDX1Ycoltbek_a-pSyT0jWN6KAjsiAu82A.png?width=108&crop=smart&auto=webp&s=5bb85cf25fd314ab613856c46b8fce17d683ab63', 'width': 108}, {'height': 116, 'url': 'h... |
Llama-cpp-python not using GPU | 0 | I have installed llama-cpp-python, but it is only running on the CPU and not using the GPU (Windows 11 64-bit, NVIDIA 3050). I have reinstalled the NVIDIA CUDA toolkit, the driver, and llama-cpp-python, and the issue remains.
Please suggest a fix; I have been stuck on this issue for a couple of days. | 2025-08-10T07:36:06 | https://www.reddit.com/r/LocalLLaMA/comments/1mmcg1z/llamacpppython_not_using_gpu/ | bull_bear25 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmcg1z | false | null | t3_1mmcg1z | /r/LocalLLaMA/comments/1mmcg1z/llamacpppython_not_using_gpu/ | false | false | self | 0 | null |
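A common cause is that the prebuilt wheel ships CPU-only, so the usual fix is to rebuild with CUDA enabled and then offload layers explicitly. A hedged sketch — the CMake flag matches recent llama-cpp-python releases (older ones used `-DLLAMA_CUBLAS=on`), so check your version; the model path is a placeholder:

```python
# Rebuild with CUDA first, e.g. in PowerShell:
#   $env:CMAKE_ARGS = "-DGGML_CUDA=on"
#   pip install --force-reinstall --no-cache-dir llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="model.gguf",  # placeholder path
    n_gpu_layers=-1,          # -1 asks it to offload all layers to the GPU
    verbose=True,             # startup log should now list a CUDA device
)
```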
Anyone here with an AMD AI Max+ 395 + 128GB setup running coding agents? | 28 | For those of you who happen to own an AMD AI Max+ 395 machine with 128GB of RAM, have you tried running models with coding agents like Cline, Aider, or similar tools? | 2025-08-10T07:32:31 | https://www.reddit.com/r/LocalLLaMA/comments/1mmce2h/anyone_here_with_an_amd_ai_max_395_128gb_setup/ | Admirable_Reality281 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmce2h | false | null | t3_1mmce2h | /r/LocalLLaMA/comments/1mmce2h/anyone_here_with_an_amd_ai_max_395_128gb_setup/ | false | false | self | 28 | null |
Local LLM for creative writing. | 7 | **Background:**
Hi, I am not a writer and English is not my first language. I like reading novels and dreamed of writing one, but never had the confidence. When ChatGPT came into the picture, I tried writing a small draft and had ChatGPT fix it. The cleaned-up output was not the best, but far better than what I could have ... | 2025-08-10T07:24:09 | https://www.reddit.com/r/LocalLLaMA/comments/1mmc9fb/local_llm_for_creative_writing/ | Zero_Ever | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmc9fb | false | null | t3_1mmc9fb | /r/LocalLLaMA/comments/1mmc9fb/local_llm_for_creative_writing/ | false | false | self | 7 | null |