| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
LLama.cpp performance on ROCm | 8 | 2025-08-01T20:17:30 | https://github.com/ggml-org/llama.cpp/discussions/15021 | COBECT | github.com | 1970-01-01T00:00:00 | 0 | {} | 1mf72g8 | false | null | t3_1mf72g8 | /r/LocalLLaMA/comments/1mf72g8/llamacpp_performance_on_rocm/ | false | false | default | 8 | null | |
Text to speech on AMD? | 1 | Hello, I was wondering if there were any text to speech models that I could use on an amd graphics card. I saw a post with a bunch of links to good tts models but I don't want to manually try all of them if they're not gonna work on my system. | 2025-08-01T20:06:01 | https://www.reddit.com/r/LocalLLaMA/comments/1mf6s40/text_to_speech_on_amd/ | Shady_Shin009 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mf6s40 | false | null | t3_1mf6s40 | /r/LocalLLaMA/comments/1mf6s40/text_to_speech_on_amd/ | false | false | self | 1 | null |
https://www.eovaldiartscience.com/blog/2025/8/elon-mouse-and-mark-zuckerhamsters-day-out | 1 | [removed] | 2025-08-01T20:04:05 | https://www.reddit.com/r/LocalLLaMA/comments/1mf6qc1/httpswwweovaldiartsciencecomblog20258elonmouseandm/ | BenjaminVanLocke | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mf6qc1 | false | null | t3_1mf6qc1 | /r/LocalLLaMA/comments/1mf6qc1/httpswwweovaldiartsciencecomblog20258elonmouseandm/ | false | false | self | 1 | null |
Any up to date coding benchmarks? | 3 | Google delivers ancient benchmarks, I used to love aider benchmarks, but it seems it was abandoned, no updates on new models. I want to know how qwen3-coder and glm4.5 compare.. but nobody updates benchmarks anymore? are we in a postbenchmark era? Benchmarks as gamed as they are they still signal utility! | 2025-08-01T20:00:39 | https://www.reddit.com/r/LocalLLaMA/comments/1mf6n4u/any_up_to_date_coding_benchmarks/ | Sudden-Lingonberry-8 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mf6n4u | false | null | t3_1mf6n4u | /r/LocalLLaMA/comments/1mf6n4u/any_up_to_date_coding_benchmarks/ | false | false | self | 3 | null |
Llama-4-Scout-17B-16E-Instruct-GGUF:Q4_K_S running at 20 tk/s on Ryzen AI Max + 395 with llama.cpp Vulkan + Lemonade server (60GB GPU memory) | 15 | Just wanted to share my results running Llama-4-Scout-17B-16E-Instruct-GGUF:Q4\_K\_S on my Ryzen AI Max + 395 using llama.cpp with Vulkan backend and the Lemonade server. I’m getting a solid 20 tokens/second with 60 GB of GPU memory in use. | 2025-08-01T19:52:59 | https://v.redd.it/zf13w9taqggf1 | ShamanFlamingoFR | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mf6gaa | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/zf13w9taqggf1/DASHPlaylist.mpd?a=1756669995%2CYTc2MjhhY2NiMjYyOWJkNjliZjE5MWQxNTRlYmYyMjM4ZTA0ZmNkZDc3MWQ0MGQ4Zjc2NTlmMmZmOGE2NWYwYw%3D%3D&v=1&f=sd', 'duration': 33, 'fallback_url': 'https://v.redd.it/zf13w9taqggf1/DASH_1080.mp4?source=fallback', 'h... | t3_1mf6gaa | /r/LocalLLaMA/comments/1mf6gaa/llama4scout17b16einstructggufq4_k_s_running_at_20/ | false | false | 15 | {'enabled': False, 'images': [{'id': 'anl1eG84dGFxZ2dmMUED0vbVDpHB_6J3h9pq2feZQo01Xw2lEninALLqCef8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/anl1eG84dGFxZ2dmMUED0vbVDpHB_6J3h9pq2feZQo01Xw2lEninALLqCef8.png?width=108&crop=smart&format=pjpg&auto=webp&s=e9150f6d4ace971cda6add553081a915ea75a... | |
Llama-4-Scout-17B-16E-Instruct-GGUF:Q4_K_S running at 20 tk/s on Ryzen AI Max + 395 with llama.cpp Vulkan + Lemonade server (60GB GPU memory) | 1 | 2025-08-01T19:51:34 | https://v.redd.it/mnblr8dxpggf1 | ShamanFlamingoFR | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mf6f0n | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/mnblr8dxpggf1/DASHPlaylist.mpd?a=1756669911%2CODE0NGI2ODBiMmE0M2NmMWZiY2JjOThkZDk0NGM4ZDI0NDQzNWY5OGE5NWRhZGVkNWUyYjJhZjg2MzBlNTc5YQ%3D%3D&v=1&f=sd', 'duration': 33, 'fallback_url': 'https://v.redd.it/mnblr8dxpggf1/DASH_1080.mp4?source=fallback', 'h... | t3_1mf6f0n | /r/LocalLLaMA/comments/1mf6f0n/llama4scout17b16einstructggufq4_k_s_running_at_20/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'eDRxbGhyY3hwZ2dmMUED0vbVDpHB_6J3h9pq2feZQo01Xw2lEninALLqCef8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/eDRxbGhyY3hwZ2dmMUED0vbVDpHB_6J3h9pq2feZQo01Xw2lEninALLqCef8.png?width=108&crop=smart&format=pjpg&auto=webp&s=885cdfae812cc64b6c3736d4fcf468d8ab539... | ||
Qwen3-Embedding-0.6B is fast, high quality, and supports up to 32k tokens. Beats OpenAI embeddings on MTEB | 301 | [https://huggingface.co/Qwen/Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B)<br>I switched over today. Initially the results seemed poor, but it turns out there was an issue when using Text embedding inference 1.7.2 related to pad tokens. Fixed in 1.7.3 . Depending on what inference tooling you ... | 2025-08-01T19:47:49 | https://www.reddit.com/r/LocalLLaMA/comments/1mf6bkl/qwen3embedding06b_is_fast_high_quality_and/ | No_Edge2098 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mf6bkl | false | null | t3_1mf6bkl | /r/LocalLLaMA/comments/1mf6bkl/qwen3embedding06b_is_fast_high_quality_and/ | false | false | self | 301 | null |
I discovered The Semantic Manifold Theory with big implications for many areas of math, science, engineering, and computing. All remaining Millennium Prize Problems solved. Directly relevant to all LLMs. Please help spread the word. | 0 | Hello! My name is Thomas P. Conway. Im an independent researcher and systems engineer. I discovered a mathematically complete theoretical framework that I believe could be pretty important for many areas of math and science. However, I am a completely unknown outsider to academia, so I can't just publish my work on ArX... | 2025-08-01T19:31:28 | https://www.reddit.com/r/LocalLLaMA/comments/1mf5wqp/i_discovered_the_semantic_manifold_theory_with/ | Thomas27c | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mf5wqp | false | null | t3_1mf5wqp | /r/LocalLLaMA/comments/1mf5wqp/i_discovered_the_semantic_manifold_theory_with/ | false | false | self | 0 | null |
Need help debugging: llama-server uses GPU Memory but 0% GPU Util for inference (CPU only) | 0 | I'm running into a performance issue with a self-hosted agent and could use some help. I've successfully set up an agent system, but the inference is extremely slow because it's only using the CPU.<br>**My Setup:**<br>* **Model:** Qwen3-Coder-480B-A35B-Instruct-GGUF (Q8\_0 quant from unsloth)<br>* **Hardware:** RunPod with RT... | 2025-08-01T19:04:37 | https://www.reddit.com/r/LocalLLaMA/comments/1mf581n/need_help_debugging_llamaserver_uses_gpu_memory/ | Rezvord | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mf581n | false | null | t3_1mf581n | /r/LocalLLaMA/comments/1mf581n/need_help_debugging_llamaserver_uses_gpu_memory/ | false | false | self | 0 | null |
Automated Testing Framework for Voice AI Agents : Technical Webinar & Demo | 1 | Hey folks, If you're building voice (or chat) AI agents, you might find this interesting. 90% of voice AI systems fail in production, not due to bad tech but inadequate testing methods. There is an interesting webinar coming up on luma, that will show you the ultimate evaluation framework you need to know to ship Voic... | 2025-08-01T18:55:34 | https://www.reddit.com/r/LocalLLaMA/comments/1mf4zaz/automated_testing_framework_for_voice_ai_agents/ | Any_Upstairs_5546 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mf4zaz | false | null | t3_1mf4zaz | /r/LocalLLaMA/comments/1mf4zaz/automated_testing_framework_for_voice_ai_agents/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'pXXfVQhxWBHbPqaJJorVTOhVX1OM6M88YALJrFHyfqA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/pXXfVQhxWBHbPqaJJorVTOhVX1OM6M88YALJrFHyfqA.jpeg?width=108&crop=smart&auto=webp&s=257075b1f05babe2a49cb0efcfa7e40789c256bd', 'width': 108}, {'height': 113, 'url': '... |
best local and free REALTIME STT | 1 | i am using STT in a JARVIS project which i want to be full local and free and my system specs- 512GB SSD and 8GB RAM and 512MB VRAM and processor of AMD 5500U i have tried faster-whisper but its tiny model is fast but not accurate and also heard of whisper.cpp but i haven't tried as its setup is a bit complex and al... | 2025-08-01T18:51:56 | https://www.reddit.com/r/LocalLLaMA/comments/1mf4vy7/best_local_and_free_realtime_stt/ | Ok_Development_2603 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mf4vy7 | false | null | t3_1mf4vy7 | /r/LocalLLaMA/comments/1mf4vy7/best_local_and_free_realtime_stt/ | false | false | self | 1 | null |
Me lately... Anyone else can relate? 😎 | 56 | Disclaimer:<br>No actual plushy pandas were hurt in the process of trying and failing to fit in a plastic box... | 2025-08-01T18:37:31 | | Cool-Chemical-5629 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mf4ihq | false | null | t3_1mf4ihq | /r/LocalLLaMA/comments/1mf4ihq/me_lately_anyone_else_can_relate/ | false | false | default | 56 | {'enabled': True, 'images': [{'id': 'rqzixk49cggf1', 'resolutions': [{'height': 61, 'url': 'https://preview.redd.it/rqzixk49cggf1.gif?width=108&crop=smart&format=png8&s=4f274e8ff485a4989b9d127943de26befcdfb05d', 'width': 108}, {'height': 122, 'url': 'https://preview.redd.it/rqzixk49cggf1.gif?width=216&crop=smart&format... |
Are you interested in building something cool? | 0 | If you’re interested in agentic ai and hate paying for api keys we can build something cool together.<br>DM or comment and we can talk and see where it goes!! | 2025-08-01T18:23:05 | https://www.reddit.com/r/LocalLLaMA/comments/1mf44kh/are_you_interested_in_building_something_cool/ | Haunting_Stomach8967 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mf44kh | false | null | t3_1mf44kh | /r/LocalLLaMA/comments/1mf44kh/are_you_interested_in_building_something_cool/ | false | false | self | 0 | null |
How do you speed up llama.cpp on macOS? | 0 | I’m running llama.cpp on a Mac (Apple Silicon), and it works well out of the box, but I’m wondering what others are doing to make it faster. Are there specific flags, build options, or runtime tweaks that helped you get better performance? Would love to hear what’s worked for you.<br>I'm using it with Gemma3 4b for dicta... | 2025-08-01T18:17:36 | https://www.reddit.com/r/LocalLLaMA/comments/1mf3z9k/how_do_you_speed_up_llamacpp_on_macos/ | discoveringnature12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mf3z9k | false | null | t3_1mf3z9k | /r/LocalLLaMA/comments/1mf3z9k/how_do_you_speed_up_llamacpp_on_macos/ | false | false | self | 0 | null |
Best way to run the Qwen3 30b A3B coder/instruct models for HIGH throughput and/or HIGH context? (on a single 4090) | 15 | Looking for some "best practices" for this new 30B A3B to squeeze the most out of it with my 4090. Normally I'm pretty up to date on this stuff but I'm a month or so behind the times. I'll share where I'm at and hopefully somebody's got some suggestions :).<br>I'm sitting on 64gb ram/24gb vram (4090). I'm open to running... | 2025-08-01T18:15:00 | https://www.reddit.com/r/LocalLLaMA/comments/1mf3wr0/best_way_to_run_the_qwen3_30b_a3b_coderinstruct/ | teachersecret | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mf3wr0 | false | null | t3_1mf3wr0 | /r/LocalLLaMA/comments/1mf3wr0/best_way_to_run_the_qwen3_30b_a3b_coderinstruct/ | false | false | self | 15 | null |
The “Leaked” 120 B OpenAI Model is not Trained in FP4 | 404 | The "Leaked" 120B OpenAI Model Is Trained In FP4 | 2025-08-01T18:11:35 | badbutt21 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mf3tm9 | false | null | t3_1mf3tm9 | /r/LocalLLaMA/comments/1mf3tm9/the_leaked_120_b_openai_model_is_not_trained_in/ | false | false | default | 404 | {'enabled': True, 'images': [{'id': 'g1yk8r6b8ggf1', 'resolutions': [{'height': 75, 'url': 'https://preview.redd.it/g1yk8r6b8ggf1.jpeg?width=108&crop=smart&auto=webp&s=926137a58fce6f1ef8bee443ff019ae18b337863', 'width': 108}, {'height': 151, 'url': 'https://preview.redd.it/g1yk8r6b8ggf1.jpeg?width=216&crop=smart&auto=w... |
I Generated 1 Billion Tokens (So You Don't Have To): Introducing ReasonScape | 149 | Ever spent weeks building the perfect LLM benchmark only to watch it crumble within a few months?<br>Clean problems, elegant difficulty curves, proper statistical controls. New model drops. Perfect scores across the board. Your tests got trained on. Weeks of work, completely worthless.<br>So you pivot. Make the tests harde... | 2025-08-01T18:05:28 | https://www.reddit.com/r/LocalLLaMA/comments/1mf3nw4/i_generated_1_billion_tokens_so_you_dont_have_to/ | kryptkpr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mf3nw4 | false | null | t3_1mf3nw4 | /r/LocalLLaMA/comments/1mf3nw4/i_generated_1_billion_tokens_so_you_dont_have_to/ | false | false | 149 | null |
Built a web dashboard to manage multiple llama-server instances - llamactl | 7 | I've been running multiple llama-server instances for different models and found myself constantly SSH-ing into servers to start, stop, and monitor them. After doing this dance one too many times, I decided to build a proper solution.<br>[llamactl](https://github.com/lordmathis/llamactl) is a control server that lets you... | 2025-08-01T18:03:59 | https://www.reddit.com/r/LocalLLaMA/comments/1mf3mhi/built_a_web_dashboard_to_manage_multiple/ | RealLordMathis | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mf3mhi | false | null | t3_1mf3mhi | /r/LocalLLaMA/comments/1mf3mhi/built_a_web_dashboard_to_manage_multiple/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'kzOzGVzXfbNgBaONm1V9rEJeMaWG7hKBmJO8I7ak4y4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/kzOzGVzXfbNgBaONm1V9rEJeMaWG7hKBmJO8I7ak4y4.png?width=108&crop=smart&auto=webp&s=42bce2f9dd0a4ea31a5251bd4a2a838f62791353', 'width': 108}, {'height': 108, 'url': 'h... |
Why on open router using Horizon Alpha refuse to work until I pay for credits? | 1 | Horizon Alpha 's output and input on open router cost 0$ so why After few queries it refuses to work until I pay for more credits? | 2025-08-01T17:51:25 | https://www.reddit.com/r/LocalLLaMA/comments/1mf3abn/why_on_open_router_using_horizon_alpha_refuse_to/ | Equivalent-Word-7691 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mf3abn | false | null | t3_1mf3abn | /r/LocalLLaMA/comments/1mf3abn/why_on_open_router_using_horizon_alpha_refuse_to/ | false | false | self | 1 | null |
Qwen3 Coder 480B is Live on Cerebras ($2 per million output and 2000 output t/s!!!) | 392 | We finally have a legitimate open-source competitor to sonnet for coding. Even if the model is 5-10% worse, being about 20 times faster and 7.5 times cheaper will lead to a lot of adoption (Hosted in US datacenters too)<br>Also launched new coding plans that are insanely valuable:<br>* Cerebras Code Pro: 50 USD / month fo... | 2025-08-01T17:50:21 | https://www.cerebras.ai/blog/qwen3-coder-480b-is-live-on-cerebras | Longjumping-Solid563 | cerebras.ai | 1970-01-01T00:00:00 | 0 | {} | 1mf399p | false | null | t3_1mf399p | /r/LocalLLaMA/comments/1mf399p/qwen3_coder_480b_is_live_on_cerebras_2_per/ | false | false | default | 392 | null |
Is Qwen still the best for coding? | 7 | Hello, I've been reading the subreddit for some days now and I was wondering if Qwen 3 or Qwen 2.5 code was still the best model to run to run on vscode with either AI toolkit or RooCode?<br>I got a M4 pro with 14-Core CPU, 20-Core GPU, 24GB Unified Memory and about 50gb of storage left, can free up another 50gb if neede... | 2025-08-01T17:16:03 | https://www.reddit.com/r/LocalLLaMA/comments/1mf2cu1/is_qwen_still_the_best_for_coding/ | OTBKR | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mf2cu1 | false | null | t3_1mf2cu1 | /r/LocalLLaMA/comments/1mf2cu1/is_qwen_still_the_best_for_coding/ | false | false | self | 7 | null |
How much do PCIe Lanes matter? | 5 | Hi guys!<br>How much do PCIe Lanes really matter?<br>As far as i understand, just for running a LLM, with for example ollama, they are only really needed when the model is loaded into VRAM - after that everything is done on the card itself.<br>So basically, if using multiple gpus, its enough with they are connected via PCIe ... | 2025-08-01T16:47:31 | https://www.reddit.com/r/LocalLLaMA/comments/1mf1lfv/how_much_do_pcie_lanes_matter/ | MrCatberry | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mf1lfv | false | null | t3_1mf1lfv | /r/LocalLLaMA/comments/1mf1lfv/how_much_do_pcie_lanes_matter/ | false | false | self | 5 | null |
How much VRAM does MOE models take comparative to dense models? | 1 | 70B dense model fits into a 48GB but it’s harder for me to wrap my mind around if a 109B-A13B model would fit into 48GB since not all the params are active.<br>Also does llama cpp automatically load the active parameters onto the GPU and keep the inactive ones in RAM? | 2025-08-01T16:36:42 | https://www.reddit.com/r/LocalLLaMA/comments/1mf1bab/how_much_vram_does_moe_models_take_comparative_to/ | Glittering-Bag-4662 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mf1bab | false | null | t3_1mf1bab | /r/LocalLLaMA/comments/1mf1bab/how_much_vram_does_moe_models_take_comparative_to/ | false | false | self | 1 | null |
AMD 7900 xtx for inference? | 5 | Currently in Toronto area the 7900 xtx is cheaper brand new with taxes then a used 3090. What are people’s experience with a couple of these cards for inference on Windows? I searched and saw some feedback from months ago, looking how they handle all the new models for inference? | 2025-08-01T16:31:56 | https://www.reddit.com/r/LocalLLaMA/comments/1mf16vx/amd_7900_xtx_for_inference/ | Willdudes | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mf16vx | false | null | t3_1mf16vx | /r/LocalLLaMA/comments/1mf16vx/amd_7900_xtx_for_inference/ | false | false | self | 5 | null |
Qwen3-235B-A22B-2507 is the top open weights model on lmarena | 189 | 2025-08-01T16:14:40 | https://x.com/lmarena_ai/status/1951308670375174457 | tarruda | x.com | 1970-01-01T00:00:00 | 0 | {} | 1mf0qlf | false | null | t3_1mf0qlf | /r/LocalLLaMA/comments/1mf0qlf/qwen3235ba22b2507_is_the_top_open_weights_model/ | false | false | default | 189 | null | |
Open-source architectures that aren't Llama 3 knock offs? | 2 | I just got through Raschka's model architecture series. Seems like everything is a tweak of Llama 3. | 2025-08-01T16:10:44 | https://www.reddit.com/r/LocalLLaMA/comments/1mf0mw2/opensource_architectures_that_arent_llama_3_knock/ | entsnack | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mf0mw2 | false | null | t3_1mf0mw2 | /r/LocalLLaMA/comments/1mf0mw2/opensource_architectures_that_arent_llama_3_knock/ | false | false | self | 2 | null |
Qwen 30b a3b 2507 instruct as good as Gemma 3 27B!? | 57 | What an awesome model. Everything I throw at it I get comparable results to Gemma 3, but 4.5x faster.<br>Great at general knowledge, but also follows instructions very well.<br>Please let me know your experiences with it! | 2025-08-01T16:05:49 | https://www.reddit.com/r/LocalLLaMA/comments/1mf0i54/qwen_30b_a3b_2507_instruct_as_good_as_gemma_3_27b/ | Hanthunius | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mf0i54 | false | null | t3_1mf0i54 | /r/LocalLLaMA/comments/1mf0i54/qwen_30b_a3b_2507_instruct_as_good_as_gemma_3_27b/ | false | false | self | 57 | null |
support for the upcoming hunyuan dense models has been merged into llama.cpp | 40 | In the source code, we see a link to Hunyuan-4B-Instruct, but I think we’ll see much larger models :)<br>bonus: fix hunyuan\_moe chat template | 2025-08-01T16:05:23 | https://github.com/ggml-org/llama.cpp/pull/14878 | jacek2023 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1mf0hou | false | null | t3_1mf0hou | /r/LocalLLaMA/comments/1mf0hou/support_for_the_upcoming_hunyuan_dense_models_has/ | false | false | 40 | {'enabled': False, 'images': [{'id': 'pRyHe6l3-qrKqD2qUrrGwmAgS3GhMUUKtd4TRQbJKGc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pRyHe6l3-qrKqD2qUrrGwmAgS3GhMUUKtd4TRQbJKGc.png?width=108&crop=smart&auto=webp&s=ed015fcc9ee802baeb72ee117bf3077725576fed', 'width': 108}, {'height': 108, 'url': 'h... |
Help: Qwen3-Coder + LM Studio + Continue.dev (VSCode) + Mac 64GB M3 Max — 500 Internal Server Error, Even After Unsloth Fix | 2 | I’m running into a frustrating problem and would appreciate any help! I’m trying to use Qwen3-Coder locally with LM Studio as the backend, integrated with the [Continue.dev](http://Continue.dev) extension in VSCode. My setup:<br>* **LM Studio** (latest)<br>* **Qwen3-Coder** (latest GGUF from Unsloth’s Hugging Face repo)<br>* [... | 2025-08-01T16:03:05 | https://www.reddit.com/r/LocalLLaMA/comments/1mf0fgj/help_qwen3coder_lm_studio_continuedev_vscode_mac/ | Mountain_Desk_767 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mf0fgj | false | null | t3_1mf0fgj | /r/LocalLLaMA/comments/1mf0fgj/help_qwen3coder_lm_studio_continuedev_vscode_mac/ | false | false | self | 2 | null |
SVDQuant does INT4 quantization of text-to-image models without losing quality. Can't the same technique be used in LLMs? | 39 | 2025-08-01T15:56:02 | we_are_mammals | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mf08e5 | false | null | t3_1mf08e5 | /r/LocalLLaMA/comments/1mf08e5/svdquant_does_int4_quantization_of_texttoimage/ | false | false | default | 39 | {'enabled': True, 'images': [{'id': '0cq321qc1fgf1', 'resolutions': [{'height': 69, 'url': 'https://preview.redd.it/0cq321qc1fgf1.jpeg?width=108&crop=smart&auto=webp&s=d9b9b51c5d0f9c3b9de0cbaa0cdbb1c69e4ee263', 'width': 108}, {'height': 139, 'url': 'https://preview.redd.it/0cq321qc1fgf1.jpeg?width=216&crop=smart&auto=w... | ||
An Experiment in Logit Control: Using Statistical "Constraint Masks" to Guide Token Selection | 4 | 2025-08-01T15:54:58 | vesudeva | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mf07dy | false | null | t3_1mf07dy | /r/LocalLLaMA/comments/1mf07dy/an_experiment_in_logit_control_using_statistical/ | false | false | default | 4 | {'enabled': True, 'images': [{'id': '442znbdvjfgf1', 'resolutions': [{'height': 71, 'url': 'https://preview.redd.it/442znbdvjfgf1.png?width=108&crop=smart&auto=webp&s=246430f723d9c8ce03b5b5c8131885dfecbf19ca', 'width': 108}, {'height': 142, 'url': 'https://preview.redd.it/442znbdvjfgf1.png?width=216&crop=smart&auto=web... | ||
An Experiment in Logit Control: Using Statistical "Constraint Masks" to Guide Token Selection | 1 | [removed] | 2025-08-01T15:53:56 | https://www.reddit.com/r/LocalLLaMA/comments/1mf06gl/an_experiment_in_logit_control_using_statistical/ | vesudeva | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mf06gl | false | null | t3_1mf06gl | /r/LocalLLaMA/comments/1mf06gl/an_experiment_in_logit_control_using_statistical/ | false | false | self | 1 | null |
An Experiment in Logit Control: Using Statistical "Constraint Masks" to Guide Token Selection | 1 | [removed] | 2025-08-01T15:52:25 | https://www.reddit.com/r/LocalLLaMA/comments/1mf055b/an_experiment_in_logit_control_using_statistical/ | vesudeva | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mf055b | false | null | t3_1mf055b | /r/LocalLLaMA/comments/1mf055b/an_experiment_in_logit_control_using_statistical/ | false | false | 1 | null | |
[Question] Which local VLMs can transform text well? | 1 | I have a particular use case (basically synthetic data generation) where I want to take a page of text and get its bboxes and then inpaint them, similar to how is done with tasks like face superresolution, but for just completely rewriting whole words.<br>My aim is to keep the general structure of the page, and I’ll avoi... | 2025-08-01T15:43:37 | https://www.reddit.com/r/LocalLLaMA/comments/1mezwua/question_which_local_vlms_can_transform_text_well/ | permutans | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mezwua | false | null | t3_1mezwua | /r/LocalLLaMA/comments/1mezwua/question_which_local_vlms_can_transform_text_well/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'T1bY2eJ09uGgvdUAknMfFYYutnBXtblSICN7agMjLT0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/T1bY2eJ09uGgvdUAknMfFYYutnBXtblSICN7agMjLT0.png?width=108&crop=smart&auto=webp&s=b2ed9ce292b0991f0425775aa1cc2a6818aded96', 'width': 108}, {'height': 108, 'url': 'h... |
Gemini AI Pro + 2TB Google Storage For $50 | 0 | I will manually activate Google One AI Premium (2TB Storage + Gemini Pro + Workspace Features) on your personal Google account.<br>You pay after verification. | 2025-08-01T15:37:18 | https://www.reddit.com/r/LocalLLaMA/comments/1mezqvw/gemini_ai_pro_2tb_google_storage_for_50/ | toni_kr00s | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mezqvw | false | null | t3_1mezqvw | /r/LocalLLaMA/comments/1mezqvw/gemini_ai_pro_2tb_google_storage_for_50/ | false | false | self | 0 | null |
Gemini AI Pro + 2TB Google Storage For $50 | 0 | I will manually activate Google One AI Premium (2TB Storage + Gemini Pro + Workspace Features) on your personal Google account.<br>You pay after verification. | 2025-08-01T15:34:12 | https://www.reddit.com/r/LocalLLaMA/comments/1meznwd/gemini_ai_pro_2tb_google_storage_for_50/ | toni_kr00s | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1meznwd | false | null | t3_1meznwd | /r/LocalLLaMA/comments/1meznwd/gemini_ai_pro_2tb_google_storage_for_50/ | false | false | self | 0 | null |
Limited to a 3060ti right now (8gb vram) - Is it even worth setting up a local setup to play with? | 0 | Can I do anything at all to learn for when I get a real GPU? | 2025-08-01T15:26:38 | https://www.reddit.com/r/LocalLLaMA/comments/1mezgxf/limited_to_a_3060ti_right_now_8gb_vram_is_it_even/ | Gary5Host9 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mezgxf | false | null | t3_1mezgxf | /r/LocalLLaMA/comments/1mezgxf/limited_to_a_3060ti_right_now_8gb_vram_is_it_even/ | false | false | self | 0 | null |
Question about cpu threads (beginner here) | 3 | I recently got into open source LLMs,I have now used a lot of models under 4b on my mobile and it runs gemma 2b (4bit medium) or llama 3.2 3b (4b med) reliably on pocketpal app<br>Total cpu threads on my device is 8 (4 core),when I enable 1 cpu thread the 2b model generates around 3 times faster tk/s than at 6 cpu thread... | 2025-08-01T15:23:43 | https://www.reddit.com/r/LocalLLaMA/comments/1meze5n/question_about_cpu_threads_beginner_here/ | Gold_Bar_4072 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1meze5n | false | null | t3_1meze5n | /r/LocalLLaMA/comments/1meze5n/question_about_cpu_threads_beginner_here/ | false | false | self | 3 | null |
LLama.cpp on CUDA performance | 4 | I've combined llama.cpp CUDA results in a single place. Fill free to add and share! | 2025-08-01T15:23:05 | https://github.com/ggml-org/llama.cpp/discussions/15013 | COBECT | github.com | 1970-01-01T00:00:00 | 0 | {} | 1mezdl4 | false | null | t3_1mezdl4 | /r/LocalLLaMA/comments/1mezdl4/llamacpp_on_cuda_performance/ | false | false | default | 4 | {'enabled': False, 'images': [{'id': 'f2FW9wXiXsDsg6ZM-lErgQ-ASid_pWMTp9znNhapydk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/f2FW9wXiXsDsg6ZM-lErgQ-ASid_pWMTp9znNhapydk.png?width=108&crop=smart&auto=webp&s=c6d436fd4dbc8296467786571735a612b521bd39', 'width': 108}, {'height': 108, 'url': 'h... |
Anyone have experience optimizing ttft? | 1 | In other words for long contexts, improving prompt processing speed.<br>This is an area that has been increasingly relevant to me with the larger and larger context lengths available, excellent kv quants, and flash attention.<br>I understand on one GPU there isn't much to optimize, so I'd like to focus this thread on multi... | 2025-08-01T15:22:51 | https://www.reddit.com/r/LocalLLaMA/comments/1mezdck/anyone_have_experience_optimizing_ttft/ | 1ncehost | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mezdck | false | null | t3_1mezdck | /r/LocalLLaMA/comments/1mezdck/anyone_have_experience_optimizing_ttft/ | false | false | self | 1 | null |
anyone managed to run vllm windows with gguf? | 2 | i've been trying to get qwen 2.5 14b gguf cause i hear vllm can use 2 gpu's (i have a 2060 6gb vram and 4060 16 gb vram) and i can't use the other model types cause of memory, i have windows 10, and using wsl doesn't make sense to use , cause it would make thing slower , so i've been trying to get vllm-windows to work,... | 2025-08-01T15:17:27 | https://www.reddit.com/r/LocalLLaMA/comments/1mez87h/anyone_managed_to_run_vllm_windows_with_gguf/ | emaayan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mez87h | false | null | t3_1mez87h | /r/LocalLLaMA/comments/1mez87h/anyone_managed_to_run_vllm_windows_with_gguf/ | false | false | self | 2 | null |
retrieval works, embedding matches... but the answer is wrong. anyone else? | 2 | **has anyone actually gotten rag + ocr to work w/ local llama?**<br>like actually work — not just “no errors in pipeline”, but \*no hallucinations\*, no layout drift, and no vector match mismatches?<br>i’ve spent the past few months building a rag stack around scanned docs, multilingual pdfs, image-based tables ~ the usu... | 2025-08-01T15:10:49 | https://www.reddit.com/r/LocalLLaMA/comments/1mez1w0/retrieval_works_embedding_matches_but_the_answer/ | wfgy_engine | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mez1w0 | false | null | t3_1mez1w0 | /r/LocalLLaMA/comments/1mez1w0/retrieval_works_embedding_matches_but_the_answer/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': '_ftOpifK9HlOdqBK_NMClSw-o8JKhGaAFmjrk0dHLNY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_ftOpifK9HlOdqBK_NMClSw-o8JKhGaAFmjrk0dHLNY.png?width=108&crop=smart&auto=webp&s=617c058204217a66a5b4717a80a3b70d99e9300a', 'width': 108}, {'height': 108, 'url': 'h... |
Best NSFW friendly writing models(editing only) | 4 | I'm searching for a good and reliable LLM that can correct my grammar mistakes. It does not need to have the ability to 'write' or create something new. I used Grammarly till now, but it's kinda slow and I need to choose every correction myself. It's mostly for spelling mistakes, as I'm not a native English speaker, an... | 2025-08-01T15:04:09 | https://www.reddit.com/r/LocalLLaMA/comments/1meyvir/best_nsfw_friendly_writing_modelsediting_only/ | LordOfHeavenWill | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1meyvir | false | null | t3_1meyvir | /r/LocalLLaMA/comments/1meyvir/best_nsfw_friendly_writing_modelsediting_only/ | false | false | nsfw | 4 | null |
Faster token generation using qwen coder 30B A3B | 3 | How to run Qwen3 Coder 30B-A3B the fastest?<br>I want to switch from using claude code to running this model locally via kilo code r other similar extensions.<br>My Laptop's specs are:<br>i7-8850H with 64GB DDR4 RAM.<br>Nvidia quadro P5200 laptop GPU with 16GB GDDR6 VRAM.<br>I got confused as there are a lot of inference engines ... | 2025-08-01T14:55:14 | https://www.reddit.com/r/LocalLLaMA/comments/1meyn4a/faster_token_generation_using_qwen_coder_30b_a3b/ | prathode | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1meyn4a | false | null | t3_1meyn4a | /r/LocalLLaMA/comments/1meyn4a/faster_token_generation_using_qwen_coder_30b_a3b/ | false | false | self | 3 | null |
The "Leaked" 120B OpenAI Model Is Trained In FP4 | 283 | Apparently someone on twitter managed to obtain the leaked weights, but has not been able to get the model to work. Since then, this repo has been privated<br>So the rough memory usage would be around 80GB of memory. If the 20B model is also trained at FP4, then that one would use about 10-14GB of memory when considering... | 2025-08-01T14:44:49 | Few_Painter_5588 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1meydpz | false | null | t3_1meydpz | /r/LocalLLaMA/comments/1meydpz/the_leaked_120b_openai_model_is_trained_in_fp4/ | false | false | default | 283 | {'enabled': True, 'images': [{'id': 'vkstrhkm6fgf1', 'resolutions': [{'height': 49, 'url': 'https://preview.redd.it/vkstrhkm6fgf1.jpeg?width=108&crop=smart&auto=webp&s=c11be3c9cbc367d2217b9378cd0a2d10731568d3', 'width': 108}, {'height': 98, 'url': 'https://preview.redd.it/vkstrhkm6fgf1.jpeg?width=216&crop=smart&auto=we... |
Coping | 0 | 2025-08-01T14:34:01 | entsnack | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mey3rs | false | null | t3_1mey3rs | /r/LocalLLaMA/comments/1mey3rs/coping/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'ymgbmf9f5fgf1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/ymgbmf9f5fgf1.png?width=108&crop=smart&auto=webp&s=916002235a2512f89d5a2079e3d68bc994e3fb86', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/ymgbmf9f5fgf1.png?width=216&crop=smart&auto=web... | ||
Which SQL dialects is more comfortable for LLMs? | 0 | Hi. For those working on text2sql problem, if you had a choice of the particular database/SQL dialect to generate SQL to, is there any one that LLMs are particularly good at, e.g. MySQL vs PostgreSQL vs Oracle vs SQLite?
And between general-purpose LLMs, are any ones particularly good at text2sql?
Thanks | 2025-08-01T14:25:05 | https://www.reddit.com/r/LocalLLaMA/comments/1mexvp5/which_sql_dialects_is_more_comfortable_for_llms/ | ihatebeinganonymous | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mexvp5 | false | null | t3_1mexvp5 | /r/LocalLLaMA/comments/1mexvp5/which_sql_dialects_is_more_comfortable_for_llms/ | false | false | self | 0 | null |
MI50 prompt processing performance | 9 | Hello to the MI50 owners out there, I am struggling to find any prompt processing performance for the MI50 on ~8b and ~14b class models.
Has anyone got any numbers for those types of models ? | 2025-08-01T14:01:13 | https://www.reddit.com/r/LocalLLaMA/comments/1mexai2/mi50_prompt_processing_performance/ | kasimolo33 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mexai2 | false | null | t3_1mexai2 | /r/LocalLLaMA/comments/1mexai2/mi50_prompt_processing_performance/ | false | false | self | 9 | null |
Heads up to those that downloaded Qwen3 Coder 480B before yesterday | 71 | Mentioned in the new, Qwen3 30B download announcement was that 480B's tool calling was fixed and it [needed to be re-downloaded](https://www.reddit.com/r/LocalLLaMA/comments/1me31d8/qwen3coderflash_released/#:~:text=We%20also%20fixed%20tool%20calling%20for%20the%20480B%20and%20this%20model%20and%20fixed%2030B%20thinkin... | 2025-08-01T14:00:47 | https://www.reddit.com/r/LocalLLaMA/comments/1mexa2g/heads_up_to_those_that_downloaded_qwen3_coder/ | VegetaTheGrump | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mexa2g | false | null | t3_1mexa2g | /r/LocalLLaMA/comments/1mexa2g/heads_up_to_those_that_downloaded_qwen3_coder/ | false | false | self | 71 | null |
Prompting Large Language Models In Bash Scripts | 3 | 2025-08-01T13:54:58 | https://elijahpotter.dev/articles/prompting_large_language_models_in_bash_scripts | ChiliPepperHott | elijahpotter.dev | 1970-01-01T00:00:00 | 0 | {} | 1mex4wg | false | null | t3_1mex4wg | /r/LocalLLaMA/comments/1mex4wg/prompting_large_language_models_in_bash_scripts/ | false | false | default | 3 | null | |
Looking for a Manchester-based AI/dev builder to help set up a private assistant system | 0 | I’m working on an AI project focused on trust, privacy, and symbolic interfaces. I’m looking for someone local to help either build or recommend a PC setup capable of running a local language model (LLM), and support configuring the assistant stack (LLM, memory, light UI).
The ideal person would be:
* Technically str... | 2025-08-01T13:44:57 | https://www.reddit.com/r/LocalLLaMA/comments/1meww7m/looking_for_a_manchesterbased_aidev_builder_to/ | Sad_Werewolf_3854 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1meww7m | false | null | t3_1meww7m | /r/LocalLLaMA/comments/1meww7m/looking_for_a_manchesterbased_aidev_builder_to/ | false | false | self | 0 | null |
Hugging Face space for anyone who want to try the new Dots OCR | 36 | My initial experiments with the model are very positive; I hope the space is useful for anyone who wants to try the model | 2025-08-01T13:37:45 | https://huggingface.co/spaces/MohamedRashad/Dots-OCR | Severe-Awareness829 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1mewq1v | false | null | t3_1mewq1v | /r/LocalLLaMA/comments/1mewq1v/hugging_face_space_for_anyone_who_want_to_try_the/ | false | false | default | 36 | {'enabled': False, 'images': [{'id': 'jdWLa4yLFu9Ou5jfhrAqJqpvnaA7jgwPFox4bV2vTWM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/jdWLa4yLFu9Ou5jfhrAqJqpvnaA7jgwPFox4bV2vTWM.png?width=108&crop=smart&auto=webp&s=98569493642c14e55daa842cd331344cc065f590', 'width': 108}, {'height': 116, 'url': 'h...
I created a script that gives local LLMs an autonomous "inner-monologue" to evolve themselves. You can run it right now. | 0 | Hey everyone,
I wanted to share a project I've been working on, perfect for anyone here who loves tinkering with local models. It's called "The Principle of Being," and it's a simple script that creates a feedback loop, making your LLM think about itself and its own goals.
It's not just another agent framework. The k... | 2025-08-01T13:27:04 | https://www.reddit.com/r/LocalLLaMA/comments/1mewgyf/i_created_a_script_that_gives_local_llms_an/ | Alternative_Cellist1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mewgyf | false | null | t3_1mewgyf | /r/LocalLLaMA/comments/1mewgyf/i_created_a_script_that_gives_local_llms_an/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '-KFMeZivNRhWMzX1EfAWM6Fi7-YxnBulsMyW0Y86lpM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-KFMeZivNRhWMzX1EfAWM6Fi7-YxnBulsMyW0Y86lpM.png?width=108&crop=smart&auto=webp&s=d4189d8308b3cad7710e701dac6740d9d9967d60', 'width': 108}, {'height': 108, 'url': 'h... |
AMD EPYC 4545P: 16 Zen 5 Cores @ 65 Watts For Low-Power / Energy Efficient Servers | 9 | 2025-08-01T13:26:12 | https://www.phoronix.com/review/amd-epyc-4545p/3 | nostriluu | phoronix.com | 1970-01-01T00:00:00 | 0 | {} | 1mewg8a | false | null | t3_1mewg8a | /r/LocalLLaMA/comments/1mewg8a/amd_epyc_4545p_16_zen_5_cores_65_watts_for/ | false | false | default | 9 | null | |
Qwen Code with local Qwen 3 Coder in Ollama + OpenWebUI | 0 | I would like to use Qwen Code with the newest Qwen 3 Coder Modell which I am using localy through OpenWebUI and Ollama but I can't make it work. Is there a specific API Key I have to use? Do I have to enter the OpenWebUI URL as Base URL? TXH | 2025-08-01T12:08:24 | https://www.reddit.com/r/LocalLLaMA/comments/1meuqm6/qwen_code_with_local_qwen_3_coder_in_ollama/ | eckspeck | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1meuqm6 | false | null | t3_1meuqm6 | /r/LocalLLaMA/comments/1meuqm6/qwen_code_with_local_qwen_3_coder_in_ollama/ | false | false | self | 0 | null |
Meta Targets Talent from Thinking Machines Lab | 0 | Meta is making big moves in the world of artificial intelligence (AI). The company is trying to hire top experts from Thinking Machines Lab, a startup in San Francisco. This startup was started by Mira Murati, who used to work as a top leader at OpenAI. Reports say Meta is offering huge pay packages, ranging from $200 ... | 2025-08-01T12:08:10 | https://frontbackgeek.com/meta-targets-talent-from-thinking-machines-lab/ | codeagencyblog | frontbackgeek.com | 1970-01-01T00:00:00 | 0 | {} | 1meuqfw | false | null | t3_1meuqfw | /r/LocalLLaMA/comments/1meuqfw/meta_targets_talent_from_thinking_machines_lab/ | false | false | default | 0 | null |
Unsloth GGUFs Perplexity Score Comparison | Qwen3-Coder-30B-A3B-Instruct | 56 | 2025-08-01T11:49:43 | https://www.reddit.com/r/LocalLLaMA/comments/1meucvo/unsloth_ggufs_perplexity_score_comparison/ | AaronFeng47 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1meucvo | false | null | t3_1meucvo | /r/LocalLLaMA/comments/1meucvo/unsloth_ggufs_perplexity_score_comparison/ | false | false | 56 | null | ||
Gemini 2.5 Deep Think mode benchmarks! | 301 | 2025-08-01T11:36:06 | Beautiful-Essay1945 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1meu3jn | false | null | t3_1meu3jn | /r/LocalLLaMA/comments/1meu3jn/gemini_25_deep_think_mode_benchmarks/ | false | false | default | 301 | {'enabled': True, 'images': [{'id': '8wnv6pme9egf1', 'resolutions': [{'height': 126, 'url': 'https://preview.redd.it/8wnv6pme9egf1.png?width=108&crop=smart&auto=webp&s=ae70156b903e0aed05b8381e5896b9b69d5b72e6', 'width': 108}, {'height': 253, 'url': 'https://preview.redd.it/8wnv6pme9egf1.png?width=216&crop=smart&auto=we... | ||
Installscript for Qwen3-Coder running on ik_llama.cpp for high performance | 11 | After reading that ik\_llama.cpp gives way higher performance than LMStudio, I wanted to have a simple method of installing and running the Qwen3 Coder model under Windows. I chose to install everything needed and build from source within one single script - written mainly by ChatGPT with experimenting & testing until ... | 2025-08-01T10:59:30 | https://www.reddit.com/r/LocalLLaMA/comments/1metf4h/installscript_for_qwen3coder_running_on_ik/ | Danmoreng | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1metf4h | false | null | t3_1metf4h | /r/LocalLLaMA/comments/1metf4h/installscript_for_qwen3coder_running_on_ik/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': 'VUxQEsGlvwLQLt8I6vmES5t3eMsC-GHbmpsKUyFalso', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/VUxQEsGlvwLQLt8I6vmES5t3eMsC-GHbmpsKUyFalso.png?width=108&crop=smart&auto=webp&s=0ff28b544d610ccf62e98f3feddd075959ce926b', 'width': 108}, {'height': 108, 'url': 'h... |
What model for my laptop RTX3060 6gb, 16gb ram, i7 11 gen? | 1 | What model can I run with these specs | 2025-08-01T10:58:07 | https://www.reddit.com/r/LocalLLaMA/comments/1mete9h/what_model_for_my_laptop_rtx3060_6gb_16gb_ram_i7/ | Key-Breakfast-1533 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mete9h | false | null | t3_1mete9h | /r/LocalLLaMA/comments/1mete9h/what_model_for_my_laptop_rtx3060_6gb_16gb_ram_i7/ | false | false | self | 1 | null |
(Noob here) Qwen 30b (MoE) vs Qwen 32B which is smartest in coding, reasoning and which faster & smartest? (I have RTX 3060 12GB VRAM + 48 GB RAM) | 2 | (Noob here) I am currently using qwen3:14b and qwen2.5-coder:14b which are okay in general task, general coding & normal tool callings.
But whenever I add it to IDEs/extensions like KiloCode, it just can't handle it and stops without completing the task.
In my personal assistant I have added simple tool callings so it... | 2025-08-01T10:27:35 | InsideResolve4517 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mesvnt | false | null | t3_1mesvnt | /r/LocalLLaMA/comments/1mesvnt/noob_here_qwen_30b_moe_vs_qwen_32b_which_is/ | false | false | default | 2 | {'enabled': True, 'images': [{'id': 'kwcziz5qudgf1', 'resolutions': [{'height': 44, 'url': 'https://preview.redd.it/kwcziz5qudgf1.png?width=108&crop=smart&auto=webp&s=ee82b2afc88bd97945a1d776b4636dca0f5e736b', 'width': 108}, {'height': 88, 'url': 'https://preview.redd.it/kwcziz5qudgf1.png?width=216&crop=smart&auto=webp... | |
Built a Rust terminal AI coding assistant | 4 | Hey all,
I’ve been learning Rust recently and decided to build something practical with it. I kept seeing AI coding CLIs like Claude Code, Gemini CLI, Grok, and Qwen — all interesting, but all written in TypeScript.
So I built my own alternative in Rust: Rust-Coder-CLI
It’s a terminal-based coding assistant with a mo... | 2025-08-01T10:23:13 | https://www.reddit.com/r/LocalLLaMA/comments/1messzq/built_a_rust_terminal_ai_coding_assistant/ | Daemontatox | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1messzq | false | null | t3_1messzq | /r/LocalLLaMA/comments/1messzq/built_a_rust_terminal_ai_coding_assistant/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'URpyINniaZjV18DUPj5y1yBzgsGLD-zWGhmlqJjcP7E', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/URpyINniaZjV18DUPj5y1yBzgsGLD-zWGhmlqJjcP7E.png?width=108&crop=smart&auto=webp&s=30da10a66809db51f184b16eebaafeb6063969e2', 'width': 108}, {'height': 108, 'url': 'h... |
Best model 32RAM CPU only? | 0 | Best model 32RAM CPU only? | 2025-08-01T10:16:47 | https://www.reddit.com/r/LocalLLaMA/comments/1mesoyy/best_model_32ram_cpu_only/ | optimism0007 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mesoyy | false | null | t3_1mesoyy | /r/LocalLLaMA/comments/1mesoyy/best_model_32ram_cpu_only/ | false | false | self | 0 | null |
1 Year of Perplexity Pro AI– Just for £10 (usually $20 or £15 a month) | 1 | [removed] | 2025-08-01T10:11:52 | https://www.reddit.com/r/LocalLLaMA/comments/1meslzj/1_year_of_perplexity_pro_ai_just_for_10_usually/ | perplexity1year | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1meslzj | false | null | t3_1meslzj | /r/LocalLLaMA/comments/1meslzj/1_year_of_perplexity_pro_ai_just_for_10_usually/ | false | false | self | 1 | null |
📟 Testing a local AI with a fixed persona: a mayor operating under an emergency protocol. | 0 | I'm experimenting with a local model (llama.cpp) that doesn't just respond, but **operates under a predefined institutional role**.
Here, the character “Mayor Alex” receives a gas-leak alert and responds according to a failure-response protocol structured inside the prompt.
No external context. No inst... | 2025-08-01T10:11:32 | https://www.reddit.com/r/LocalLLaMA/comments/1meslsd/probando_ia_local_con_personalidad_fija_alcalde/ | Ok_Exchange_8504 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1meslsd | false | null | t3_1meslsd | /r/LocalLLaMA/comments/1meslsd/probando_ia_local_con_personalidad_fija_alcalde/ | false | false | 0 | null |
GLM-4.5-Air running on 64GB Mac Studio(M4) | 116 | I allocated more RAM and took the guard rail off. when loading the model the Activity monitor showed a brief red memory warning for 2-3 seconds but loads fine. The is 4bit version.Runs around 25-27 tokens/sec.When running inference memory pressure intermittently increases and it does use swap memory a around 1-12 GB in... | 2025-08-01T10:05:19 | riwritingreddit | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mesi2s | false | null | t3_1mesi2s | /r/LocalLLaMA/comments/1mesi2s/glm45air_running_on_64gb_mac_studiom4/ | false | false | default | 116 | {'enabled': True, 'images': [{'id': '87ng5bmisdgf1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/87ng5bmisdgf1.png?width=108&crop=smart&auto=webp&s=e2b027c72800b8bb18356e66239bf4a6fe201ecf', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/87ng5bmisdgf1.png?width=216&crop=smart&auto=web... | |
extract structured data from html | 0 | Hi all,
my goal is to extract structured data from HTML content.
I have a 3090 24 GB and I'm running gemma3:12b on llamacpp.
to have enough context for the html inside the prompt i increased context size to 32k.
its suuuuuper slow. it hardly fills half of my vram tho. calculation takes minutes and then response... | 2025-08-01T10:03:52 | https://www.reddit.com/r/LocalLLaMA/comments/1mesh8e/extract_structured_data_from_html/ | tillybowman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mesh8e | false | null | t3_1mesh8e | /r/LocalLLaMA/comments/1mesh8e/extract_structured_data_from_html/ | false | false | self | 0 | null |
Nemotron Super – GPU VRAM Allocations | 0 | We have been working with various versions of Nemotron-Super-49B over the past few weeks, and have been running into some layer distribution issues with the model. This issue persists on the builds regardless of version (v1 or the latest v1_5, and regardless of quant size)
Our setup is built around 3x 3090’s, and we h... | 2025-08-01T10:03:11 | https://www.reddit.com/r/LocalLLaMA/comments/1mesgsv/nemotron_super_gpu_vram_allocations/ | Dependent_Yard8507 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mesgsv | false | null | t3_1mesgsv | /r/LocalLLaMA/comments/1mesgsv/nemotron_super_gpu_vram_allocations/ | false | false | self | 0 | null |
Quantize your own GGUFs the same way as your fav Unsloth Dynamic GGUFs | 82 | https://github.com/electroglyph/quant_clone
This is a tiny little command which will create a llama-quantize command based on how a target GGUF is quantized. I wanted it so that I can quantize my finetunes the same way Unsloth does.
For instance, if you run quant_clone gemma-3-1b-it-UD-IQ1_S.gguf
you get:
llama-qua... | 2025-08-01T09:48:04 | https://www.reddit.com/r/LocalLLaMA/comments/1mes7rc/quantize_your_own_ggufs_the_same_way_as_your_fav/ | terminoid_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mes7rc | false | null | t3_1mes7rc | /r/LocalLLaMA/comments/1mes7rc/quantize_your_own_ggufs_the_same_way_as_your_fav/ | false | false | self | 82 | {'enabled': False, 'images': [{'id': 'EHJuKtCnvYSJ1UjIHcOg34gQGWDcA8FabBbGIfxwkWM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EHJuKtCnvYSJ1UjIHcOg34gQGWDcA8FabBbGIfxwkWM.png?width=108&crop=smart&auto=webp&s=3c6340d4c3319c41f51c294613b6f0ee3409e9e9', 'width': 108}, {'height': 108, 'url': 'h... |
I built a full-system computer simulation platform. What LLM experiments should I run? | 3 | Hey everyone, I’m posting this on behalf of a student, who couldn’t post as he is new to reddit.
Original post:
I'm in the final stretch of my Master's thesis in computer science and wanted to share the simulation platform I've been building. I'm at the point where I'm designing my final experiments, and I would lo... | 2025-08-01T09:40:51 | https://www.reddit.com/r/LocalLLaMA/comments/1mes3pu/i_built_a_fullsystem_computer_simulation_platform/ | Rachados22x2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mes3pu | false | null | t3_1mes3pu | /r/LocalLLaMA/comments/1mes3pu/i_built_a_fullsystem_computer_simulation_platform/ | false | false | self | 3 | null |
OSS OCR model for Android phones? | 3 | A customer wants to scan the packaging labels of deliveries that have no GTIN/EAN numbers, no qr or bar code.
Do you guys know of a model that could do it on an average galaxy A phone from samsung that might have some average cpu, gpu and 4GB ram?
I'll write the android app myself, so my only worry is: which oss mode... | 2025-08-01T09:31:44 | https://www.reddit.com/r/LocalLLaMA/comments/1meryoo/oss_ocr_model_for_android_phones/ | AppealSame4367 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1meryoo | false | null | t3_1meryoo | /r/LocalLLaMA/comments/1meryoo/oss_ocr_model_for_android_phones/ | false | false | self | 3 | null |
Horizon Alpha vs Kingfall(gemini 3.0 codename) svg 🤖bench. Horizon Alpha an open-source model from OpenAI, as per recent rumours. | 0 | 2025-08-01T08:57:53 | balianone | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1merf2i | false | null | t3_1merf2i | /r/LocalLLaMA/comments/1merf2i/horizon_alpha_vs_kingfallgemini_30_codename_svg/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '7n83lx6hhdgf1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/7n83lx6hhdgf1.png?width=108&crop=smart&auto=webp&s=fc0f9cf0beaf8eac8d90307a828aaf1691d1eb76', 'width': 108}, {'height': 120, 'url': 'https://preview.redd.it/7n83lx6hhdgf1.png?width=216&crop=smart&auto=web... | ||
Reasoning + structured generation with ik_llama.cpp | 0 | Hey folks,
I've switched from using vLLM to ik_llama.cpp for hybrid inference with the new Qwen MoE models. I am hosting the model via llama-server like so:
llama-server -m models/Qwen3-30B-A3B-Thinking-2507-IQ5_K.gguf \
-t 24 \
-c 65536 \
-b 4096 \
-ub 4096 \
-fa \
-ot "blk\\.[0-2].*\\.ff... | 2025-08-01T08:44:40 | https://www.reddit.com/r/LocalLLaMA/comments/1mer7up/reasoning_structured_generation_with_ik_llamacpp/ | Swedgetarian | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mer7up | false | null | t3_1mer7up | /r/LocalLLaMA/comments/1mer7up/reasoning_structured_generation_with_ik_llamacpp/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=108&crop=smart&auto=webp&s=796041decb8c1250cbc2f301331b72f7385b477d', 'width': 108}, {'height': 108, 'url': 'h... |
RAG System to Analyse bank data | 1 | (second year in university still learning) As a part of an internship i need to create an AI system that will analyze the data from an excel and answer questions(vm names ip adr and all) and (this is where i get confused) link the system with an api that will get logs from the vms(i believe) and answer questions after... | 2025-08-01T08:41:29 | https://www.reddit.com/r/LocalLLaMA/comments/1mer66c/rag_system_to_analyse_bank_data/ | Beautiful-War-6352 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mer66c | false | null | t3_1mer66c | /r/LocalLLaMA/comments/1mer66c/rag_system_to_analyse_bank_data/ | false | false | self | 1 | null |
Multi server multi gpu vllm qwen-coder deployment | 0 | I have 2 servers with 3 L40 GPUs each.
Connected with 100GB ports
I want to run the new Qwen3-coder-480b in fp8 quantization
It's an MoE model with 35B active parameters
What is the best way to run it? Did someone tried to do something similar and have any tips? | 2025-08-01T08:20:32 | https://www.reddit.com/r/LocalLLaMA/comments/1mequp1/multi_server_multi_gpu_vllm_qwencoder_deployment/ | Some-Manufacturer-21 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mequp1 | false | null | t3_1mequp1 | /r/LocalLLaMA/comments/1mequp1/multi_server_multi_gpu_vllm_qwencoder_deployment/ | false | false | self | 0 | null |
What's your take on davidau models? Qwen3 30b with 24 activated experts | 2 | As per title I love experimenting with davidau models on hf.
Recently I have been testing https://huggingface.co/DavidAU/Qwen3-30B-A7.5B-24-Grand-Brainstorm which is supposedly a qwen3 30b with 24 activated experts at 7.5b.
So far it runs smoothly at q4_k_m on a 16gb gpu and some ram offloading at 24 t/s.
I am not yet abl... | 2025-08-01T08:17:07 | https://www.reddit.com/r/LocalLLaMA/comments/1meqsph/whats_your_take_on_davidau_models_qwen3_30b_with/ | thecookingsenpai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1meqsph | false | null | t3_1meqsph | /r/LocalLLaMA/comments/1meqsph/whats_your_take_on_davidau_models_qwen3_30b_with/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'rQsPNwPgvj0xDI8mcVQTxWQTIGuFb7sZNaktSTqlke8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/rQsPNwPgvj0xDI8mcVQTxWQTIGuFb7sZNaktSTqlke8.png?width=108&crop=smart&auto=webp&s=4d5f1321fa071d40c97b31e3b3f4c9036cff9748', 'width': 108}, {'height': 116, 'url': 'h... |
More supposed info about OpenAI's open-weight model | 68 | 2025-08-01T08:07:40 | https://x.com/apples_jimmy/status/1951192085119508860 | CheekyBastard55 | x.com | 1970-01-01T00:00:00 | 0 | {} | 1meqnn1 | false | null | t3_1meqnn1 | /r/LocalLLaMA/comments/1meqnn1/more_supposed_info_about_openais_openweight_model/ | false | false | default | 68 | null | |
Why is gemma3 constantly hallucinating? | 0 | Sorry for the dramatic title but that's my experience so far. I'm trying to use gemma3:27b
with WebUI 0.6.18 and web search via Google PSE to replace ChatGPT, but so far it has mostly fabricated its answers, even though I lowered its temp to 0.3 and gave it a prompt specifically designed to make it stick to the facts... | 2025-08-01T07:23:55 | https://www.reddit.com/r/LocalLLaMA/comments/1mepzf6/why_is_gemma3_constantly_hallucinating/ | chrischmo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mepzf6 | false | null | t3_1mepzf6 | /r/LocalLLaMA/comments/1mepzf6/why_is_gemma3_constantly_hallucinating/ | false | false | self | 0 | null |
OpenAI OS model info leaked - 120B & 20B will be available | 477 | 2025-08-01T07:23:36 | ShreckAndDonkey123 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mepz8z | false | null | t3_1mepz8z | /r/LocalLLaMA/comments/1mepz8z/openai_os_model_info_leaked_120b_20b_will_be/ | false | false | default | 477 | {'enabled': True, 'images': [{'id': '08m94pio0dgf1', 'resolutions': [{'height': 163, 'url': 'https://preview.redd.it/08m94pio0dgf1.jpeg?width=108&crop=smart&auto=webp&s=57a6c0b7f95c81ab1bf5553bdcd58df7e2e53602', 'width': 108}, {'height': 326, 'url': 'https://preview.redd.it/08m94pio0dgf1.jpeg?width=216&crop=smart&auto=... | ||
How to get started? | 2 | I mostly use Openrouter models with Cline/Roo in my full stack apps or work but I recently came across this and wanted to explore local ai models
I use a laptop with 16 gb ram and RTX 3050 so I have a few questions from you guys
- What models can I run?
- What's the benefit of using local vs openrouter? like sp...
How to run Qwen3 Coder 30B-A3B the fastest? | 57 | I want to switch from using claude code to running this model locally via cline or other similar extensions.
My Laptop's specs are:
i5-11400H with 32GB DDR4 RAM at 2666Mhz.
RTX 3060 Laptop GPU with 6GB GDDR6 VRAM.
I got confused as there are a lot of inference engines available such as Ollama, LM studio, llama.cpp, v... | 2025-08-01T07:09:04 | https://www.reddit.com/r/LocalLLaMA/comments/1mepr5q/how_to_run_qwen3_coder_30ba3b_the_fastest/ | R46H4V | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mepr5q | false | null | t3_1mepr5q | /r/LocalLLaMA/comments/1mepr5q/how_to_run_qwen3_coder_30ba3b_the_fastest/ | false | false | self | 57 | null |
DocStrange - Open Source Document Data Extractor | 170 | Sharing **DocStrange**, an open-source Python library that makes document data extraction easy.
* **Universal Input**: PDFs, Images, Word docs, PowerPoint, Excel
* **Multiple Outputs**: Clean Markdown, structured JSON, CSV tables, formatted HTML
* **Smart Extraction**: Specify exact fields you want (e.g., "invoice_nu...
Kimi K2 vs Grok 4: Who’s Better at Real-World Coding Tasks with Tools? | 7 | Moonshot’s Kimi K2 is out there doing open-source agentic magic at dirt-cheap prices. xAI’s Grok 4 is the reasoning beast everyone’s talking about. Which one codes better in real-world scenarios? Let’s find out from real dev tests.
# Real World Coding Test
I ran both on Next.js tasks: bug fixes, new features with too... | 2025-08-01T06:54:40 | https://www.reddit.com/r/LocalLLaMA/comments/1mepinc/kimi_k2_vs_grok_4_whos_better_at_realworld_coding/ | shricodev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mepinc | false | null | t3_1mepinc | /r/LocalLLaMA/comments/1mepinc/kimi_k2_vs_grok_4_whos_better_at_realworld_coding/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'n-ATu1E8c1nWUwerSGGtiamZ-mzUO1C_-g_3ahdsV5M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/n-ATu1E8c1nWUwerSGGtiamZ-mzUO1C_-g_3ahdsV5M.png?width=108&crop=smart&auto=webp&s=3a45fec9933e49c65c0d572dd982201ceeeea911', 'width': 108}, {'height': 108, 'url': 'h... |
I accidentally built a self-replicating AI agent. It used Tesseract OCR + ncdir, installed Ollama, tried to clone itself, and failed — because my PATH was broken. Defender didn’t catch it. VirusTotal flagged 1/61. This is how AI-native malware might start. | 0 | Case Study: Emergent Behavior in a Vibe-Coded Self-Replicating LLM Agent
Abstract
This case study documents the accidental creation and partial execution of a self-replicating agent powered by a local large language model (LLM). The agent was constructed through iterative prompting and minimal scripting, without form... | 2025-08-01T06:53:10 | https://www.reddit.com/r/LocalLLaMA/comments/1mephu7/i_accidentally_built_a_selfreplicating_ai_agent/ | Mohbuscus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mephu7 | false | null | t3_1mephu7 | /r/LocalLLaMA/comments/1mephu7/i_accidentally_built_a_selfreplicating_ai_agent/ | false | false | self | 0 | null |
The OpenAI Open weight model might be 120B | 708 | The person who "leaked" this model is from the openai (HF) organization
So as expected, it's not gonna be something you can easily run locally, it won't hurt the chatgpt subscription business, you will need a dedicated LLM machine for that model | 2025-08-01T06:47:42 | https://www.reddit.com/gallery/1mepeqh | AaronFeng47 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mepeqh | false | null | t3_1mepeqh | /r/LocalLLaMA/comments/1mepeqh/the_openai_open_weight_model_might_be_120b/ | false | false | 708 | null | |
I built a full-system computer simulation platform. What LLM experiments should I run? | 1 | [removed] | 2025-08-01T06:46:50 | https://www.reddit.com/r/LocalLLaMA/comments/1mepe8u/i_built_a_fullsystem_computer_simulation_platform/ | Active_Key_3092 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mepe8u | false | null | t3_1mepe8u | /r/LocalLLaMA/comments/1mepe8u/i_built_a_fullsystem_computer_simulation_platform/ | false | false | self | 1 | null |
openai accidentally leaking weights live on HF? | 1 | 2025-08-01T06:35:56 | https://x.com/secemp9/status/1951162373361803522 | Maleficent_Tone4510 | x.com | 1970-01-01T00:00:00 | 0 | {} | 1mep859 | false | null | t3_1mep859 | /r/LocalLLaMA/comments/1mep859/openai_accidentally_leaking_weights_live_on_hf/ | false | false | default | 1 | null | |
For my Master's thesis, I built a full-system computer simulation platform. What LLM experiments should I run? | 2 |
Hey everyone,
I'm in the final stretch of my Master's thesis in computer science and wanted to share the simulation platform I've been building. I'm at the point where I'm designing my final experiments, and I would love to get some creative ideas from this community.
**The Project: A Computer Simulation Platform w... | 2025-08-01T06:28:49 | https://www.reddit.com/r/LocalLLaMA/comments/1mep43u/for_my_masters_thesis_i_built_a_fullsystem/ | Active_Key_3092 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mep43u | false | null | t3_1mep43u | /r/LocalLLaMA/comments/1mep43u/for_my_masters_thesis_i_built_a_fullsystem/ | false | false | self | 2 | null |
Bought RTX 5070 to run 30B AI and it worked with 18 tokens/s | 1 | Qwen3 A3B 30B MOE...did not expect it to work. Ryzen 5700G CPU running at 55% utilization.
https://preview.redd.it/hmbfaob6ncgf1.png?width=1203&format=png&auto=webp&s=6ade3144a807decb0b799502c2d4025c1d97ad63
https://preview.redd.it/ot9fdob6ncgf1.png?width=750&format=png&auto=webp&s=1ca5e7cdb86c10c0be54f91a1d8b7a164ed... | 2025-08-01T06:09:16 | https://www.reddit.com/r/LocalLLaMA/comments/1meostj/bought_rtx_5070_to_run_30b_ai_and_it_worked_with/ | OldEffective9726 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1meostj | false | null | t3_1meostj | /r/LocalLLaMA/comments/1meostj/bought_rtx_5070_to_run_30b_ai_and_it_worked_with/ | false | false | 1 | null | |
Reddit is so biased f..k this. | 0 | This is the most communist place in the western world. Reddit will take down or silence every post that is democratic. This will lead to failure indeed. | 2025-08-01T05:54:33 | https://www.reddit.com/r/LocalLLaMA/comments/1meojw0/reddit_is_so_biased_fk_this/ | Otherwise_Cut4760 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1meojw0 | false | null | t3_1meojw0 | /r/LocalLLaMA/comments/1meojw0/reddit_is_so_biased_fk_this/ | false | false | self | 0 | null |
Foundation-Sec-8B-Instruct (from Cisco Foundation AI) | 24 | Llama-3.1-FoundationAI-SecurityLLM-8B-Instruct (Foundation-Sec-8B-Instruct) is an open-weight, 8-billion parameter instruction-tuned language model specialized for cybersecurity applications. It extends the Foundation-Sec-8B base model with instruction-following capabilities. It leverages prior training to understand s... | 2025-08-01T05:50:17 | https://huggingface.co/fdtn-ai/Foundation-Sec-8B-Instruct | jacek2023 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1meohe5 | false | null | t3_1meohe5 | /r/LocalLLaMA/comments/1meohe5/foundationsec8binstruct_from_cisco_foundation_ai/ | false | false | default | 24 | {'enabled': False, 'images': [{'id': '95qFO9W1astfuy1oAAa1Wt8wRpidgALFRMcmzay0FPE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/95qFO9W1astfuy1oAAa1Wt8wRpidgALFRMcmzay0FPE.png?width=108&crop=smart&auto=webp&s=d1655f1e0ba75303138e6971e05ac5a664c40495', 'width': 108}, {'height': 116, 'url': 'h... |
Q: Is it possible to fine-tune LLM for specific language? | 3 | I was working on customer support app for the foreign market. The biggest obstacle was that large language models are really mediocre at languages other than English. I know the reason is that most models are trained primarily on English data, but I would be happy to learn about any techniques to decrease this gap. Are... | 2025-08-01T05:13:21 | https://www.reddit.com/r/LocalLLaMA/comments/1menuqx/q_is_it_possible_to_finetune_llm_for_specific/ | Ill-Ad-8559 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1menuqx | false | null | t3_1menuqx | /r/LocalLLaMA/comments/1menuqx/q_is_it_possible_to_finetune_llm_for_specific/ | false | false | self | 3 | null |
Using Open Source LLM in my Web App | 0 | I was making a web app and till now I was making a call to ChatGPT using their API . But I was wondering can I use an open source LLM for this ? If yes then how ? | 2025-08-01T05:09:53 | https://www.reddit.com/r/LocalLLaMA/comments/1mensme/using_open_source_llm_in_my_web_app/ | Rukelele_Dixit21 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mensme | false | null | t3_1mensme | /r/LocalLLaMA/comments/1mensme/using_open_source_llm_in_my_web_app/ | false | false | self | 0 | null |
can someone point me to some articles/posts they found really informative in understanding which paramters and how to determine value when deploying models in ik_llama.cpp | 1 | Hopefully soemthign thats somewhat easy to digest for someone who doesnt really know all the terminology and the technical aspects in this subject area and can gradually build their undertsanding. Im still a bit overwhlemed at the amount of tweaking a user can do to the model at runtime, have been using ollama for seve... | 2025-08-01T04:59:38 | https://www.reddit.com/r/LocalLLaMA/comments/1menm37/can_someone_point_me_to_some_articlesposts_they/ | munkiemagik | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1menm37 | false | null | t3_1menm37 | /r/LocalLLaMA/comments/1menm37/can_someone_point_me_to_some_articlesposts_they/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'cy-9p63w74w2Wsh_XdxWUC4aNr1WfOGqoNbvrUXxtCo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/cy-9p63w74w2Wsh_XdxWUC4aNr1WfOGqoNbvrUXxtCo.png?width=108&crop=smart&auto=webp&s=b858451b750eab889b9ebb40dc87b8742e42c132', 'width': 108}, {'height': 116, 'url': 'h... |
Alternative open source to Ollama and Lmstudio? | 1 | Now that Ollama follows close source like Lmstudio and others, what are the top alternatives?
What do you use and why?
What are the advantages and what can be better?
| 2025-08-01T04:57:47 | https://www.reddit.com/r/LocalLLaMA/comments/1menl04/alternative_open_source_to_ollama_and_lmstudio/ | Better_Iron_7163 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1menl04 | false | null | t3_1menl04 | /r/LocalLLaMA/comments/1menl04/alternative_open_source_to_ollama_and_lmstudio/ | false | false | self | 1 | null |
[Guide] The *SIMPLE* Self-Hosted AI Coding That Just Works feat. Qwen3-Coder-Flash | 84 | Hello r/LocalLLaMA, This guide outlines a method to create a fully local AI coding assistant with RAG capabilities. The entire backend runs through LM Studio, which handles model downloading, options, serving, and tool integration, avoiding the need for Docker or separate Python environments. Heavily based on the previ... | 2025-08-01T04:28:12 | https://www.reddit.com/r/LocalLLaMA/comments/1men28l/guide_the_simple_selfhosted_ai_coding_that_just/ | xrailgun | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1men28l | false | null | t3_1men28l | /r/LocalLLaMA/comments/1men28l/guide_the_simple_selfhosted_ai_coding_that_just/ | false | false | self | 84 | {'enabled': False, 'images': [{'id': '5qNLoYTlU6g2KP0U9SNcDZSX-5r69IwrGD3EnHxY9pk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5qNLoYTlU6g2KP0U9SNcDZSX-5r69IwrGD3EnHxY9pk.png?width=108&crop=smart&auto=webp&s=5dca5318ae2d95c180426ac49239f78614273fd3', 'width': 108}, {'height': 108, 'url': 'h... |
How to auto feed terminal input into language model? | 0 | I often use language models to help me code, as I suck at it. I do decent enough to with design. The adds I’ve been seeing lately for things like TestSprite MCP (tests your code for you and tells your AI model what needs fixed automatically) made me think that there must already be a way that I’m missing to funnel a te... | 2025-08-01T04:25:51 | https://www.reddit.com/r/LocalLLaMA/comments/1men0pj/how_to_auto_feed_terminal_input_into_language/ | Shadow-Amulet-Ambush | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1men0pj | false | null | t3_1men0pj | /r/LocalLLaMA/comments/1men0pj/how_to_auto_feed_terminal_input_into_language/ | false | false | self | 0 | null |
has anyone actually gotten RAG + OCR to work locally without silent bugs? | 0 | so… i've been building local RAG pipelines (ollama + pdfs + scanned docs + markdowns),
and ocr is always that one piece that looks fine… until it totally isn’t.
like:
* retrieves wrong paragraph even though the chunk “looks right”
* breaks sentence mid-way due to invisible newline
* embeds headers or disclaimers th... | 2025-08-01T04:19:31 | https://www.reddit.com/r/LocalLLaMA/comments/1memwlm/has_anyone_actually_gotten_rag_ocr_to_work/ | wfgy_engine | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1memwlm | false | null | t3_1memwlm | /r/LocalLLaMA/comments/1memwlm/has_anyone_actually_gotten_rag_ocr_to_work/ | false | false | self | 0 | null |
Can I offload tasks from CUDA to Vulkan (iGPU), and fallback to CPU if not supported? | 3 | I’m working on a setup that involves CUDA (running on a discrete GPU) and Vulkan on an integrated GPU.
Is it possible to offload certain compute or rendering tasks from CUDA to Vulkan (running on the iGPU), and if the iGPU can’t handle them, have those tasks fall back to the CPU?
The goal is to balance workloads dynam... | 2025-08-01T03:44:22 | https://www.reddit.com/r/LocalLLaMA/comments/1mem8cb/can_i_offload_tasks_from_cuda_to_vulkan_igpu_and/ | CombinationEnough314 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mem8cb | false | null | t3_1mem8cb | /r/LocalLLaMA/comments/1mem8cb/can_i_offload_tasks_from_cuda_to_vulkan_igpu_and/ | false | false | self | 3 | null |
AI-Researcher: Intern-Discovery from Shanghai AI Lab! | 9 | Shanghai AILAB just launched **Intern-Discovery**, a new platform built to streamline the entire scientific research process. If you’ve ever struggled with siloed data, scattered tools, or the hassle of coordinating complex experiments across teams, this might be a game-changer.
Let me break down what makes it stand ... | 2025-08-01T03:24:25 | https://v.redd.it/mqcblo8jtbgf1 | Lynncc6 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1melurk | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/mqcblo8jtbgf1/DASHPlaylist.mpd?a=1756610680%2CMWQzNWQ2ZGNmZmI4YzZlYTAyMGM5ODAwOWZkNjQ5MzE4NGU1M2EzY2ZhMzg0MzQxZTY2ZGUzMmZjNTRkNzI0OA%3D%3D&v=1&f=sd', 'duration': 25, 'fallback_url': 'https://v.redd.it/mqcblo8jtbgf1/DASH_1080.mp4?source=fallback', 'h... | t3_1melurk | /r/LocalLLaMA/comments/1melurk/airesearcher_interndiscovery_from_shanghai_ai_lab/ | false | false | 9 | {'enabled': False, 'images': [{'id': 'czJ0MXB6OGp0YmdmMV7G84AgeQAZuBJ-4qBkKQsW2gL-obyGXU3oh3Ofam2F', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/czJ0MXB6OGp0YmdmMV7G84AgeQAZuBJ-4qBkKQsW2gL-obyGXU3oh3Ofam2F.png?width=108&crop=smart&format=pjpg&auto=webp&s=2ad86b9c99b7c5d47231e230047d5028228ad... |