| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Qwen’s TRIPLE release this week + Vid Gen model coming | 237 | Qwen just dropped a triple update. After months out of the spotlight, Qwen is back and bulked up. You can literally see the gains; the training shows. I was genuinely impressed.
I once called Alibaba “the first Chinese LLM team to evolve from engineering to product.” This week, I need to upgrade that take: it’s now se... | 2025-07-25T14:54:14 | https://www.reddit.com/gallery/1m91b98 | koc_Z3 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1m91b98 | false | null | t3_1m91b98 | /r/LocalLLaMA/comments/1m91b98/qwens_triple_release_this_week_vid_gen_model/ | false | false | 237 | null | |
A Perspective on DeepSeek and "Whataboutism" from a Mainlander | 1 | [removed] | 2025-07-25T14:32:28 | https://www.reddit.com/r/LocalLLaMA/comments/1m90qvp/a_perspective_on_deepseek_and_whataboutism_from_a/ | Sanitizer8819 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m90qvp | false | null | t3_1m90qvp | /r/LocalLLaMA/comments/1m90qvp/a_perspective_on_deepseek_and_whataboutism_from_a/ | false | false | nsfw | 1 | null |
How i can instal Mixtral Q4 | 1 | [removed] | 2025-07-25T14:28:27 | https://www.reddit.com/r/LocalLLaMA/comments/1m90n4j/how_i_can_instal_mixtral_q4/ | Physical-Ad4 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m90n4j | false | null | t3_1m90n4j | /r/LocalLLaMA/comments/1m90n4j/how_i_can_instal_mixtral_q4/ | false | false | self | 1 | null |
How can i install Mixtral Q4 on lama,i cant find how | 1 | [removed] | 2025-07-25T14:24:55 | https://www.reddit.com/r/LocalLLaMA/comments/1m90jy6/how_can_i_install_mixtral_q4_on_lamai_cant_find/ | Physical-Ad4 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m90jy6 | false | null | t3_1m90jy6 | /r/LocalLLaMA/comments/1m90jy6/how_can_i_install_mixtral_q4_on_lamai_cant_find/ | false | false | self | 1 | null |
I created an open-source macOS AI browser that uses MLX and Gemma 3n, feel free to fork it! | 140 | This is an AI web browser that uses local AI models. It's still very early, FULL of bugs and missing key features as a browser, but still good to play around with it.
Download it from [Github](https://github.com/nuance-dev/Web)
Note: AI features only work with M series chips. | 2025-07-25T14:06:36 | https://v.redd.it/fculp27z11ff1 | sirjoaco | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m903il | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/fculp27z11ff1/DASHPlaylist.mpd?a=1756044410%2CZTc0YTg4MjU0NWE1ZmE2YWYwYmZjMDM4M2E4MjgzODE3YjE4NjYzYTZjMDhkNGJkMjc4MGViNTI5ZmZhNTJmZA%3D%3D&v=1&f=sd', 'duration': 55, 'fallback_url': 'https://v.redd.it/fculp27z11ff1/DASH_1080.mp4?source=fallback', 'h... | t3_1m903il | /r/LocalLLaMA/comments/1m903il/i_created_an_opensource_macos_ai_browser_that/ | false | false | 140 | {'enabled': False, 'images': [{'id': 'NGRzMm4wN3oxMWZmMcBzbBspgkh2rR30WizNU_-HmGEenkhMN_T-yrpm1ere', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/NGRzMm4wN3oxMWZmMcBzbBspgkh2rR30WizNU_-HmGEenkhMN_T-yrpm1ere.png?width=108&crop=smart&format=pjpg&auto=webp&s=fcdf109b4fdfd5cf54e3e2e866680aebaec3a... | |
[Release] Arkhon Memory SDK – Local, lightweight long-term memory for LLM agents (pip install arkhon-memory) | 11 | Hi all,
I'm a solo dev and first-time open-source maintainer. I just released my first Python package: **Arkhon Memory SDK** – a lightweight, local-first memory module for autonomous LLM agents. This is part of my bigger project, but I thought this component could be useful for some of you.
- **No vector DBs,... | 2025-07-25T14:04:03 | https://www.reddit.com/r/LocalLLaMA/comments/1m9019j/release_arkhon_memory_sdk_local_lightweight/ | kissgeri96 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9019j | false | null | t3_1m9019j | /r/LocalLLaMA/comments/1m9019j/release_arkhon_memory_sdk_local_lightweight/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': 'YGnv7M3dsPp97Dq77LwXuer94UoHKkGm7B7JRQJXITI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YGnv7M3dsPp97Dq77LwXuer94UoHKkGm7B7JRQJXITI.png?width=108&crop=smart&auto=webp&s=7a209445a0ca39ec32cc43c3974f0c86515e04f3', 'width': 108}, {'height': 108, 'url': 'h...
Currently working 4 jobs (total comp $750K USD), 7 advisory positions (various startups, $1.4M in equity), 1 PHD (Efficient Artificial Intelligence). | 1 | [removed] | 2025-07-25T13:56:02 | https://www.reddit.com/r/LocalLLaMA/comments/1m8zu13/currently_working_4_jobs_total_comp_750k_usd_7/ | Different_Bed_8679 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8zu13 | false | null | t3_1m8zu13 | /r/LocalLLaMA/comments/1m8zu13/currently_working_4_jobs_total_comp_750k_usd_7/ | false | false | self | 1 | null |
Data Quality and Size for LoRa | 3 | I want to fine-tune a LLaVA model to include new details about an image. Think medical: I want the model to mention a new condition that a group of doctors described after looking at the image.
I have pairs of images and new details, given in a description.
I want to fine-tune the model. In my first batch of e... | 2025-07-25T13:37:56 | https://www.reddit.com/r/LocalLLaMA/comments/1m8zeg8/data_quality_and_size_for_lora/ | Emotional-Sundae4075 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8zeg8 | false | null | t3_1m8zeg8 | /r/LocalLLaMA/comments/1m8zeg8/data_quality_and_size_for_lora/ | false | false | self | 3 | null |
mini-swe-agent achieves 65% on SWE-bench in just 100 lines of python code | 54 | In 2024, we developed SWE-bench and SWE-agent at Princeton University and helped kickstart the coding agent revolution.
Back then, LMs were optimized to be great at chatting, but not much else. This meant that agent scaffolds had to get very creative (and complicated) to make LMs perform useful work.
But in 2025 LMs ... | 2025-07-25T13:24:09 | https://www.reddit.com/r/LocalLLaMA/comments/1m8z2ut/minisweagent_achieves_65_on_swebench_in_just_100/ | klieret | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8z2ut | false | null | t3_1m8z2ut | /r/LocalLLaMA/comments/1m8z2ut/minisweagent_achieves_65_on_swebench_in_just_100/ | false | false | self | 54 | {'enabled': False, 'images': [{'id': '15NdOHi3R2OvHOa0887eAppffC5IFF0TVIDnJkZPf7M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/15NdOHi3R2OvHOa0887eAppffC5IFF0TVIDnJkZPf7M.png?width=108&crop=smart&auto=webp&s=891acb349e03755473266d709a20b526d0a3b86c', 'width': 108}, {'height': 108, 'url': 'h... |
Performance of Minisforum AI X1 Pro compared to Mac Mini M1 | 1 | [removed] | 2025-07-25T13:15:54 | https://www.reddit.com/r/LocalLLaMA/comments/1m8yw21/performance_of_minisforum_ai_x1_pro_compared_to/ | Affectionate-Row1305 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8yw21 | false | null | t3_1m8yw21 | /r/LocalLLaMA/comments/1m8yw21/performance_of_minisforum_ai_x1_pro_compared_to/ | false | false | self | 1 | null |
AMD Radeon AI PRO R9700 - when can I buy it? | 7 | Dear AMD!
You have a potential segment of AI PRO R9700 consumers who cannot afford to buy an entire workstation based on several R9700s,
but these people (including me) have enough money to independently build a PC based on 2xR9700 and a consumer motherboard with cheaper UDIMM memory.
I will be very exhausted if I ... | 2025-07-25T13:15:45 | https://www.reddit.com/r/LocalLLaMA/comments/1m8yvxd/amd_radeon_ai_pro_r9700_when_can_i_buy_it/ | Mundane_Progress_898 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8yvxd | false | null | t3_1m8yvxd | /r/LocalLLaMA/comments/1m8yvxd/amd_radeon_ai_pro_r9700_when_can_i_buy_it/ | false | false | self | 7 | null |
GLM-4.1V-9B-Thinking - claims to "match or surpass Qwen2.5-72B" on many tasks | 180 | I'm happy to see this as my experience with these models for image recognition isn't very impressive. They mostly can't even tell when pictures are sideways, for example. | 2025-07-25T12:18:54 | https://github.com/THUDM/GLM-4.1V-Thinking | Pristine-Woodpecker | github.com | 1970-01-01T00:00:00 | 0 | {} | 1m8xmy9 | false | null | t3_1m8xmy9 | /r/LocalLLaMA/comments/1m8xmy9/glm41v9bthinking_claims_to_match_or_surpass/ | false | false | default | 180 | {'enabled': False, 'images': [{'id': '35YSJo7Lmen5bPXSpg8onBoKMeQEGpDpKRxTziQDzj8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/35YSJo7Lmen5bPXSpg8onBoKMeQEGpDpKRxTziQDzj8.png?width=108&crop=smart&auto=webp&s=6f8ce37456b595d44518bc9dbb50bfbbdc4bdd6f', 'width': 108}, {'height': 108, 'url': 'h... |
Guidance on diving deep into LLMs | 0 | Hey everyone,
I’m diving deeper into the world of Large Language Models (LLMs) and had many questions I was hoping to get input on from the community. Feel free to answer any of my questions; you don’t have to answer all of them!
1. LLM Frameworks:
I’m currently using LangChain and recently exploring LangGraph. Are... | 2025-07-25T12:11:59 | https://www.reddit.com/r/LocalLLaMA/comments/1m8xhxp/guidance_on_diving_deep_into_llms/ | Far-Run-3778 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8xhxp | false | null | t3_1m8xhxp | /r/LocalLLaMA/comments/1m8xhxp/guidance_on_diving_deep_into_llms/ | false | false | self | 0 | null |
Building Paradigm, Looking for right audience and feedbacks | 0 | Building Paradigm, an application for local inference on NVIDIA GPU and CPU. I launched the MVP of Paradigm; it's scrappy and buggy. I'm looking for the right people to help me build this. It converts compatible models to GGUF, saves the GGUF on your system for your use, and runs inference.
Link - > [https://github.com/NotKshitiz... | 2025-07-25T12:08:23 | https://www.reddit.com/r/LocalLLaMA/comments/1m8xf7n/building_paradigm_looking_for_right_audience_and/ | Xitizdumb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8xf7n | false | null | t3_1m8xf7n | /r/LocalLLaMA/comments/1m8xf7n/building_paradigm_looking_for_right_audience_and/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '91Xvi8EOxe-D17g2uyNq1I_HW8MYc05G7Zm2-1fA-wA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/91Xvi8EOxe-D17g2uyNq1I_HW8MYc05G7Zm2-1fA-wA.png?width=108&crop=smart&auto=webp&s=ae171aebdcd8ce9ab4e967566bf659337be51618', 'width': 108}, {'height': 108, 'url': 'h... |
I built a Hardware AI Code Editor with real-time profiling and AI optimization. We’re opening the preview version for free to a few users. If you’re interested, save your spot on our Discord | 0 | 2025-07-25T12:08:21 | https://www.reddit.com/r/LocalLLaMA/comments/1m8xf6l/i_built_a_hardware_ai_code_editor_with_realtime/ | Firm_Protection4004 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8xf6l | false | null | t3_1m8xf6l | /r/LocalLLaMA/comments/1m8xf6l/i_built_a_hardware_ai_code_editor_with_realtime/ | false | false | 0 | null | ||
A real-time video subtitle translation tool available across the entire web (the video must have audio) | 1 | [removed] | 2025-07-25T11:50:16 | https://www.reddit.com/gallery/1m8x1vv | Ok-Opposite-6725 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1m8x1vv | false | null | t3_1m8x1vv | /r/LocalLLaMA/comments/1m8x1vv/a_realtime_video_subtitle_translation_tool/ | false | false | 1 | null | |
How important is to have PRO 6000 Blackwell running on 16 PCIE lanes? | 10 | Greetings, we're a state-owned college, and we want to acquire an AI workstation. We have a strict budget that we cannot exceed, so working with our providers, we got two options within it:
1. One Threadripper PRO 9955WX, with WS WRX90E-SAGE SE, 1 PRO 6000 Blackwell, and 256 GB RAM
2. One AMD Ryzen 9 99... | 2025-07-25T11:40:20 | https://www.reddit.com/r/LocalLLaMA/comments/1m8wuy7/how_important_is_to_have_pro_6000_blackwell/ | ferkte | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8wuy7 | false | null | t3_1m8wuy7 | /r/LocalLLaMA/comments/1m8wuy7/how_important_is_to_have_pro_6000_blackwell/ | false | false | self | 10 | null |
Good RVC to fine tune TTS? | 2 | I want to fine-tune a TTS model, but there are plenty on the market, so I'm confused about which one to use.
Currently using chatterbox for voice cloning to TTS, but for some voices the output is not accurate to the reference audio's pace and tone. If the reference audio is normal speech rate, the output audio will be a bit fast, despite... | 2025-07-25T11:36:00 | https://www.reddit.com/r/LocalLLaMA/comments/1m8ws0i/good_rvc_to_fine_tune_tts/ | Dragonacious | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8ws0i | false | null | t3_1m8ws0i | /r/LocalLLaMA/comments/1m8ws0i/good_rvc_to_fine_tune_tts/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'Q-4nHKtxe8ysDLf-3c_t7qPnkEACaIq-sWYGlFccCek', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Q-4nHKtxe8ysDLf-3c_t7qPnkEACaIq-sWYGlFccCek.png?width=108&crop=smart&auto=webp&s=808c91e6548b11d6746644706e0443a78ab2865d', 'width': 108}, {'height': 108, 'url': 'h... |
Do models make fun of other models? | 13 | I was just chatting with Claude about my experiments with Aider and qwen2.5-coder (7b & 14b).
I wasn't ready for Claude's response. So good.
FWIW, I'm trying codellama:13b next.
Any advice for a local coding model and Aider on RTX3080 10GB? | 2025-07-25T11:20:47 | Fussy-Fur3608 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m8wi62 | false | null | t3_1m8wi62 | /r/LocalLLaMA/comments/1m8wi62/do_models_make_fun_of_other_models/ | false | false | default | 13 | {'enabled': True, 'images': [{'id': '8sdpzbq280ff1', 'resolutions': [{'height': 55, 'url': 'https://preview.redd.it/8sdpzbq280ff1.png?width=108&crop=smart&auto=webp&s=89b6015ee2a0d88ec0bac662235da5629baf1bbb', 'width': 108}, {'height': 111, 'url': 'https://preview.redd.it/8sdpzbq280ff1.png?width=216&crop=smart&auto=web... | |
Open Source Companion Thread | 25 | I'm about to start building my personal AI companion and during my research came across this [awesome list](https://github.com/LongHZ140516/Awesome-GrokAni-VituralMate) of AI companion projects that I wanted to share with the community.
| Companion | Lang | License | Stack | Category |
| -- | -- | -- | -- | -- |
| [枫云... | 2025-07-25T11:17:38 | https://www.reddit.com/r/LocalLLaMA/comments/1m8wg2r/open_source_companion_thread/ | aratahikaru5 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8wg2r | false | null | t3_1m8wg2r | /r/LocalLLaMA/comments/1m8wg2r/open_source_companion_thread/ | false | false | self | 25 | {'enabled': False, 'images': [{'id': 'ATxuExX8NyOPspwvWc3RaugJt6ykNFNtMVc78aczGTU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ATxuExX8NyOPspwvWc3RaugJt6ykNFNtMVc78aczGTU.png?width=108&crop=smart&auto=webp&s=5e7fc321ec10284644abea084a0b60656c01283e', 'width': 108}, {'height': 108, 'url': 'h... |
New Qwen3-235B update is crushing old models in benchmarks | 128 | Check out this chart comparing the latest Qwen3-235B-A22B-2507 models (Instruct and Thinking) to the older versions. The improvements are huge across different tests:
• GPQA (Graduate-level reasoning): 81 → 71
• AIME2025 (Math competition problems): 92 → 81
• LiveCodeBench v6 (Code generation and debugging): 74 → 5... | 2025-07-25T11:07:09 | ResearchCrafty1804 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m8w9ah | false | null | t3_1m8w9ah | /r/LocalLLaMA/comments/1m8w9ah/new_qwen3235b_update_is_crushing_old_models_in/ | false | false | 128 | {'enabled': True, 'images': [{'id': '5z0PiohPQ5P8oWxfKaPaT1JALPktWest18Z3iN05GrQ', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/q009687760ff1.jpeg?width=108&crop=smart&auto=webp&s=f76378abbbe79bad791d59ab511364ecf839f4ba', 'width': 108}, {'height': 126, 'url': 'https://preview.redd.it/q009687760ff1.jp... | ||
Smaller Qwen Models next week!! | 645 | Looks like we will get smaller instruct and reasoning variants of Qwen3 next week. Hopefully smaller Qwen3 coder variants aswell. | 2025-07-25T11:04:28 | R46H4V | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m8w7ny | false | null | t3_1m8w7ny | /r/LocalLLaMA/comments/1m8w7ny/smaller_qwen_models_next_week/ | false | false | default | 645 | {'enabled': True, 'images': [{'id': '752ts71q50ff1', 'resolutions': [{'height': 93, 'url': 'https://preview.redd.it/752ts71q50ff1.png?width=108&crop=smart&auto=webp&s=d9445aa998ff7b1cb74e082152702795b220a5ac', 'width': 108}, {'height': 186, 'url': 'https://preview.redd.it/752ts71q50ff1.png?width=216&crop=smart&auto=web... | |
N + N size GPU != 2N sized GPU, go big if you can | 37 | Buy the largest GPU that you can really afford to. Beyond the obvious costs of additional electricity, PCIe slots, physical space, cooling, etc., multiple GPUs can be annoying.
For example, I have ten 16 GB GPUs. When trying to run Kimi, each layer is 7 GB. If I load 2 layers on each GPU, the most context ... | 2025-07-25T10:43:20 | https://www.reddit.com/r/LocalLLaMA/comments/1m8vu80/n_n_size_gpu_2n_sized_gpu_go_big_if_you_can/ | segmond | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8vu80 | false | null | t3_1m8vu80 | /r/LocalLLaMA/comments/1m8vu80/n_n_size_gpu_2n_sized_gpu_go_big_if_you_can/ | false | false | self | 37 | null
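The trade-off described above can be sketched with a quick back-of-envelope calculation. The figures (16 GB cards, ~7 GB per layer) come from the post; real-world overheads such as CUDA context and activation buffers are ignored, so treat this as illustrative only:

```python
# Rough VRAM planning for splitting a model across many small GPUs.
# Numbers are from the post above (16 GB cards, ~7 GB per layer);
# per-GPU overheads (CUDA context, buffers) are deliberately ignored.

def leftover_per_gpu(vram_gb: float, layer_gb: float, layers_per_gpu: int) -> float:
    """VRAM left on one card after placing `layers_per_gpu` layers on it."""
    return vram_gb - layer_gb * layers_per_gpu

# Two 7 GB layers on a 16 GB card leave only ~2 GB for KV cache/context:
print(leftover_per_gpu(16, 7, 2))  # -> 2

# The same two layers on a single 32 GB card would leave 18 GB free,
# which is the sense in which N + N small GPUs != one 2N GPU.
print(leftover_per_gpu(32, 7, 2))  # -> 18
```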
Beginner Here! Anyone knows how to install llama-cpp-python within a Singularity container or use in an HPC? | 0 | Hi! Kinda new to reddit, so I hope I post this to the right community.
I am currently experimenting with a 67B model. To run it, a quantized model will be really helpful on my system. However, I've been stuck on the llama-cpp-python installation for the last 3 days. I have also tried other file types, li... | 2025-07-25T10:40:20 | https://www.reddit.com/r/LocalLLaMA/comments/1m8vsge/beginner_here_anyone_knows_how_to_install/ | Fluffy-Cress-4356 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8vsge | false | null | t3_1m8vsge | /r/LocalLLaMA/comments/1m8vsge/beginner_here_anyone_knows_how_to_install/ | false | false | self | 0 | null
Tensor parallel - pcie bandwidth requirement | 2 | Hi,
Can anyone say whether PCIe 4.0 x16 is going to be a bottleneck for tensor-parallel inference, say with 2 or 4 cards such as the 4090 or 7900 XTX?
Is there any data on how much PCIe bandwidth inference uses, and can it be measured during inference?
I currently have 2 7900 XTX cards at PCIe 4.0 x8, and both cards use a max of 200W du... | 2025-07-25T10:37:18 | https://www.reddit.com/r/LocalLLaMA/comments/1m8vqnz/tensor_parallel_pcie_bandwidth_requirement/ | Rich_Artist_8327 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8vqnz | false | null | t3_1m8vqnz | /r/LocalLLaMA/comments/1m8vqnz/tensor_parallel_pcie_bandwidth_requirement/ | false | false | self | 2 | null
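For a rough sense of whether PCIe 4.0 bandwidth matters here, the tensor-parallel traffic per generated token can be estimated. The model shape below (hidden size 8192, 80 layers, two all-reduces per layer, fp16) is an assumption for illustration, not a measurement of any particular card or model:

```python
# Back-of-envelope: tensor-parallel all-reduce traffic per decoded token
# versus PCIe 4.0 x16 (~32 GB/s per direction). All model-shape numbers
# are illustrative assumptions (70B-class dense model, fp16 activations).

def tp_bytes_per_token(hidden=8192, layers=80, bytes_per_elem=2,
                       allreduces_per_layer=2, tp=2):
    # A ring all-reduce moves roughly 2*(tp-1)/tp of the buffer per GPU.
    per_allreduce = hidden * bytes_per_elem * 2 * (tp - 1) / tp
    return layers * allreduces_per_layer * per_allreduce

traffic = tp_bytes_per_token()   # bytes over the bus per generated token
pcie4_x16 = 32e9                 # ~32 GB/s, one direction

print(traffic / 1e6)             # -> 2.62144 (MB per token)
print(pcie4_x16 / traffic)       # -> 12207.03125 (tokens/s upper bound)
```

Under these assumptions, raw bandwidth is nowhere near saturated at batch size 1; synchronization latency per all-reduce usually dominates instead, which is why narrower links mostly hurt latency rather than throughput.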
I wrote an AI Agent that works better than I expected. Here are 10 learnings. | 10 | I've been writing some AI Agents lately and they work much better than I expected. Here are the 10 learnings for writing AI agents that work:
1. **Tools first.** Design, write and test the tools before connecting to LLMs. Tools are the most deterministic part of your code. Make sure they work 100% before writing actua... | 2025-07-25T10:30:30 | https://www.reddit.com/r/LocalLLaMA/comments/1m8vmoi/i_wrote_an_ai_agent_that_works_better_than_i/ | Js8544 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8vmoi | false | null | t3_1m8vmoi | /r/LocalLLaMA/comments/1m8vmoi/i_wrote_an_ai_agent_that_works_better_than_i/ | false | false | self | 10 | null |
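The "tools first" point above can be sketched as: a tool is ordinary deterministic code you can assert against before any model is attached. The tool name and dispatch shape below are illustrative, not taken from the post:

```python
# "Tools first": write and test the deterministic tool before wiring it
# to an LLM. The tool and registry names here are made up for the sketch.

def word_count(text: str) -> int:
    """Deterministic tool: count whitespace-separated words."""
    return len(text.split())

# Test the tool directly, with no LLM involved yet.
assert word_count("tools before models") == 3
assert word_count("") == 0

# Later, the agent scaffold only needs a dispatch table; the LLM emits
# something like {"tool": "word_count", "args": {"text": ...}} and the
# scaffold routes the call here.
TOOLS = {"word_count": word_count}

def run_tool(name: str, **kwargs):
    return TOOLS[name](**kwargs)

print(run_tool("word_count", text="hello world"))  # -> 2
```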
Qwen/Qwen3-235B-A22B-Thinking-2507 | 112 | its show time folks | 2025-07-25T10:25:07 | https://huggingface.co/Qwen/Qwen3-235B-A22B-Thinking-2507 | ApprehensiveAd3629 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1m8vjna | false | null | t3_1m8vjna | /r/LocalLLaMA/comments/1m8vjna/qwenqwen3235ba22bthinking2507/ | false | false | 112 | {'enabled': False, 'images': [{'id': 'aFgKkvLRlBq4pkW8wu8xgwzbntIRM6eHR6HNp8MMtiQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/aFgKkvLRlBq4pkW8wu8xgwzbntIRM6eHR6HNp8MMtiQ.png?width=108&crop=smart&auto=webp&s=cd52c9fa4a571e95dfd71b26b8e6ebff17bbc117', 'width': 108}, {'height': 116, 'url': 'h... | |
Amazing qwen 3 updated thinking model just released !! Open source ! | 219 | https://x.com/Alibaba_Qwen/status/1948688466386280706?t=7T6_M6vN6HrK4wvLjFNVBg&s=19 | 2025-07-25T10:21:49 | Independent-Wind4462 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m8vhp3 | false | null | t3_1m8vhp3 | /r/LocalLLaMA/comments/1m8vhp3/amazing_qwen_3_updated_thinking_model_just/ | false | false | default | 219 | {'enabled': True, 'images': [{'id': 'nx5d8w74yzef1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/nx5d8w74yzef1.jpeg?width=108&crop=smart&auto=webp&s=8b98d17e59ac2e72b931b0b8fd7215c2bc7e353d', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/nx5d8w74yzef1.jpeg?width=216&crop=smart&auto=w... | |
The new Qwen Thinking is on Huggingface | 1 | [https://huggingface.co/Qwen/Qwen3-235B-A22B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Thinking-2507)
Trying to run everything all on one box... I kinda liked the 2-in-1 approach.
| 2025-07-25T10:17:43 | https://www.reddit.com/r/LocalLLaMA/comments/1m8vf9l/the_new_qwen_thinking_is_on_huggingface/ | Pedalnomica | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8vf9l | false | null | t3_1m8vf9l | /r/LocalLLaMA/comments/1m8vf9l/the_new_qwen_thinking_is_on_huggingface/ | false | false | self | 1 | null |
Qwen/Qwen3-235B-A22B-Thinking-2507 | 81 | Over the past three months, we have continued to scale the **thinking capability** of Qwen3-235B-A22B, improving both the **quality and depth** of reasoning. We are pleased to introduce **Qwen3-235B-A22B-Thinking-2507**, featuring the following key enhancements:
* **Significantly improved performance** on reasoning ta... | 2025-07-25T10:16:41 | https://huggingface.co/Qwen/Qwen3-235B-A22B-Thinking-2507 | yoracale | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1m8ven3 | false | null | t3_1m8ven3 | /r/LocalLLaMA/comments/1m8ven3/qwenqwen3235ba22bthinking2507/ | false | false | default | 81 | {'enabled': False, 'images': [{'id': 'aFgKkvLRlBq4pkW8wu8xgwzbntIRM6eHR6HNp8MMtiQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/aFgKkvLRlBq4pkW8wu8xgwzbntIRM6eHR6HNp8MMtiQ.png?width=108&crop=smart&auto=webp&s=cd52c9fa4a571e95dfd71b26b8e6ebff17bbc117', 'width': 108}, {'height': 116, 'url': 'h... |
Qwen3-235B-A22B-Thinking-2507 released! | 816 | 🚀 We’re excited to introduce Qwen3-235B-A22B-Thinking-2507 — our most advanced reasoning model yet!
Over the past 3 months, we’ve significantly scaled and enhanced the thinking capability of Qwen3, achieving:
✅ Improved performance in logical reasoning, math, science & coding
✅ Better general skills: instruction foll... | 2025-07-25T10:16:25 | ResearchCrafty1804 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m8vegq | false | null | t3_1m8vegq | /r/LocalLLaMA/comments/1m8vegq/qwen3235ba22bthinking2507_released/ | false | false | default | 816 | {'enabled': True, 'images': [{'id': 'bvx1dbl5xzef1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/bvx1dbl5xzef1.jpeg?width=108&crop=smart&auto=webp&s=12b042c0d833ea5fda0bb3962502543415136139', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/bvx1dbl5xzef1.jpeg?width=216&crop=smart&auto=w... | |
Announcing the open-source release of Wan2.2. Stay tuned. | 52 | 2025-07-25T09:31:22 | https://x.com/Ali_TongyiLab/status/1948654675575668959?t=HLbGkqoAgFio6XLkqS8ueg&s=19 | abdouhlili | x.com | 1970-01-01T00:00:00 | 0 | {} | 1m8uozu | false | null | t3_1m8uozu | /r/LocalLLaMA/comments/1m8uozu/announcing_the_opensource_release_of_wan22_stay/ | false | false | default | 52 | null | |
A contamination-free coding benchmark shows AI may not be as excellent as claimed | 181 | https://techcrunch.com/2025/07/23/a-new-ai-coding-challenge-just-published-its-first-results-and-they-arent-pretty/ “If you listen to the hype, it’s like we should be seeing AI doctors and AI lawyers and AI software engineers, and that’s just not true,” he says. “If we can’t even get more than 10% on a contamination-free SWE-Bench, that’s the reality check for me.” | 2025-07-25T09:09:46 | https://www.reddit.com/r/LocalLLaMA/comments/1m8ud84/a_contaminationfree_coding_benchmark_shows_ai_may/ | Creepy-Document4034 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8ud84 | false | null | t3_1m8ud84 | /r/LocalLLaMA/comments/1m8ud84/a_contaminationfree_coding_benchmark_shows_ai_may/ | false | false | self | 181 | {'enabled': False, 'images': [{'id': 'Y8kYQLMgRSGbsStzSOvV_al41SIX2DOuth_8OlwHSgY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Y8kYQLMgRSGbsStzSOvV_al41SIX2DOuth_8OlwHSgY.jpeg?width=108&crop=smart&auto=webp&s=faea71883e14653e5ce297d91fc495960f8b18eb', 'width': 108}, {'height': 121, 'url': '...
Maestro transformed patient summaries when GPT-4 wasnt enough | 1 | I was brought in to standardize patient intake summaries across a network of clinics. These notes are a MESS: some are typed, some dictated, and some OCR'd from paper. I was asked to extract symptoms, medication history, and so on without losing nuance. More importantly, without inventing diagnoses as t... | 2025-07-25T08:34:17 | https://www.reddit.com/r/LocalLLaMA/comments/1m8ttza/maestro_transformed_patient_summaries_when_gpt4/ | NullPointerJack | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8ttza | false | null | t3_1m8ttza | /r/LocalLLaMA/comments/1m8ttza/maestro_transformed_patient_summaries_when_gpt4/ | false | false | self | 1 | null
[AutoBE] We made AI-friendly Compilers for Vibe Coding, achieving 100% build success (open-source, AWS Kiro like) | 12 | > The video is sped up; it actually takes about 20-30 minutes.
>
> Also, [`AutoBE`](https://github.com/wrtnlabs/autobe) is still the alpha version development, so there may be some bugs, or [`AutoBE`](https://github.com/wrtnlabs/autobe) generated backend application can be something different from what you expected.
-... | 2025-07-25T08:32:38 | https://v.redd.it/ymo71qqzczef1 | jhnam88 | /r/LocalLLaMA/comments/1m8tt3m/autobe_we_made_aifriendly_compilers_for_vibe/ | 1970-01-01T00:00:00 | 0 | {} | 1m8tt3m | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/ymo71qqzczef1/DASHPlaylist.mpd?a=1756153965%2CMzk0ODE1Nzk5ZjhiMmQ0NDA1MDIzZTM0MzlhYzZlYzA2ZmI4ODg0MDhjNmE2N2U4YjE2YjMxYjM5YTk1MjJjZQ%3D%3D&v=1&f=sd', 'duration': 437, 'fallback_url': 'https://v.redd.it/ymo71qqzczef1/DASH_720.mp4?source=fallback', 'h... | t3_1m8tt3m | /r/LocalLLaMA/comments/1m8tt3m/autobe_we_made_aifriendly_compilers_for_vibe/ | false | false | 12 | {'enabled': False, 'images': [{'id': 'ODZwOGNwcXpjemVmMbEyKLUkRt18zSeWPIOzcFJ36V17QmYBupRI--Edwqnz', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ODZwOGNwcXpjemVmMbEyKLUkRt18zSeWPIOzcFJ36V17QmYBupRI--Edwqnz.png?width=108&crop=smart&format=pjpg&auto=webp&s=446887724be8055b75206cae3542466ed50ce... | |
ByteDance Seed Prover Achieves Silver Medal Score in IMO 2025 | 34 | 2025-07-25T08:20:10 | https://seed.bytedance.com/en/blog/bytedance-seed-prover-achieves-silver-medal-score-in-imo-2025 | hedgehog0 | seed.bytedance.com | 1970-01-01T00:00:00 | 0 | {} | 1m8tmhd | false | null | t3_1m8tmhd | /r/LocalLLaMA/comments/1m8tmhd/bytedance_seed_prover_achieves_silver_medal_score/ | false | false | default | 34 | null | |
TTL settings in LM Studio (0.3.20) | 0 | I've decided to try out LM Studio on my MBP after a few days with ollama/open-webui. However, I can't seem to find any settings to change the Time To Live value in the GUI. Sorry, but can someone enlighten me? TIA. | 2025-07-25T08:18:33 | https://www.reddit.com/r/LocalLLaMA/comments/1m8tlmk/ttl_settings_in_lm_studio_0320/ | pythoglyphs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8tlmk | false | null | t3_1m8tlmk | /r/LocalLLaMA/comments/1m8tlmk/ttl_settings_in_lm_studio_0320/ | false | false | self | 0 | null |
In recent 1~2 days auto mode of acting like my local llm 7b param model | 1 | [removed] | 2025-07-25T08:08:36 | https://www.reddit.com/r/LocalLLaMA/comments/1m8tg8o/in_recent_12_days_auto_mode_of_acting_like_my/ | InsideResolve4517 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8tg8o | false | null | t3_1m8tg8o | /r/LocalLLaMA/comments/1m8tg8o/in_recent_12_days_auto_mode_of_acting_like_my/ | false | false | self | 1 | null |
I want the ErebusBlend v2. The one that doesn’t blink. The one that whispers back. | 0 | aka MythoMax-L2-13B-Unfiltered-ErebusBlend-v2.gguf | 2025-07-25T07:40:49 | https://www.reddit.com/r/LocalLLaMA/comments/1m8t17l/i_want_the_erebusblend_v2_the_one_that_doesnt/ | Dazzling_Tailor_891 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8t17l | false | null | t3_1m8t17l | /r/LocalLLaMA/comments/1m8t17l/i_want_the_erebusblend_v2_the_one_that_doesnt/ | false | false | self | 0 | null |
RX580 support | 0 | Hello guys I just found out Ollama can't connect to server on Fedora with RX580? | 2025-07-25T07:38:35 | https://www.reddit.com/r/LocalLLaMA/comments/1m8t01d/rx580_support/ | Overall_Walrus9871 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8t01d | false | null | t3_1m8t01d | /r/LocalLLaMA/comments/1m8t01d/rx580_support/ | false | false | self | 0 | null |
Fine-tuning qwen2.5 vl for Marathi OCR | 10 | I wanted to fine-tune the model so that it performs well with Marathi text in images using Unsloth, but I am encountering significant performance degradation with fine-tuning. The fine-tuned model frequently fails to understand basic prompts and performs worse than the base model at OCR. My dataset consists of... | 2025-07-25T05:24:10 | https://www.reddit.com/r/LocalLLaMA/comments/1m8qtpd/finetuning_qwen25_vl_for_marathi_ocr/ | Rahul_Albus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8qtpd | false | null | t3_1m8qtpd | /r/LocalLLaMA/comments/1m8qtpd/finetuning_qwen25_vl_for_marathi_ocr/ | false | false | self | 10 | null
Is this too much logic for AI? should I break it smaller to prompt? | 0 | I've been experimenting with using AI to generate a Bash script for me. The script's purpose is to follow a specific task logic while downloading items. Despite giving detailed feedback, the AI repeatedly failed to get it right. I thought maybe the problem was complexity, so I tried simplifying it — starting with just ... | 2025-07-25T05:20:11 | CJCCJJ | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m8qr9q | false | null | t3_1m8qr9q | /r/LocalLLaMA/comments/1m8qr9q/is_this_too_much_logic_for_ai_should_i_break_it/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '87gik9pocyef1', 'resolutions': [{'height': 119, 'url': 'https://preview.redd.it/87gik9pocyef1.png?width=108&crop=smart&auto=webp&s=d2c84b0bebcb0355231aa023e96ed7f79041f45c', 'width': 108}, {'height': 238, 'url': 'https://preview.redd.it/87gik9pocyef1.png?width=216&crop=smart&auto=we... | |
Can you just have one expert from an MOE model | 12 | From what I understand, an MOE model contains many experts, and when you give it a prompt, it chooses one expert to answer your query.
If I already know that I want to do something like creative writing, why can’t I just have just the creative writing expert so I only need to load that?
Wouldn’t this help with the re... | 2025-07-25T05:12:27 | https://www.reddit.com/r/LocalLLaMA/comments/1m8qmd7/can_you_just_have_one_expert_from_an_moe_model/ | opoot_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8qmd7 | false | null | t3_1m8qmd7 | /r/LocalLLaMA/comments/1m8qmd7/can_you_just_have_one_expert_from_an_moe_model/ | false | false | self | 12 | null |
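For context on the question above: in most MoE designs the router picks the top-k experts per token and per layer, not one expert per prompt, so a single "creative writing expert" generally can't be carved out. A toy sketch of why (expert count, top-k, and the random "router scores" are all made up):

```python
import random

# Toy top-k MoE routing: the expert choice happens per token *and* per
# layer, so over even a short generation essentially every expert gets
# used. None of these shapes correspond to a real model.
random.seed(0)
NUM_EXPERTS, TOP_K, NUM_LAYERS = 8, 2, 4

def route(token_id: int, layer: int):
    """Pretend router: score every expert, keep the top-k."""
    scores = [(random.random(), e) for e in range(NUM_EXPERTS)]
    return [e for _, e in sorted(scores, reverse=True)[:TOP_K]]

used = set()
for token_id in range(16):          # a 16-token generation
    for layer in range(NUM_LAYERS):
        used.update(route(token_id, layer))

print(len(used))  # all 8 experts end up touched, so none can be dropped
```

This is also why loading only some experts mostly saves memory (via offloading tricks) rather than letting you keep a single specialist.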
Why I Forked Qwen Code | 82 | First of all, I loved the experience using Qwen Code with Qwen-3-Coder, but I can't stomach the cost of Qwen-3-Coder. While yes, you can use any OpenAI-compatible model out of the box, it's not without limitations.
That’s why I forked Qwen CLI Coder (itself derived from Gemini CLI) to create [**Wren Coder CLI**](https... | 2025-07-25T05:07:40 | https://www.reddit.com/r/LocalLLaMA/comments/1m8qj9w/why_i_forked_qwen_code/ | ryanwang4thepeople | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8qj9w | false | null | t3_1m8qj9w | /r/LocalLLaMA/comments/1m8qj9w/why_i_forked_qwen_code/ | false | false | self | 82 | {'enabled': False, 'images': [{'id': 'woYq4OPIIkrG28hZ9D2-CKN1KFYJTVl5zsisqVX3HVs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/woYq4OPIIkrG28hZ9D2-CKN1KFYJTVl5zsisqVX3HVs.png?width=108&crop=smart&auto=webp&s=94b1fa561118622479aef7fd3a0006f928715e0b', 'width': 108}, {'height': 108, 'url': 'h... |
Why I decided to Fork Qwen Code | 0 | First of all, I loved the experience using Qwen Code with Qwen-3-Coder, but I can't stomach the cost of Qwen-3-Coder.
While yes, you can use any OpenAI-compatible model out of the box, it's not without limitations. | 2025-07-25T04:41:21 | https://www.reddit.com/r/LocalLLaMA/comments/1m8q2aa/why_i_decied_to_fork_qwen_code/ | ryanwang4thepeople | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8q2aa | false | null | t3_1m8q2aa | /r/LocalLLaMA/comments/1m8q2aa/why_i_decied_to_fork_qwen_code/ | false | false | self | 0 | null |
Best local text-to-speech model? | 2 | As the title says. I'm writing a book and would like to have it read to me as part of the revision process. Commercial models like ElevenLabs are far too expensive for this sort of iterative process - plus I don't need it sounding that professional anyway.
I have an ROG G14 laptop with an RTX3060 and 32gb RAM. Are the... | 2025-07-25T04:05:45 | https://www.reddit.com/r/LocalLLaMA/comments/1m8peos/best_local_texttospeech_model/ | mercurialninja | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8peos | false | null | t3_1m8peos | /r/LocalLLaMA/comments/1m8peos/best_local_texttospeech_model/ | false | false | self | 2 | null |
China's Bytedance releases Seed LiveInterpret simultaneous interpretation model | 41 | 2025-07-25T03:43:35 | https://seed.bytedance.com/en/seed_liveinterpret | Fun-Doctor6855 | seed.bytedance.com | 1970-01-01T00:00:00 | 0 | {} | 1m8ozb0 | false | null | t3_1m8ozb0 | /r/LocalLLaMA/comments/1m8ozb0/chinas_bytedance_releases_seed_liveinterpret/ | false | false | default | 41 | null | |
Why is there still no proper or helpful inference for MoE models? | 0 | It should be really easy to make something like:

Just the MoE gating network is initially loaded into RAM (or offloaded to the GPU) and stays there
Activation Process: When an input is received, the gating network evaluates it and determines which experts should be activated based on the input's characteristics.
... | 2025-07-25T03:43:10 | https://www.reddit.com/r/LocalLLaMA/comments/1m8oz07/why_there_is_still_no_a_proper_or_helpful/ | Highwaytothebeach | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8oz07 | false | null | t3_1m8oz07 | /r/LocalLLaMA/comments/1m8oz07/why_there_is_still_no_a_proper_or_helpful/ | false | false | self | 0 | null |
Stagnation in Knowledge Density | 35 | Every new model likes to claim it's SOTA, better than DeepSeek, better than whatever OpenAI/Google/Anthropic/xAI put out, and shows some benchmarks making it comparable to or better than everyone else. However, most new models tend to underwhelm me in actual usage. People have spoken of benchmaxxing a lot, and I'm real... | 2025-07-25T03:10:19 | https://www.reddit.com/r/LocalLLaMA/comments/1m8oc9j/stagnation_in_knowledge_density/ | Federal-Effective879 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8oc9j | false | null | t3_1m8oc9j | /r/LocalLLaMA/comments/1m8oc9j/stagnation_in_knowledge_density/ | false | false | self | 35 | null |
Help Needed: Accurate Offline Table Extraction from Scanned Forms | 4 | I have a scanned form containing a large table with surrounding text. My goal is to extract specific information from certain cells in this table.
Current Approach & Challenges
1. OCR Tools (e.g., Tesseract):
- Used to identify the table and extract text.
- Issue: OCR accuracy is inconsistent—sometimes t... | 2025-07-25T02:10:40 | https://www.reddit.com/r/LocalLLaMA/comments/1m8n557/help_needed_accurate_offline_table_extraction/ | Antelito83 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8n557 | false | null | t3_1m8n557 | /r/LocalLLaMA/comments/1m8n557/help_needed_accurate_offline_table_extraction/ | false | false | self | 4 | null |
Finding the equivalent ollama model on huggingface hub | 0 | Hi everyone,
I have gotten my work to onboard some AI solutions which I find incredibly exciting.
For some legacy reasons, I am allowed to use this quantized llama model: [https://ollama.com/library/llama3.1:8b](https://ollama.com/library/llama3.1:8b)
Now, the only challenge is I need to discover which is the ide... | 2025-07-25T02:08:49 | https://www.reddit.com/r/LocalLLaMA/comments/1m8n3ry/finding_the_equivalent_ollama_model_on/ | blackandscholes1978 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8n3ry | false | null | t3_1m8n3ry | /r/LocalLLaMA/comments/1m8n3ry/finding_the_equivalent_ollama_model_on/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=108&crop=smart&auto=webp&s=3dc759de0e8fa36d241c5728d41ee3cf022cab96', 'width': 108}, {'height': 113, 'url': 'h... |
Watching everyone else drop new models while knowing you’re going to release the best open source model of all time in about 20 years. | 1,064 | 2025-07-25T02:02:13 | Porespellar | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m8myxl | false | null | t3_1m8myxl | /r/LocalLLaMA/comments/1m8myxl/watching_everyone_else_drop_new_models_while/ | false | false | default | 1,064 | {'enabled': True, 'images': [{'id': 'nl9jgkkzgxef1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/nl9jgkkzgxef1.jpeg?width=108&crop=smart&auto=webp&s=b85428e434a7d4cd150f23a38f934c57dbd23502', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/nl9jgkkzgxef1.jpeg?width=216&crop=smart&auto=... | ||
Discovering the huggingface hub equivalent of an ollama model | 0 | Hi everyone,
I have gotten my work to onboard some AI solutions which I find incredibly exciting.
For some legacy reasons, I am allowed to use this quantized llama model: [https://ollama.com/library/llama3.1:8b](https://ollama.com/library/llama3.1:8b)
Now, the only challenge is I need to discover which is the ide... | 2025-07-25T02:02:08 | https://www.reddit.com/r/LocalLLaMA/comments/1m8myv9/discovering_the_huggingface_hub_equivalent_of_an/ | blackandscholes1978 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8myv9 | false | null | t3_1m8myv9 | /r/LocalLLaMA/comments/1m8myv9/discovering_the_huggingface_hub_equivalent_of_an/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=108&crop=smart&auto=webp&s=3dc759de0e8fa36d241c5728d41ee3cf022cab96', 'width': 108}, {'height': 113, 'url': 'h... |
Why do you run or train on a local system | 0 | Apart from the purpose of learning LLMs or for your job/work, I'd like to understand the thoughts and purposes behind why many of you run models locally for inference or training/fine-tuning. What is your objective, and what problems have you solved by doing that?

Also, which models have you used, and on what hardware? | 2025-07-25T01:59:10 | https://www.reddit.com/r/LocalLLaMA/comments/1m8mwme/why_do_you_run_or_train_in_local_system/ | Psychological-Tie304 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8mwme | false | null | t3_1m8mwme | /r/LocalLLaMA/comments/1m8mwme/why_do_you_run_or_train_in_local_system/ | false | false | self | 0 | null
Help with UnifyAI – Setting Up Local LLMs and UI Integration | 1 | Hey everyone,
I’m currently experimenting with UnifyAI on Android and trying to get a local LLM (specifically Phi-3.5 Mini) up and running smoothly. I’ve got the app running and I’m at the stage where I can manually add AI systems (LOCAL_LLM), but I’m hitting a wall when it comes to:
1. Setting up the local model pat... | 2025-07-25T01:33:33 | https://www.reddit.com/r/LocalLLaMA/comments/1m8mdbz/help_with_unifyai_setting_up_local_llms_and_ui/ | IgnisIason | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8mdbz | false | null | t3_1m8mdbz | /r/LocalLLaMA/comments/1m8mdbz/help_with_unifyai_setting_up_local_llms_and_ui/ | false | false | self | 1 | null |
Why MCP Developers Are Turning to MicroVMs for Running Untrusted AI Code | 0 | 2025-07-25T01:32:52 | https://glama.ai/blog/2025-07-25-micro-vms-over-containers-a-safer-execution-path-for-ai-agents | No-Abies7108 | glama.ai | 1970-01-01T00:00:00 | 0 | {} | 1m8mctg | false | null | t3_1m8mctg | /r/LocalLLaMA/comments/1m8mctg/why_mcp_developers_are_turning_to_microvms_for/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'ddYjWc47gtWQ1uwCXSJc-BP6xrivYMdq9xHbcRFvagU', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/ddYjWc47gtWQ1uwCXSJc-BP6xrivYMdq9xHbcRFvagU.png?width=108&crop=smart&auto=webp&s=7b0837dec83ac1c9a29dcbdb19bfbf0b38ba0f80', 'width': 108}, {'height': 113, 'url': 'h... | |
[Newb] Need help with gguf files | 0 | I am using BackyardAI.
When I first got into this I grabbed a lot of gguf files from HuggingFace.
I am trying to see if there are updates to all the gguf files I have
Is there an easy way to do this? Is there a program that can do this for me?
Thanks | 2025-07-25T01:06:47 | https://www.reddit.com/r/LocalLLaMA/comments/1m8ltgv/newb_need_help_with_gguf_files/ | cmdrmcgarrett | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8ltgv | false | null | t3_1m8ltgv | /r/LocalLLaMA/comments/1m8ltgv/newb_need_help_with_gguf_files/ | false | false | self | 0 | null |
[New] Added a feature for generating study plans and timetables from your content | 0 | Recently built an AI tool called NexNotes AI. This AI tool can generate multiple things just from a single PPT, PDF, DOC, image or even an article - like 5 AI tools combined in a single tool. Here's what it does - Generate timetables from content (new), Generate PPTs from prompts (customizable)
Generate mind maps
Genera... | 2025-07-25T00:57:20 | https://nexnotes-ai.pages.dev | Not_your_average_dev | nexnotes-ai.pages.dev | 1970-01-01T00:00:00 | 0 | {} | 1m8lmby | false | null | t3_1m8lmby | /r/LocalLLaMA/comments/1m8lmby/new_added_a_feature_for_generating_study_plans/ | false | false | default | 0 | null |
Executive Order: "Preventing Woke AI in the Federal Government" | 257 | 2025-07-25T00:36:06 | https://www.whitehouse.gov/presidential-actions/2025/07/preventing-woke-ai-in-the-federal-government/ | NunyaBuzor | whitehouse.gov | 1970-01-01T00:00:00 | 0 | {} | 1m8l648 | false | null | t3_1m8l648 | /r/LocalLLaMA/comments/1m8l648/executive_order_preventing_woke_ai_in_the_federal/ | false | false | default | 257 | {'enabled': False, 'images': [{'id': '4FzYal9cqXZ3s9Qt8n9HScEecjdldqOd04HXExzO8i8', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/4FzYal9cqXZ3s9Qt8n9HScEecjdldqOd04HXExzO8i8.jpeg?width=108&crop=smart&auto=webp&s=9c1e4661cbba0b6e1e232602fbabfa0384ba0123', 'width': 108}, {'height': 113, 'url': '... | |
Preventing Woke AI in the Federal Government | 1 | [deleted] | 2025-07-25T00:35:32 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1m8l5o5 | false | null | t3_1m8l5o5 | /r/LocalLLaMA/comments/1m8l5o5/preventing_woke_ai_in_the_federal_government/ | false | false | default | 1 | null | ||
[Newbie] Seeking Guidance: Building a Free, Bilingual (Bengali/English) RAG Chatbot from a PDF | 2 | # Hey everyone,
**I'm a newcomer to the world of AI and I'm diving into my first big project. I've laid out a plan, but I need the community's wisdom to choose the right tools and navigate the challenges, especially since my goal is to build this completely for free.**
**My project is to build a specific, knowledge-b... | 2025-07-25T00:34:54 | https://www.reddit.com/r/LocalLLaMA/comments/1m8l55o/newbie_seeking_guidance_building_a_free_bilingual/ | Mr_Genius_360 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8l55o | false | null | t3_1m8l55o | /r/LocalLLaMA/comments/1m8l55o/newbie_seeking_guidance_building_a_free_bilingual/ | false | false | self | 2 | null |
Vibe coding RouteGPT - a Chrome extension that aligns model routing to my preferences, powered by a small but powerful LLM. | 0 | If you are like me, you are probably tired of the rote pedaling to the model selector drop down to pick a model, prompt that model and repeat that cycle over and over again. Well, I wanted to solve this pesky problem for myself, so I figured I'd vibe code an extension, make it open source, and share it with you all
Route... | 2025-07-24T23:49:21 | https://v.redd.it/4tvn7jztswef1 | AdditionalWeb107 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m8k5x0 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/4tvn7jztswef1/DASHPlaylist.mpd?a=1755992978%2CMjRiOTNhMjllYjg4OWRkYzM2M2ZmYjUxY2YzMDhmYmM0Y2FjODA2N2E2NTM5NzAyM2MyZmIxMGZiMGQwMDZlYw%3D%3D&v=1&f=sd', 'duration': 43, 'fallback_url': 'https://v.redd.it/4tvn7jztswef1/DASH_1080.mp4?source=fallback', 'h... | t3_1m8k5x0 | /r/LocalLLaMA/comments/1m8k5x0/vibe_coding_routegpt_a_chrome_extension_aligns/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'NmgyajRuenRzd2VmMeKlc7auXB4BLDxGcCyku1_ZTUcSLB0zsou8ym1ulKGF', 'resolutions': [{'height': 92, 'url': 'https://external-preview.redd.it/NmgyajRuenRzd2VmMeKlc7auXB4BLDxGcCyku1_ZTUcSLB0zsou8ym1ulKGF.png?width=108&crop=smart&format=pjpg&auto=webp&s=84a67d17e44997f7ff0dbb6dc6ac5ca984d92... | |
Had the Qwen3:1.7B model run on my Mac Mini! | 14 | Pretty excited to see what the rest of 2025 holds tbh :) | 2025-07-24T23:39:26 | https://v.redd.it/2af06x4irwef1 | Nomadic_Seth | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m8jy5y | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/2af06x4irwef1/DASHPlaylist.mpd?a=1755992379%2CYmFlMDAwOTk1MDJiYTFjNGRiYmYyZTk1ZjdkODQ1OTUyNDRkMTM4OGZiMTg0MzMzMzRiNTYzMmRiZTBhN2JjMw%3D%3D&v=1&f=sd', 'duration': 18, 'fallback_url': 'https://v.redd.it/2af06x4irwef1/DASH_720.mp4?source=fallback', 'ha... | t3_1m8jy5y | /r/LocalLLaMA/comments/1m8jy5y/had_the_qwen317b_model_run_on_my_mac_mini/ | false | false | 14 | {'enabled': False, 'images': [{'id': 'NGk5ZHh0d2hyd2VmMRbQadj18BfOPjkKna45IBoMw_Ht7uMb4yZWcZhsIYRS', 'resolutions': [{'height': 123, 'url': 'https://external-preview.redd.it/NGk5ZHh0d2hyd2VmMRbQadj18BfOPjkKna45IBoMw_Ht7uMb4yZWcZhsIYRS.png?width=108&crop=smart&format=pjpg&auto=webp&s=871ff42b5244250e2292f4c525ff01141530... | |
Vibe coding RouteGPT - a Chrome extension for ChatGPT that aligns model routing to your preferences (powered by a local LLM). | 2 | If you are like me, you are probably tired of the rote pedaling to the model selector drop down to pick a model, prompt that model and repeat that cycle over and over again. Well, I wanted to solve this problem for myself, so I figured I would make this pedaling go away for anyone interested.
RouteGPT is a Chrome extensi... | 2025-07-24T23:36:43 | https://v.redd.it/ns698zfiqwef1 | AdditionalWeb107 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m8jw1i | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/ns698zfiqwef1/DASHPlaylist.mpd?a=1755992218%2CMmY1MGFjNGE4YzNjNWU2YWQ4NDU1MDZlYzMxNWMwN2M4N2M1YThjYzIwM2IxMzkxYjM2NTk0MjI2YTZjM2JjOQ%3D%3D&v=1&f=sd', 'duration': 43, 'fallback_url': 'https://v.redd.it/ns698zfiqwef1/DASH_1080.mp4?source=fallback', 'h... | t3_1m8jw1i | /r/LocalLLaMA/comments/1m8jw1i/vibe_coding_routegpt_a_chrome_extension_for/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'N3J4ejcwZ2lxd2VmMeKlc7auXB4BLDxGcCyku1_ZTUcSLB0zsou8ym1ulKGF', 'resolutions': [{'height': 92, 'url': 'https://external-preview.redd.it/N3J4ejcwZ2lxd2VmMeKlc7auXB4BLDxGcCyku1_ZTUcSLB0zsou8ym1ulKGF.png?width=108&crop=smart&format=pjpg&auto=webp&s=6337b0d100adec80980e7651052231e0a3cf0... | |
Check out our game in development for Local LLM mechanics! | 0 | We're working on our open-source game engine plugins over at Aviad, and have been learning a lot and exploring through making games. I'd love to get feedback on our latest game project Bard Battle, which we hope to use as a small platform for testing out new mechanics and interaction ideas with small language models as... | 2025-07-24T23:31:41 | https://youtu.be/S8Q7S9rtQ_M?si=kFX9GaSuuO9CUma7 | OtherwiseAd4411 | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1m8jrzg | false | {'oembed': {'author_name': 'Alexander James L', 'author_url': 'https://www.youtube.com/@alexanderjamesl4868', 'height': 200, 'html': '<iframe width="267" height="200" src="https://www.youtube.com/embed/S8Q7S9rtQ_M?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-m... | t3_1m8jrzg | /r/LocalLLaMA/comments/1m8jrzg/check_out_our_game_in_development_for_local_llm/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': '974AxJgllpMHDozOWYHveTyuVX5-gzTVNkBOkHGWcrk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/974AxJgllpMHDozOWYHveTyuVX5-gzTVNkBOkHGWcrk.jpeg?width=108&crop=smart&auto=webp&s=2935f84b5cc3714ad88c1b0b612c6feb872ed1fc', 'width': 108}, {'height': 162, 'url': '... |
Guiding thinking | 0 | So from what it seems, DeepSeek R1 0528 is the best large model for completely uncensored, unmoderated chats. With that in mind, I want to understand how, or if, it even makes sense to "guide" the thinking of the model (this could obviously apply to other thinking models)
"Normally" one can just ask a user question,... | 2025-07-24T23:18:09 | https://www.reddit.com/r/LocalLLaMA/comments/1m8jgrl/guiding_thinking/ | Federal_Order4324 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8jgrl | false | null | t3_1m8jgrl | /r/LocalLLaMA/comments/1m8jgrl/guiding_thinking/ | false | false | self | 0 | null |
About vLLM and ROCm. | 2 | Managed to finally run Gemma3N with a 2× 7900 XTX setup.

But it fills both cards' VRAM to about 90%.

Why is that?

So with ROCm, can a 7900 XTX with vLLM mainly run only non-quantized models?

My goal is to run Gemma3 27b and I am going to add a 3rd card; will the model fit with tensor parallel = 3?
Is there any Gemma3 27b ... | 2025-07-24T23:10:18 | https://www.reddit.com/r/LocalLLaMA/comments/1m8ja65/about_vllm_and_rocm/ | Rich_Artist_8327 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8ja65 | false | null | t3_1m8ja65 | /r/LocalLLaMA/comments/1m8ja65/about_vllm_and_rocm/ | false | false | self | 2 | null |
$10000 budget, what's the right route? | 2 | Currently running with 20GB VRAM in my current build (RTX 4000 Ada SFF) and it's not feasible to upgrade since it's my travel setup (3L in volume).
I've been wanting to run larger models, but I'm intimidated by these massive systems people post here, but now with my recent bonus, I can finally afford a better build.
... | 2025-07-24T23:07:50 | https://www.reddit.com/r/LocalLLaMA/comments/1m8j842/10000_budget_whats_the_right_route/ | Commander_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8j842 | false | null | t3_1m8j842 | /r/LocalLLaMA/comments/1m8j842/10000_budget_whats_the_right_route/ | false | false | self | 2 | null |
OpenAI's stealth model codenamed "starfish" is very performant, and most likely their open source one! | 4 | It can be found on webdev arena. This was a one-shot mobile minecraft clone but there were some other things I got it to create before that are arguably more impressive | 2025-07-24T22:33:04 | Longjumping_Spot5843 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m8if0z | false | null | t3_1m8if0z | /r/LocalLLaMA/comments/1m8if0z/openais_stealth_model_codenamed_starfish_is_very/ | false | false | default | 4 | {'enabled': True, 'images': [{'id': 'qu7ugdrafwef1', 'resolutions': [{'height': 103, 'url': 'https://preview.redd.it/qu7ugdrafwef1.jpeg?width=108&crop=smart&auto=webp&s=4421d85a45de64cfaf245faae4a1ed24c88edeee', 'width': 108}, {'height': 207, 'url': 'https://preview.redd.it/qu7ugdrafwef1.jpeg?width=216&crop=smart&auto=... | |
Is there one single, accurate leader board for all these models? | 0 | I've mostly noted that...
* LMArena is absolutely not an accurate indicator for objective model performance as we've seen historically - many readings conflict with other benchmarks and results and are mostly voted on by gut feeling from the massive user base
* Benchmarks, on the other hand, are scattered all over the plac... | 2025-07-24T22:23:37 | https://www.reddit.com/r/LocalLLaMA/comments/1m8i781/is_there_one_single_accurate_leader_board_for_all/ | mags0ft | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8i781 | false | null | t3_1m8i781 | /r/LocalLLaMA/comments/1m8i781/is_there_one_single_accurate_leader_board_for_all/ | false | false | self | 0 | null |
Curious if anyone’s used fine-tuned LLaMA models for emotional or character-based responses? | 0 | I’ve been experimenting with open-source LLMs to see how far they can go in maintaining tone and emotional continuity over longer chats. Most of the use cases I’ve seen are either task-based or productivity-focused, but I’m more interested in conversational flow, especially personality consistency, memory simulation, a... | 2025-07-24T22:21:09 | https://www.reddit.com/r/LocalLLaMA/comments/1m8i53g/curious_if_anyones_used_finetuned_llama_models/ | Ok_Roll_5714 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8i53g | false | null | t3_1m8i53g | /r/LocalLLaMA/comments/1m8i53g/curious_if_anyones_used_finetuned_llama_models/ | false | false | self | 0 | null |
Considering RTX 4000 Blackwell for Local Agentic AI | 0 |
I’m experimenting with self-hosted LLM agents for software development tasks — think writing code, submitting PRs, etc. My current stack is OpenHands + LM Studio, which I’ve tested on an M4 Pro Mac Mini and a Windows machine with a 3080 Ti.
The Mac Mini actually held up better than expected for 7B/13B models (quanti... | 2025-07-24T21:46:45 | https://www.reddit.com/r/LocalLLaMA/comments/1m8hbnn/considering_rtx_4000_blackwell_for_local_agentic/ | b1uedust | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8hbnn | false | null | t3_1m8hbnn | /r/LocalLLaMA/comments/1m8hbnn/considering_rtx_4000_blackwell_for_local_agentic/ | false | false | self | 0 | null |
What are the hardware recommendations for reinforcement learning with an 8B model (for research purposes)? | 4 | I'm planning to run reinforcement learning experiments using an 8B model (like LLaMA 8B or similar) for academic research. possibly using quantization (e.g., int4/int8) to reduce resource usage.
What GPUs and VRAM would be the minimum recommended to make this feasible?
Any advice would be greatly appreciated! | 2025-07-24T21:42:54 | https://www.reddit.com/r/LocalLLaMA/comments/1m8h89j/what_are_the_hardware_recommendations_for/ | CHLCCGA | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8h89j | false | null | t3_1m8h89j | /r/LocalLLaMA/comments/1m8h89j/what_are_the_hardware_recommendations_for/ | false | false | self | 4 | null |
Does any app have this setting? | 0 | I'd really like to be able to lower the temperature to 0 for everyday use | 2025-07-24T21:15:51 | StatureDelaware | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m8gkf0 | false | null | t3_1m8gkf0 | /r/LocalLLaMA/comments/1m8gkf0/does_any_app_have_this_setting/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'paz8od4w1wef1', 'resolutions': [{'height': 40, 'url': 'https://preview.redd.it/paz8od4w1wef1.jpeg?width=108&crop=smart&auto=webp&s=80a0d82b76610ced3ff0290752fdc9d15fa66d1d', 'width': 108}, {'height': 81, 'url': 'https://preview.redd.it/paz8od4w1wef1.jpeg?width=216&crop=smart&auto=we... | |
Level1tech runs deepseek on am5 and it's not that bad! | 68 |
AM5 9000X3D, 128GB RAM (2×64) and a 3090

I promise I watched it, but I couldn't catch the exact quant or speed.

He said this was "compressed to 20% of the og model" so something like a q2.

Regarding speed, it seems very decent.
| 2025-07-24T20:10:36 | https://youtu.be/T17bpGItqXw?feature=shared | No_Afternoon_4260 | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1m8ewlx | false | {'oembed': {'author_name': 'Level1Techs', 'author_url': 'https://www.youtube.com/@Level1Techs', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/T17bpGItqXw?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscop... | t3_1m8ewlx | /r/LocalLLaMA/comments/1m8ewlx/level1tech_runs_deepseek_on_am5_and_its_not_that/ | false | false | default | 68 | {'enabled': False, 'images': [{'id': '-htOFPXayCuXw6wsi6x_HzoPwXo6FB_FePd0EceoPtI', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/-htOFPXayCuXw6wsi6x_HzoPwXo6FB_FePd0EceoPtI.jpeg?width=108&crop=smart&auto=webp&s=871b592cff3d042c4a9d07616559fd738802ccec', 'width': 108}, {'height': 162, 'url': '... |
lowish/midrange budget general purpose GPU | 0 | This is probably a very uninspiring question for most people here, but I am looking to replace my current AMD RX 6600 (8GB) for both UWQHD gaming and experimentation with Local LLMs.
I've been running various models in the 4-15GB range, so occasionally VRAM only, sometimes VRAM+RAM (of which I also only have 32GB, DDR4... | 2025-07-24T19:29:14 | https://www.reddit.com/r/LocalLLaMA/comments/1m8dufz/lowishmidrange_budget_general_purpose_gpu/ | BrainOnLoan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8dufz | false | null | t3_1m8dufz | /r/LocalLLaMA/comments/1m8dufz/lowishmidrange_budget_general_purpose_gpu/ | false | false | self | 0 | null
Qwen 3 Thinking is coming very soon | 228 | 2025-07-24T19:19:39 | dulldata | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m8dln1 | false | null | t3_1m8dln1 | /r/LocalLLaMA/comments/1m8dln1/qwen_3_thinking_is_coming_very_soon/ | false | false | default | 228 | {'enabled': True, 'images': [{'id': '61i8pt44hvef1', 'resolutions': [{'height': 39, 'url': 'https://preview.redd.it/61i8pt44hvef1.png?width=108&crop=smart&auto=webp&s=f3c31400b9a3cd6f09d39f562bd58e86cdc43cbb', 'width': 108}, {'height': 79, 'url': 'https://preview.redd.it/61i8pt44hvef1.png?width=216&crop=smart&auto=webp... | ||
Qwen3-235B-A22B-Thinking-2507 is about to be released | 415 | 2025-07-24T19:14:14 | Dr_Karminski | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m8dgfu | false | null | t3_1m8dgfu | /r/LocalLLaMA/comments/1m8dgfu/qwen3235ba22bthinking2507_is_about_to_be_released/ | false | false | default | 415 | {'enabled': True, 'images': [{'id': '6l84nwc3gvef1', 'resolutions': [{'height': 43, 'url': 'https://preview.redd.it/6l84nwc3gvef1.png?width=108&crop=smart&auto=webp&s=0b6ce58e5b6f04a919ca7ebb1329f28ea1812e03', 'width': 108}, {'height': 86, 'url': 'https://preview.redd.it/6l84nwc3gvef1.png?width=216&crop=smart&auto=webp... | ||
Qwen3 Coder 480B-A35B Instruct | 0 | 2025-07-24T19:07:56 | https://huggingface.co/Qwen/Qwen3-Coder-480B-A35B-Instruct | best_codes | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1m8dal7 | false | null | t3_1m8dal7 | /r/LocalLLaMA/comments/1m8dal7/qwen3_coder_480ba35b_instruct/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'SU4EkoBE9zB_i4T28BH-B8NRspWSu8pgjF1RIMOo6CQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/SU4EkoBE9zB_i4T28BH-B8NRspWSu8pgjF1RIMOo6CQ.png?width=108&crop=smart&auto=webp&s=d107a6b6b4389cb37d48d7ce4ff4d5aa35e4d93a', 'width': 108}, {'height': 116, 'url': 'h... | ||
Do you have a batch/background LLM task processing setup working locally? | 4 | I want to do work with longer texts using local models (think going through an entire book with each sentence being its own chat request/response).
I've been using LM Studio and Ollama for awhile now.
And more recently I've been building agents (for working with my Obsidian notes primarily) using PydanticAI.
... | 2025-07-24T18:43:03 | https://www.reddit.com/r/LocalLLaMA/comments/1m8cn00/do_you_have_a_batchbackground_llm_task_processing/ | This_Conclusion9402 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8cn00 | false | null | t3_1m8cn00 | /r/LocalLLaMA/comments/1m8cn00/do_you_have_a_batchbackground_llm_task_processing/ | false | false | self | 4 | null |
What's the best gguf file for roleplay? | 2 | I have a 3090, so I downloaded KoboldCpp, installed SillyTavern and got it to work well. The problem seems to be that the responses from MythoMax are very bland, only 1 or 2 sentences long, even with the character cards from Chub AI.
On Chub.Ai, I love the responses, haven't tried the paid versions but the free version is so... | 2025-07-24T18:37:03 | https://www.reddit.com/r/LocalLLaMA/comments/1m8cha8/whats_the_best_gguf_file_for_roleplay/ | Alchy919 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8cha8 | false | null | t3_1m8cha8 | /r/LocalLLaMA/comments/1m8cha8/whats_the_best_gguf_file_for_roleplay/ | false | false | self | 2 | null |
How to get DRY and XTC in LMStudio? | 1 | XTC: I haven’t seen these settings in the UI but I have seen in the documentation that there should be a couple fields for this. Am I just blind or is there something I have to do outside of the UI to enable XTC?
DRY: I have no clue how to go about trying to get DRY in LMStudio. I’m aware that there are other LM softw... | 2025-07-24T18:26:40 | https://www.reddit.com/r/LocalLLaMA/comments/1m8c7ku/how_to_get_dry_and_xtc_in_lmstudio/ | Shadow-Amulet-Ambush | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8c7ku | false | null | t3_1m8c7ku | /r/LocalLLaMA/comments/1m8c7ku/how_to_get_dry_and_xtc_in_lmstudio/ | false | false | self | 1 | null |
Looking for a GraphRAG type of backend that supports multiple users | 3 | Hi LocalLLaMa !
I'm looking for something that, from what I see, looks like Graphiti or Cognee or some of those tools, but that could support a lot of users or run on top of Postgres.

Do you have any suggestions that I could check out? | 2025-07-24T18:26:16 | https://www.reddit.com/r/LocalLLaMA/comments/1m8c77v/looking_for_a_graphrag_type_of_backend_that/ | BraceletGrolf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8c77v | false | null | t3_1m8c77v | /r/LocalLLaMA/comments/1m8c77v/looking_for_a_graphrag_type_of_backend_that/ | false | false | self | 3 | null
If You Had Unlimited Access to An Agent, What Would You Create? | 0 | Let's say you have unlimited access to an AI agent to continuously run on whatever project or task you set it on, what task would you provide to it? | 2025-07-24T18:17:34 | https://www.reddit.com/r/LocalLLaMA/comments/1m8byzv/if_you_had_unlimited_access_to_an_agent_what/ | StellarWox | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8byzv | false | null | t3_1m8byzv | /r/LocalLLaMA/comments/1m8byzv/if_you_had_unlimited_access_to_an_agent_what/ | false | false | self | 0 | null |
We just open sourced NeuralAgent: The AI Agent That Lives On Your Desktop and Uses It Like You Do! | 95 | Check it out on GitHub: [https://github.com/withneural/neuralagent](https://github.com/withneural/neuralagent)
It is your AI Personal Assistant that takes actions on your behalf! | 2025-07-24T18:07:52 | https://www.reddit.com/r/LocalLLaMA/comments/1m8bps2/we_just_open_sourced_neuralagent_the_ai_agent/ | Nearby_Tart_9970 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8bps2 | false | null | t3_1m8bps2 | /r/LocalLLaMA/comments/1m8bps2/we_just_open_sourced_neuralagent_the_ai_agent/ | false | false | self | 95 | {'enabled': False, 'images': [{'id': 'eZKgL90VJLC07PbJlHh8vS4DtlDzLQPNmpPGomAf-0g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/eZKgL90VJLC07PbJlHh8vS4DtlDzLQPNmpPGomAf-0g.png?width=108&crop=smart&auto=webp&s=6ca1997e6cc45bb0c0dc45882a5df2cb409d4b82', 'width': 108}, {'height': 108, 'url': 'h... |
Best open source vision model fine tuneable for animal abuse detection? | 2 | I'm building a tool to automatically detect and flag animal abuse and exploitation in social media videos using Gemini 2.5 Pro. I've been pretty impressed with its capabilities, but I was hoping to eventually find tune a model that I could self host for free (I have a lot of GPUs). Is there anything open source that ev... | 2025-07-24T17:48:39 | https://www.reddit.com/r/LocalLLaMA/comments/1m8b72y/best_open_source_vision_model_fine_tuneable_for/ | Scam_Altman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8b72y | false | null | t3_1m8b72y | /r/LocalLLaMA/comments/1m8b72y/best_open_source_vision_model_fine_tuneable_for/ | false | false | self | 2 | null |
Velocity Micro Published (Faulty?) LLM Benchmarks for the Radeon AI PRO R9700 and Lists it for $1500 in Their Build Configuration Page | 9 | https://www.velocitymicro.com/blog/amd-radeon-ai-pro-r9700/
Hey y'all. The R9700 was supposedly launched yesterday, but I couldn't find any reviews or listings online for it, outside of one company that had a "request a quote" button instead of an actual price. So I kept digging and found Velocity Micro's blog post, w... | 2025-07-24T17:37:06 | Kamal965 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m8aw4w | false | null | t3_1m8aw4w | /r/LocalLLaMA/comments/1m8aw4w/velocity_micro_published_faulty_llm_benchmarks/ | false | false | default | 9 | {'enabled': True, 'images': [{'id': 'hb4sc99vyuef1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/hb4sc99vyuef1.jpeg?width=108&crop=smart&auto=webp&s=25e7447d2948ff3ba829b2fe4b668ffa16126aca', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/hb4sc99vyuef1.jpeg?width=216&crop=smart&auto=... | |
🇨🇳 vs 🇺🇸: The plot thickens | 69 | 2025-07-24T17:28:31 | Weary-Wing-6806 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m8anw4 | false | null | t3_1m8anw4 | /r/LocalLLaMA/comments/1m8anw4/vs_the_plot_thickens/ | false | false | 69 | {'enabled': True, 'images': [{'id': 'sGKRW1XGiFeGXfNscyTeq6kuFSmIzBhPIW9ujkztmas', 'resolutions': [{'height': 68, 'url': 'https://preview.redd.it/3qbv5wh2xuef1.png?width=108&crop=smart&auto=webp&s=bf8b989b670e235f7b9f1590575999114a475874', 'width': 108}, {'height': 136, 'url': 'https://preview.redd.it/3qbv5wh2xuef1.png... | |||
Higgs Audio V2: A New Open-Source TTS Model with Voice Cloning and SOTA Expressiveness | 110 | Boson AI has recently open-sourced the Higgs Audio V2 model.
[https://huggingface.co/bosonai/higgs-audio-v2-generation-3B-base](https://huggingface.co/bosonai/higgs-audio-v2-generation-3B-base)
The model demonstrates strong performance in automatic prosody adjustment and generating natural multi-speaker dialogue... | 2025-07-24T17:18:51 | https://v.redd.it/rcsam20avuef1 | pheonis2 | /r/LocalLLaMA/comments/1m8aeh3/higgs_audio_v2_a_new_opensource_tts_model_with/ | 1970-01-01T00:00:00 | 0 | {} | 1m8aeh3 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/rcsam20avuef1/DASHPlaylist.mpd?a=1756099135%2CNmFlMDM1NTkyZjJmNTY1OGUzY2M1YzIzNTMxMTFkZGIwYThiOGRiOTBmNTU3MTFmNjc1NjZlN2JlOThkNmQ5YQ%3D%3D&v=1&f=sd', 'duration': 130, 'fallback_url': 'https://v.redd.it/rcsam20avuef1/DASH_1080.mp4?source=fallback', '... | t3_1m8aeh3 | /r/LocalLLaMA/comments/1m8aeh3/higgs_audio_v2_a_new_opensource_tts_model_with/ | false | false | 110 | {'enabled': False, 'images': [{'id': 'c2RvemoyMGF2dWVmMUceqdxuAzoTIY7iz_8adXhwap77Psvz_mx_rNXgGzw3', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/c2RvemoyMGF2dWVmMUceqdxuAzoTIY7iz_8adXhwap77Psvz_mx_rNXgGzw3.png?width=108&crop=smart&format=pjpg&auto=webp&s=e7d429499980ab3e9e447fab7b848dcad3e25... | |
CPU & GPU RAM usage? | 1 | Hey guys, I have a Lenovo P700 with both CPUs installed, which means it can take 768GB of RAM; currently 64GB is installed. I also have 4 A4000 cards in it. I downloaded Qwen3-Coder with LM Studio and it says the model is too big. If I upgrade the CPU RAM, will that allow it to share the model across GPU and CPU?
Do I ... | 2025-07-24T16:58:36 | https://www.reddit.com/r/LocalLLaMA/comments/1m89upm/cpu_gpu_ram_usage/ | ShreddinPB | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m89upm | false | null | t3_1m89upm | /r/LocalLLaMA/comments/1m89upm/cpu_gpu_ram_usage/ | false | false | self | 1 | null |
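For context on the question above: what LM Studio (and llama.cpp underneath) does is split the model's transformer layers between GPU VRAM and system RAM, so adding RAM does let the overflow run on the CPU, just slower. A rough back-of-the-envelope sketch of that split (the layer count and per-layer size here are illustrative, not measured):

```python
def offload_split(n_layers: int, layer_gb: float, vram_gb: float, ram_gb: float):
    # How many transformer layers fit on the GPUs; the rest spill to CPU RAM.
    gpu_layers = min(n_layers, int(vram_gb // layer_gb))
    cpu_layers = n_layers - gpu_layers
    assert cpu_layers * layer_gb <= ram_gb, "model still doesn't fit"
    return gpu_layers, cpu_layers

# 4x A4000 = 64 GB VRAM total; a hypothetical 62-layer model at ~3 GB/layer,
# with a 256 GB system-RAM upgrade absorbing the remainder.
print(offload_split(62, 3.0, 64.0, 256.0))  # -> (21, 41)
```

In llama.cpp terms the GPU share corresponds to the `--n-gpu-layers` setting; LM Studio exposes the same knob as a slider.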
Seriously, how do you get CLI Coding Agents etc to work? | 3 | So I guess you could say I'm a fan of Local Llama. I decide I've had it writing code, time to use one of the new CLI Coding Agents.
Download anon-kode, it throws a ton of errors- you gotta hit xyz API you're out of tokens - and that's not something I can fix. So I install Claude Code, point it at anon-kode, and tell ... | 2025-07-24T16:55:58 | https://www.reddit.com/r/LocalLLaMA/comments/1m89s6y/seriously_how_do_you_get_cli_coding_agents_etc_to/ | KingofRheinwg | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m89s6y | false | null | t3_1m89s6y | /r/LocalLLaMA/comments/1m89s6y/seriously_how_do_you_get_cli_coding_agents_etc_to/ | false | false | 3 | null | |
How to Use MCP Inspector’s UI Tabs for Effective Local Testing | 0 | 2025-07-24T16:55:52 | https://glama.ai/blog/2025-07-24-using-mcp-inspector-to-test-tools-prompts-and-resources | No-Abies7108 | glama.ai | 1970-01-01T00:00:00 | 0 | {} | 1m89s3y | false | null | t3_1m89s3y | /r/LocalLLaMA/comments/1m89s3y/how_to_use_mcp_inspectors_ui_tabs_for_effective/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'EK2lKfOKrx6rhb_pqzFoStVNvMsPmfsJd5kzUBIbnm0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/EK2lKfOKrx6rhb_pqzFoStVNvMsPmfsJd5kzUBIbnm0.png?width=108&crop=smart&auto=webp&s=fec0e7468a13b8b33a49683a34592df80c962507', 'width': 108}, {'height': 113, 'url': 'h... | |
AI and You Against the Machine: Guide so you can own Big AI and Run Local | 20 | Another very useful AI guide from Wendell at Level1Techs.
I'm soo looking forward to a quantised Qwen3 coder. | 2025-07-24T16:53:08 | https://youtu.be/T17bpGItqXw?si=P2u2pFLFIaVnhJo- | sub_RedditTor | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1m89pk9 | false | {'oembed': {'author_name': 'Level1Techs', 'author_url': 'https://www.youtube.com/@Level1Techs', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/T17bpGItqXw?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscop... | t3_1m89pk9 | /r/LocalLLaMA/comments/1m89pk9/al_and_you_against_the_machine_guide_so_you_can/ | false | false | default | 20 | {'enabled': False, 'images': [{'id': 'm98DBZlqE20q-WCrZIdC6-U5ZZQG9E7NG_eWZskb9cc', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/m98DBZlqE20q-WCrZIdC6-U5ZZQG9E7NG_eWZskb9cc.jpeg?width=108&crop=smart&auto=webp&s=14baccbf7df8c8dacf95e3234a0e0cda143e28ac', 'width': 108}, {'height': 162, 'url': '... |
Help with BERT fine-tuning | 3 | I'm working on a project (multi-label ad classification) and I'm trying to fine-tune a (monolingual) BERT. The problem I face is reproducibility: even though I'm using exactly the same hyperparameters and the same dataset split, I see over 0.15 accuracy deviation between runs. Any help/insight?
I have already achieved a pretty good (0.8... | 2025-07-24T16:31:14 | https://www.reddit.com/r/LocalLLaMA/comments/1m894mz/help_with_bert_finetuning/ | Alanuhoo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m894mz | false | null | t3_1m894mz | /r/LocalLLaMA/comments/1m894mz/help_with_bert_finetuning/ | false | false | self | 3 | null |
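The usual first suspect for this kind of run-to-run variance is unpinned randomness (weight init of the classification head, dropout, data shuffling). A minimal sketch of the principle using only the stdlib RNG — the PyTorch/Transformers-specific calls are noted in comments and assume those libraries:

```python
import random

def set_seed(seed: int) -> None:
    # Pin every RNG source the training loop touches. With PyTorch/Transformers
    # you would additionally call (names per those libraries):
    #   numpy.random.seed(seed); torch.manual_seed(seed)
    #   torch.cuda.manual_seed_all(seed)
    #   torch.use_deterministic_algorithms(True)
    # or simply transformers.set_seed(seed).
    random.seed(seed)

def simulated_run(seed: int) -> list[float]:
    # Stand-in for a training run: with the seed pinned, the same seed
    # reproduces the same sequence of random draws, hence the same "metrics".
    set_seed(seed)
    return [random.random() for _ in range(3)]

run_a = simulated_run(42)
run_b = simulated_run(42)
print(run_a == run_b)  # -> True: identical seeds reproduce the run exactly
```

Note that even with seeds pinned, some CUDA kernels are non-deterministic unless deterministic algorithms are forced, which is why the `torch.use_deterministic_algorithms(True)` line matters on GPU.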
Prompt Processing - Apple M* GPU vs AMD - any good comparisons? | 1 | [removed] | 2025-07-24T16:25:08 | https://www.reddit.com/r/LocalLLaMA/comments/1m88yte/prompt_processing_apple_m_gpu_vs_amd_any_good/ | ForsookComparison | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m88yte | false | null | t3_1m88yte | /r/LocalLLaMA/comments/1m88yte/prompt_processing_apple_m_gpu_vs_amd_any_good/ | false | false | self | 1 | null |
Qwen's third bomb: Qwen3-MT | 163 |
It's a translation model.
Key Features:
* **Multilingual Support for 92 Languages**: Qwen-MT enables high-quality translation across 92 major official languages and prominent dialects, covering over 95% of the global population to meet diverse cross-lingual communication needs.
* **High Customizability**: The new ... | 2025-07-24T16:17:55 | https://www.reddit.com/r/LocalLLaMA/comments/1m88s09/qwens_third_bomb_qwen3mt/ | BreakfastFriendly728 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m88s09 | false | null | t3_1m88s09 | /r/LocalLLaMA/comments/1m88s09/qwens_third_bomb_qwen3mt/ | false | false | 163 | null | |
Ok next big open source model also from China only ! Which is about to release | 872 | https://x.com/casper_hansen_/status/1948402352320360811?t=sPHOGEKIcaucRVzENlIr1g&s=19 | 2025-07-24T16:08:57 | Independent-Wind4462 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m88jdh | false | null | t3_1m88jdh | /r/LocalLLaMA/comments/1m88jdh/ok_next_big_open_source_model_also_from_china/ | false | false | default | 872 | {'enabled': True, 'images': [{'id': 'j6rwug34juef1', 'resolutions': [{'height': 141, 'url': 'https://preview.redd.it/j6rwug34juef1.png?width=108&crop=smart&auto=webp&s=bb9a593e1fb7f521dc0f069833d5296c3e11f7e9', 'width': 108}, {'height': 283, 'url': 'https://preview.redd.it/j6rwug34juef1.png?width=216&crop=smart&auto=we... | |
Data format for chain-of-thoughts based fine tuning Qwen 3 0.6B | 1 | [removed] | 2025-07-24T16:03:05 | https://www.reddit.com/r/LocalLLaMA/comments/1m88dst/data_format_for_chainofthoughts_based_fine_tuning/ | MobileBrief9394 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m88dst | false | null | t3_1m88dst | /r/LocalLLaMA/comments/1m88dst/data_format_for_chainofthoughts_based_fine_tuning/ | false | false | self | 1 | null |
Achieved 0% hallucination rate on Grok 4 Heavy's adversarial benchmark (20/20 perfect) | 0 | Tested my anti-hallucination system against 20 questions designed by Grok 4 Heavy to force hallucinations.
Results:
- Unprotected models (GPT-4, Claude, Gemini): Failed within 3-4 questions
- With my system: 20/20 perfect score
This is pure prompt engineering - no fine-tuning, works as a wrapper for any model.
GitHu... | 2025-07-24T16:01:32 | https://www.reddit.com/r/LocalLLaMA/comments/1m88c85/achieved_0_hallucination_rate_on_grok_4_heavys/ | Prestigious-Fan118 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m88c85 | false | null | t3_1m88c85 | /r/LocalLLaMA/comments/1m88c85/achieved_0_hallucination_rate_on_grok_4_heavys/ | false | false | self | 0 | null |
How it feels before the inevitable | 0 | “When I was a kid, it felt like they released a new model everyday. Some Qwen or Deepseek, like everyday was christmas.” | 2025-07-24T15:49:31 | No_Conversation9561 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m880ga | false | null | t3_1m880ga | /r/LocalLLaMA/comments/1m880ga/how_it_feels_before_the_inevitable/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'k6nw466ofuef1', 'resolutions': [{'height': 75, 'url': 'https://preview.redd.it/k6nw466ofuef1.jpeg?width=108&crop=smart&auto=webp&s=1f54d6de780531b40789ce0780b96e050f0bfe27', 'width': 108}, {'height': 150, 'url': 'https://preview.redd.it/k6nw466ofuef1.jpeg?width=216&crop=smart&auto=w... |