| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Multi-Code-Agent Orchestration Extension | 1 | [removed] | 2025-07-15T05:19:08 | https://github.com/jdbridgeman/multi-agent-orchestration-extension | Anthemic-AI | github.com | 1970-01-01T00:00:00 | 0 | {} | 1m09aya | false | null | t3_1m09aya | /r/LocalLLaMA/comments/1m09aya/multicodeagent_orchestration_extension/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'a55dz50m8DYAJreHDs0mLXPSK0k7PdJDge1TNtrqVCQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/a55dz50m8DYAJreHDs0mLXPSK0k7PdJDge1TNtrqVCQ.png?width=108&crop=smart&auto=webp&s=a0f278668e1259d7873ba1d3fe1677bfef2e819c', 'width': 108}, {'height': 108, 'url': 'h... | |
Are there any models that can upmix stereo into surround!!! | 3 | So, I have an older Pioneer VSX-529 and it definitely doesn't do newer DTS or Dolby encoding, but I use my desktop PC instead and also happen to have a pretty powerful RTX 4080S. The question is: do these real-time upmixing models exist, to convert stereo into surround audio from YouTube, Spotify, any media? I'm looking... | 2025-07-15T04:24:57 | https://www.reddit.com/r/LocalLLaMA/comments/1m08bvp/are_there_any_models_that_can_upmix_stereo_into/ | Puzzleheaded_Soup847 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m08bvp | false | null | t3_1m08bvp | /r/LocalLLaMA/comments/1m08bvp/are_there_any_models_that_can_upmix_stereo_into/ | false | false | self | 3 | null |
How to increase character limit in TTS? | 1 | Using Chatterbox locally and it's limited to 300 characters :/
Is there any way to increase the character limit?
Someone mentioned someone had created an increased character limit in Chatterbox: [https://github.com/RemmyLee/chattered/](https://github.com/RemmyLee/chattered/) but I'm not sure if there is malicious code despi... | 2025-07-15T04:17:12 | https://www.reddit.com/r/LocalLLaMA/comments/1m086sk/how_to_increase_character_limit_in_tts/ | Dragonacious | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m086sk | false | null | t3_1m086sk | /r/LocalLLaMA/comments/1m086sk/how_to_increase_character_limit_in_tts/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'GGUj68vho4nJjmwxwoFS4_rpmwwavOgcI4tkI1mk2_4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/GGUj68vho4nJjmwxwoFS4_rpmwwavOgcI4tkI1mk2_4.png?width=108&crop=smart&auto=webp&s=d64855289ab354b03dc64faed3d37e46b6cc0721', 'width': 108}, {'height': 108, 'url': 'h... |
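The usual workaround for a hard per-request limit like this is to split the text into chunks under the limit, synthesize each chunk, and concatenate the audio. A minimal sketch of the chunking side; `synthesize` is a hypothetical stand-in for whatever call Chatterbox exposes:

```python
# Sketch: split long text into <300-character chunks at sentence
# boundaries, then synthesize each chunk separately.
import re

MAX_CHARS = 300

def chunk_text(text: str, limit: int = MAX_CHARS) -> list[str]:
    chunks, current = [], ""
    for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
        # Note: a single sentence longer than `limit` still becomes
        # one oversized chunk; split on commas too if that matters.
        if current and len(current) + len(sentence) + 1 > limit:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks

def speak_long(text: str, synthesize) -> list:
    # One audio segment per chunk; join them afterwards with an
    # audio library of your choice (e.g. pydub or raw numpy).
    return [synthesize(chunk) for chunk in chunk_text(text)]
```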
gemini 2.5 pro bug | 0 | 2025-07-15T04:13:56 | https://www.reddit.com/r/LocalLLaMA/comments/1m084lw/gemini_25_pro_bug/ | JuggernautUpbeat3547 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m084lw | false | null | t3_1m084lw | /r/LocalLLaMA/comments/1m084lw/gemini_25_pro_bug/ | false | false | 0 | null | ||
Test MNN Chat for Android | 14 | We are alpha testing the Google Play version of MNN Chat and looking for feedback from users like you.
1. First, join our Google Group:[MNN Chat Testers](https://groups.google.com/g/mnn-chat/)
2. Then, download the app from the Play Store:[Get MNN Chat](https://play.google.com/store/apps/details?id=com.alibaba.mnnllm.androi... | 2025-07-15T04:09:10 | https://www.reddit.com/r/LocalLLaMA/comments/1m081hm/test_mnn_chat_for_android/ | Juude89 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m081hm | false | null | t3_1m081hm | /r/LocalLLaMA/comments/1m081hm/test_mnn_chat_for_android/ | false | false | 14 | null | |
[XTTS v2] Why do some voices sound good and others fail when training? | 0 | **[XTTS v2] Why do some voices sound good and others fail when training?**
Hi, I'm experimenting with XTTS v2 (Coqui) to clone custom voices.
I've noticed the following:
- If I train with my own recorded voice, the model sounds good.
- If I use the model's sample audios (like `female.wav`... | 2025-07-15T03:57:48 | https://www.reddit.com/r/LocalLLaMA/comments/1m07tkl/xtts_v2_por_qué_algunas_voces_suenan_bien_y_otras/ | Blitzo_45 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m07tkl | false | null | t3_1m07tkl | /r/LocalLLaMA/comments/1m07tkl/xtts_v2_por_qué_algunas_voces_suenan_bien_y_otras/ | false | false | self | 0 | null |
Gaetone addressed multiple issues effectively! | 1 | [removed] | 2025-07-15T03:40:22 | https://www.reddit.com/r/LocalLLaMA/comments/1m07hda/gaetone_addressed_multiple_issues_effectively/ | Popular_Army_9040 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m07hda | false | null | t3_1m07hda | /r/LocalLLaMA/comments/1m07hda/gaetone_addressed_multiple_issues_effectively/ | false | false | self | 1 | null |
Gaetone addressed multiple issues effectively! | 1 | [removed] | 2025-07-15T03:36:29 | https://www.reddit.com/r/LocalLLaMA/comments/1m07epd/gaetone_addressed_multiple_issues_effectively/ | Popular_Army_9040 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m07epd | false | null | t3_1m07epd | /r/LocalLLaMA/comments/1m07epd/gaetone_addressed_multiple_issues_effectively/ | false | false | self | 1 | null |
Non-reasoning models adopting reasoning behavior from previous messages | 19 | I've noticed that if you begin a chat with a reasoning model like Qwen 3 and then in subsequent messages switch to a different non-reasoning model (such as Gemma 3 12b or Devstral 2507) the non-reasoning model will sometimes also generate reasoning tokens and respond with a final answer afterwards like it was trained t... | 2025-07-15T02:58:19 | https://www.reddit.com/r/LocalLLaMA/comments/1m06nhe/nonreasoning_models_adopting_reasoning_behavior/ | Thedudely1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m06nhe | false | null | t3_1m06nhe | /r/LocalLLaMA/comments/1m06nhe/nonreasoning_models_adopting_reasoning_behavior/ | false | false | self | 19 | null |
I built an AI PC - what should I try out first? | 0 | I've been away from the local AI scene for about 6 months - it feels like an eternity - so much has changed! I decided I wanted to get back into the game by building a new machine. Because so much has changed, I'm having a hard time gauging what I will actually be able to do and what models I will run with my new spe... | 2025-07-15T02:55:55 | https://www.reddit.com/r/LocalLLaMA/comments/1m06lrz/i_built_an_ai_pc_what_should_i_try_out_first/ | wesarnquist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m06lrz | false | null | t3_1m06lrz | /r/LocalLLaMA/comments/1m06lrz/i_built_an_ai_pc_what_should_i_try_out_first/ | false | false | self | 0 | null |
Which local LLMs and/or libraries can I use to guide or train to identify where relevant data is located on a web page for web scraping purposes? Using natural language | 1 | I am trying to build a full crawler and scraper that runs completely locally with the help of an LLM, so that it can work with any website without writing code for each site.
**Example of a use case:**
I want to scrape the list of watches from Amazon without using traditional scrapers that rely on CSS selectors. ... | 2025-07-15T02:42:15 | https://www.reddit.com/r/LocalLLaMA/comments/1m06bru/which_local_llms_andor_libraries_can_i_use_to/ | THenrich | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m06bru | false | null | t3_1m06bru | /r/LocalLLaMA/comments/1m06bru/which_local_llms_andor_libraries_can_i_use_to/ | false | false | self | 1 | null |
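One common pattern for this kind of natural-language scraping is to have the LLM propose CSS selectors once per page layout, then reuse them with an ordinary parser, so the model runs once per site rather than once per record. A minimal sketch against a local Ollama server; the model name, prompt, and field names are assumptions:

```python
# Sketch: ask a local LLM (via Ollama's /api/generate endpoint)
# to propose CSS selectors for a page, then reuse them with a
# conventional HTML parser.
import json
import requests
from bs4 import BeautifulSoup

def propose_selectors(html_snippet: str) -> dict:
    prompt = (
        "Given this HTML, reply with JSON mapping the fields "
        "'title' and 'price' to CSS selectors:\n" + html_snippet
    )
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "qwen2.5:14b-instruct", "prompt": prompt,
              "format": "json", "stream": False},
        timeout=120,
    )
    return json.loads(resp.json()["response"])

def extract(html: str, selectors: dict) -> list[dict]:
    soup = BeautifulSoup(html, "html.parser")
    titles = [el.get_text(strip=True) for el in soup.select(selectors["title"])]
    prices = [el.get_text(strip=True) for el in soup.select(selectors["price"])]
    return [{"title": t, "price": p} for t, p in zip(titles, prices)]
```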
🧠 Local-first AI in the browser isn’t sci-fi anymore. It’s actually useful now. | 1 | [removed] | 2025-07-15T01:52:13 | https://www.reddit.com/r/LocalLLaMA/comments/1m059mc/localfirst_ai_in_the_browser_isnt_scifi_anymore/ | Responsible_Cash296 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m059mc | false | null | t3_1m059mc | /r/LocalLLaMA/comments/1m059mc/localfirst_ai_in_the_browser_isnt_scifi_anymore/ | false | false | self | 1 | null |
Grok: no more open-source models? | 53 | I think that has happened. Elon Musk forgot, or canceled, the promise that Grok-2 would be open sourced after Grok-3 was stable. Grok-4 is out now, but Elon Musk did not open source Grok-2 or even Grok-3. I think Elon Musk is following OpenAI or Anthropic. Even now, Elon Musk still makes announcements that he will open source Gr... | 2025-07-15T01:16:48 | https://www.reddit.com/r/LocalLLaMA/comments/1m04ic2/grok_no_more_model_opensource/ | Brilliant_Stock_5137 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m04ic2 | false | null | t3_1m04ic2 | /r/LocalLLaMA/comments/1m04ic2/grok_no_more_model_opensource/ | false | false | self | 53 | null |
Tesla M40 vs 3070 8GB for LLM performance | 1 | [removed] | 2025-07-15T01:11:50 | https://www.reddit.com/r/LocalLLaMA/comments/1m04edy/tesla_m40_vs_3070_8gb_for_llm_preformance/ | imp_erfection_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m04edy | false | null | t3_1m04edy | /r/LocalLLaMA/comments/1m04edy/tesla_m40_vs_3070_8gb_for_llm_preformance/ | false | false | self | 1 | null |
EXAONE 4.0 32B | 286 | 2025-07-15T01:06:15 | https://huggingface.co/LGAI-EXAONE/EXAONE-4.0-32B | minpeter2 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1m04a20 | false | null | t3_1m04a20 | /r/LocalLLaMA/comments/1m04a20/exaone_40_32b/ | false | false | 286 | {'enabled': False, 'images': [{'id': '8nr2BOfjyJy107kRprOzRDlPGzQeiMZ1zJzNkF1pk6I', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8nr2BOfjyJy107kRprOzRDlPGzQeiMZ1zJzNkF1pk6I.png?width=108&crop=smart&auto=webp&s=01c7ab98318dec8e4dfb9ad444e48cb42d1afee0', 'width': 108}, {'height': 116, 'url': 'h... | ||
Meta’s New Superintelligence Lab Is Discussing Major A.I. Strategy Changes | 37 | Last week, a small group of top members of the lab, including Alexandr Wang, 28, Meta’s new chief A.I. officer, discussed abandoning the company’s most powerful open source A.I. model, called Behemoth, in favor of developing a closed model, two people with knowledge of the matter said.
Meta had finished feeding in dat... | 2025-07-15T00:55:39 | sunshinecheung | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m041m4 | false | null | t3_1m041m4 | /r/LocalLLaMA/comments/1m041m4/metas_new_superintelligence_lab_is_discussing/ | false | false | default | 37 | {'enabled': True, 'images': [{'id': '3f68h6pzrxcf1', 'resolutions': [{'height': 98, 'url': 'https://preview.redd.it/3f68h6pzrxcf1.jpeg?width=108&crop=smart&auto=webp&s=1d2ac4cc26b27343c96b0a03f825276999d5b174', 'width': 108}, {'height': 196, 'url': 'https://preview.redd.it/3f68h6pzrxcf1.jpeg?width=216&crop=smart&auto=w... | |
What's up with the weird OR provider prices, they make no sense at all. | 0 | 2025-07-15T00:53:58 | Specter_Origin | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m040ag | false | null | t3_1m040ag | /r/LocalLLaMA/comments/1m040ag/whats_up_with_the_weird_or_provider_prices_they/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'kk8hpolkrxcf1', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/kk8hpolkrxcf1.jpeg?width=108&crop=smart&auto=webp&s=fc2ae195e1c6e48ec0a3481a2a73fbe283e00209', 'width': 108}, {'height': 127, 'url': 'https://preview.redd.it/kk8hpolkrxcf1.jpeg?width=216&crop=smart&auto=w... | ||
Does vLLM not support Qwen3 GGUFs? What sort of models/quants are people running in vLLM? | 10 | I'm currently using llama_cpp with Python bindings, but have heard that vLLM can be much faster, especially when batching.
But I'm not sure how to migrate my workflow that uses a Qwen3 gguf over to vLLM | 2025-07-15T00:44:02 | https://www.reddit.com/r/LocalLLaMA/comments/1m03sio/does_vllm_not_support_qwen3_ggufs_what_sort_of/ | AuspiciousApple | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m03sio | false | null | t3_1m03sio | /r/LocalLLaMA/comments/1m03sio/does_vllm_not_support_qwen3_ggufs_what_sort_of/ | false | false | self | 10 | null |
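For context, vLLM's GGUF support is experimental and limited, so the usual route is to serve the original HF checkpoint or an AWQ/GPTQ quant instead. A minimal sketch of the offline Python API; the model name is just an example:

```python
# Sketch: vLLM with an HF-format Qwen3 checkpoint instead of a
# GGUF file; swap in an AWQ/GPTQ repo for quantized weights.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen3-8B")
params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Explain what paged attention does."], params)
print(outputs[0].outputs[0].text)
```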
A very nice overview on how llama.cpp quantization works | 62 | https://youtu.be/vW30o4U9BFE | 2025-07-15T00:43:58 | https://www.reddit.com/r/LocalLLaMA/comments/1m03sh9/a_very_nice_overview_on_how_llamacpp_quantization/ | Kooshi_Govno | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m03sh9 | false | null | t3_1m03sh9 | /r/LocalLLaMA/comments/1m03sh9/a_very_nice_overview_on_how_llamacpp_quantization/ | false | false | self | 62 | {'enabled': False, 'images': [{'id': 'zBHQlpO9zlBFyYQgYtAREE1WWUYjkWGTsfUFaGNxAj8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/zBHQlpO9zlBFyYQgYtAREE1WWUYjkWGTsfUFaGNxAj8.jpeg?width=108&crop=smart&auto=webp&s=617b8e7b6a8328658f045f639d60a0618107415a', 'width': 108}, {'height': 162, 'url': '... |
Thank you, Unsloth! You guys are legends!!! (Now I just need 256GB of DDR5) | 237 | 2025-07-14T23:25:45 | Porespellar | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m021nx | false | null | t3_1m021nx | /r/LocalLLaMA/comments/1m021nx/thank_you_unsloth_you_guys_are_legends_now_i_just/ | false | false | default | 237 | {'enabled': True, 'images': [{'id': 'nl35mhaybxcf1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/nl35mhaybxcf1.jpeg?width=108&crop=smart&auto=webp&s=36b347c1b0477d5c0c50ee12646d88d4534cf13b', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/nl35mhaybxcf1.jpeg?width=216&crop=smart&auto=... | ||
Did anyone manage to use nllb with cuda acceleration on Windows? | 0 | I installed Meta nllb language translation on Windows, but it only uses the cpu which is slow, did anyone manage to figure out how to use cuda acceleration on Windows? | 2025-07-14T22:57:28 | https://www.reddit.com/r/LocalLLaMA/comments/1m01d8x/did_anyone_manage_to_use_nllb_with_cuda/ | Mashic | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m01d8x | false | null | t3_1m01d8x | /r/LocalLLaMA/comments/1m01d8x/did_anyone_manage_to_use_nllb_with_cuda/ | false | false | self | 0 | null |
Moonshot AI’s open source Kimi K2 outperforms GPT-4 in key benchmarks | 67 | 2025-07-14T22:46:29 | https://moonshotai.github.io/Kimi-K2/ | yogthos | moonshotai.github.io | 1970-01-01T00:00:00 | 0 | {} | 1m013ou | false | null | t3_1m013ou | /r/LocalLLaMA/comments/1m013ou/moonshot_ais_open_source_kimi_k2_outperforms_gpt4/ | false | false | default | 67 | null | |
Meta on track to be first lab with a 1GW supercluster | 189 | 2025-07-14T22:43:36 | jd_3d | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m0115d | false | null | t3_1m0115d | /r/LocalLLaMA/comments/1m0115d/meta_on_track_to_be_first_lab_with_a_1gw/ | false | false | 189 | {'enabled': True, 'images': [{'id': '-MjK6OyxWPNhUMp76GWSxVzUaVyreb5xWXtn3a_Nrzc', 'resolutions': [{'height': 49, 'url': 'https://preview.redd.it/584vdadc4xcf1.png?width=108&crop=smart&auto=webp&s=aec9db4a133d2a998b21becc7885a200bea8a2bc', 'width': 108}, {'height': 98, 'url': 'https://preview.redd.it/584vdadc4xcf1.png?... | |||
Meta on track to be the first with a 1GW supercluster | 1 | [deleted] | 2025-07-14T22:42:13 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1m00zvm | false | null | t3_1m00zvm | /r/LocalLLaMA/comments/1m00zvm/meta_on_track_to_be_the_first_with_a_1gw/ | false | false | default | 1 | null | ||
Help needed: 20+ devs on the local model | 0 | After reading all your amazing posts, I've bought in. I'm about to offer my management a localized coding agent, to prevent code and API key leaks. From 20 to 50 people coding at any moment.
Locally I'd need a used 3080+ card. But what type of hardware am I looking at to provide for 20+ folks? | 2025-07-14T22:40:47 | https://www.reddit.com/r/LocalLLaMA/comments/1m00yn1/help_needed_20_devs_on_the_local_model/ | 3dom | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m00yn1 | false | null | t3_1m00yn1 | /r/LocalLLaMA/comments/1m00yn1/help_needed_20_devs_on_the_local_model/ | false | false | self | 0 | null |
Enough resources for light AI workloads? | 1 | Long story short: I won 2 sticks of 32GB DDR5 RAM, but I only have a gaming laptop, and I have always wanted to build a PC.
Can I skip buying a GPU for now, put my unbelievable 64GB to use with a CPU, and run LLMs and STT models from it? In terms of loading the models, I know that I will be able to load bigger models ... | 2025-07-14T21:44:51 | https://www.reddit.com/r/LocalLLaMA/comments/1lzzka4/enough_resources_for_light_ai_workloads/ | EyasDBoi_i | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzzka4 | false | null | t3_1lzzka4 | /r/LocalLLaMA/comments/1lzzka4/enough_resources_for_light_ai_workloads/ | false | false | self | 1 | null |
MMLU-ProX: A Multilingual Benchmark for Advanced Large Language Model Evaluation | 35 | MMLU-ProX is a multilingual benchmark that extends the challenging MMLU-Pro benchmark to 29 typologically diverse languages, designed to evaluate the cross-lingual reasoning capabilities of large language models (LLMs). Built through a rigorous four-stage translation pipeline using state-of-the-art LLMs (primarily Clau... | 2025-07-14T21:36:24 | https://www.reddit.com/gallery/1lzzcje | Balance- | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1lzzcje | false | null | t3_1lzzcje | /r/LocalLLaMA/comments/1lzzcje/mmluprox_a_multilingual_benchmark_for_advanced/ | false | false | 35 | null | |
What are the best practices for vector search + filtering with LLM? | 3 | Hey, I am building a small tool for myself to load up links, files, PDFs, photos, and text, and later recall them by text, because I'm anxious about losing these links and presume I am going to need them later. I don't like managers with folders for organising those links, because at some point that becomes a whole other job.
I am think... | 2025-07-14T21:24:02 | https://www.reddit.com/r/LocalLLaMA/comments/1lzz13f/what_are_the_best_practices_for_vector_search/ | andrewshvv | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzz13f | false | null | t3_1lzz13f | /r/LocalLLaMA/comments/1lzz13f/what_are_the_best_practices_for_vector_search/ | false | false | self | 3 | null |
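The usual baseline here is: store an embedding plus metadata per item, filter on the metadata first, then rank the survivors by cosine similarity. A minimal sketch; the embedding model and field names are assumptions:

```python
# Sketch: pre-filter items by metadata, then rank the survivors
# by cosine similarity over normalized embeddings.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

items = [
    {"text": "GPU pricing spreadsheet", "kind": "link"},
    {"text": "Paper on MoE routing", "kind": "pdf"},
]
vectors = model.encode([it["text"] for it in items], normalize_embeddings=True)

def search(query: str, kind: str | None = None, k: int = 5):
    q = model.encode([query], normalize_embeddings=True)[0]
    idx = [i for i, it in enumerate(items) if kind is None or it["kind"] == kind]
    # With normalized vectors, the dot product is cosine similarity.
    ranked = sorted(idx, key=lambda i: -float(np.dot(vectors[i], q)))
    return [items[i] for i in ranked[:k]]

print(search("mixture of experts", kind="pdf"))
```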
Kimi K2 tops creative writing benchmark | 317 | 2025-07-14T21:19:11 | fictionlive | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lzywie | false | null | t3_1lzywie | /r/LocalLLaMA/comments/1lzywie/kimi_k2_tops_creative_writing_benchmark/ | false | false | default | 317 | {'enabled': True, 'images': [{'id': 'q48f55vcpwcf1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/q48f55vcpwcf1.jpeg?width=108&crop=smart&auto=webp&s=3b78c6da6f3a69e12a60113ac6638feb8001f4a9', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/q48f55vcpwcf1.jpeg?width=216&crop=smart&auto=w... | ||
GitHub - SrijanSriv211/Palm: Palm is a tree, not a language model | 6 | It's a simple experimental language model architecture based on Andrej Karpathy's nanoGPT project.
It's an experiment trying out different improvements to the transformer architecture. Some of the improvements come from the following techniques:
- Modernized architecture: Rotary embeddings, QK-Norm, and ReLU²
- Untie... | 2025-07-14T21:05:41 | https://github.com/SrijanSriv211/Palm | SrijSriv211 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1lzyk1k | false | null | t3_1lzyk1k | /r/LocalLLaMA/comments/1lzyk1k/github_srijansriv211palm_palm_is_a_tree_not_a/ | false | false | default | 6 | {'enabled': False, 'images': [{'id': 'ttfxke-7hOI2ccNHW6Ntk_j3qIw_X09SaAhgQ-PpovQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ttfxke-7hOI2ccNHW6Ntk_j3qIw_X09SaAhgQ-PpovQ.png?width=108&crop=smart&auto=webp&s=ef72931372700484d624279573ebd30522837f10', 'width': 108}, {'height': 108, 'url': 'h... |
Code assistant way to start | 1 | Hello,
I'm looking for a place to start reading and experimenting a bit, but wanted to ask here to pick a good starting point.
Currently, I have an RTX 3070 8GB. What model can I run locally to get started with a code assistant (meaning, asking about 'algorithm' snippets or checking code)?
Also, what do I need to learn to set up AI if I w... | 2025-07-14T20:44:11 | https://www.reddit.com/r/LocalLLaMA/comments/1lzy059/code_assistant_way_to_start/ | machond | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzy059 | false | null | t3_1lzy059 | /r/LocalLLaMA/comments/1lzy059/code_assistant_way_to_start/ | false | false | self | 1 | null |
Built and launched a price-drop tracker iOS app on the App Store in under a month | 0 | **Took this from idea to launch in under 30 days. Just dropped Priceify on the iOS App Store – it’s a simple app that tracks product prices and notifies you instantly when the price drops.**
You just paste in a product link (like Amazon), and it keeps checking the price in the background. If it drops, you get a push n... | 2025-07-14T20:18:02 | https://v.redd.it/80mwg6ncewcf1 | punchingops | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lzxb09 | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/80mwg6ncewcf1/DASHPlaylist.mpd?a=1755116298%2CMjhhZWZhMmUxYzk3OTA3NzQwNDM5MmEwYTk1MGJiNWI5ZWRkZGJmY2I1NTI5YTBiOGUzY2RhOWVjNGM2MjY1Yw%3D%3D&v=1&f=sd', 'duration': 36, 'fallback_url': 'https://v.redd.it/80mwg6ncewcf1/DASH_720.mp4?source=fallback', 'ha... | t3_1lzxb09 | /r/LocalLLaMA/comments/1lzxb09/built_and_launched_on_the_app_store_a_price_drop/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'aTh4eWw4bmNld2NmMSIoHfgWpFrYBgcjpqLBhiLD5Kpy4g1l3ttzbTNXbIjM', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/aTh4eWw4bmNld2NmMSIoHfgWpFrYBgcjpqLBhiLD5Kpy4g1l3ttzbTNXbIjM.png?width=108&crop=smart&format=pjpg&auto=webp&s=7ae6b0dda2b86b43ba6904e29dd44334fd5a... | |
NVMe for local LLM is too slow. Any ideas? | 3 | So, here is the problem. I'm actually facing it as I'm writing this post.
I use multiple LLM models (32b and 70b at Q4 or Q8, qwen, qwq, deepseek, llama, etc). I also use Open WebUI for prompting them. What I like the most is the ability to have a single prompt sent to multiple LLMs and get their outputs side by side.... | 2025-07-14T20:06:37 | https://www.reddit.com/r/LocalLLaMA/comments/1lzx039/nvme_for_local_llm_is_too_slow_any_ideas/ | ChopSticksPlease | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzx039 | false | null | t3_1lzx039 | /r/LocalLLaMA/comments/1lzx039/nvme_for_local_llm_is_too_slow_any_ideas/ | false | false | self | 3 | null |
Building a Focus App with Local LLMs — But Latency Is a Real Challenge, seeking suggestions | 0 | I’m working on a small AI app called **Preceptor** — think of it like a privacy-first accountability partner that helps you stay focused **without spying on your screen**
Here’s the idea:
* It runs **entirely offline**, using local LLMs via [Ollama](https://ollama.com/)
* Tracks **which app or browser tab** you’re on... | 2025-07-14T19:55:56 | https://www.reddit.com/r/LocalLLaMA/comments/1lzwps3/building_a_focus_app_with_local_llms_but_latency/ | Frosty-Cap-4282 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzwps3 | false | null | t3_1lzwps3 | /r/LocalLLaMA/comments/1lzwps3/building_a_focus_app_with_local_llms_but_latency/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=108&crop=smart&auto=webp&s=3dc759de0e8fa36d241c5728d41ee3cf022cab96', 'width': 108}, {'height': 113, 'url': 'h... |
RHPS – A Real-Time Hallucination Prevention Framework for LLMs | 1 | [removed] | 2025-07-14T19:43:48 | https://www.reddit.com/r/LocalLLaMA/comments/1lzwdyl/rhps_a_realtime_hallucination_prevention/ | Own_Cryptographer271 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzwdyl | false | null | t3_1lzwdyl | /r/LocalLLaMA/comments/1lzwdyl/rhps_a_realtime_hallucination_prevention/ | false | false | self | 1 | null |
Open Source Alternative to NotebookLM | 105 | For those of you who aren't familiar with SurfSense, it aims to be the **open-source alternative to NotebookLM, Perplexity, or Glean.**
In short, it's a **Highly Customizable AI Research Agent** that connects to your personal external sources and search engines (Tavily, LinkUp), Slack, Linear, Notion, YouTube, GitHub,... | 2025-07-14T19:36:39 | https://www.reddit.com/r/LocalLLaMA/comments/1lzw6yu/open_source_alternative_to_notebooklm/ | Uiqueblhats | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzw6yu | false | null | t3_1lzw6yu | /r/LocalLLaMA/comments/1lzw6yu/open_source_alternative_to_notebooklm/ | false | false | self | 105 | {'enabled': False, 'images': [{'id': 'noowT43T3LN7OTEgV59heiSGbz5GQhyxeea4MTjqxTg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/noowT43T3LN7OTEgV59heiSGbz5GQhyxeea4MTjqxTg.png?width=108&crop=smart&auto=webp&s=e4cc9007548328d59c5d49b07997ca37e0c33349', 'width': 108}, {'height': 108, 'url': 'h... |
I want to hire 100k programmers and create the first tech giant startup | 0 | I feel like people really limit themselves; they need to understand the concept of scale, and instead of settling on something small you can go for something big.
A lot of programmers are getting laid off and the tech space is in an interesting spot with the rise of AI. So I thought with all the tech layoffs why not... | 2025-07-14T19:23:54 | https://www.reddit.com/r/LocalLLaMA/comments/1lzvuu7/i_want_to_hire_100k_programmers_and_create_the/ | zeeza48 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzvuu7 | false | null | t3_1lzvuu7 | /r/LocalLLaMA/comments/1lzvuu7/i_want_to_hire_100k_programmers_and_create_the/ | false | false | self | 0 | null |
What open weight models do you find to be the best for frontend dev and coding? | 1 | [removed] | 2025-07-14T19:02:56 | adviceguru25 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lzvabw | false | null | t3_1lzvabw | /r/LocalLLaMA/comments/1lzvabw/what_open_weight_models_do_you_find_to_be_the/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': '772nbtl11wcf1', 'resolutions': [{'height': 91, 'url': 'https://preview.redd.it/772nbtl11wcf1.jpeg?width=108&crop=smart&auto=webp&s=91ae8c7c8bb31435f39fd55449103c260251b24c', 'width': 108}, {'height': 183, 'url': 'https://preview.redd.it/772nbtl11wcf1.jpeg?width=216&crop=smart&auto=w... | |
7/14 Update on UI/UX Benchmark: More OS models, Kimi-K2, Changelog | 1 | Read my [recent post](https://www.reddit.com/r/LocalLLaMA/comments/1lxth6s/711_update_on_design_arena_added_devstral_qwen/) for context; on the [benchmark](https://www.designarena.ai/) for evaluating LLMs on UI/UX capabilities, we have some updates:
1. More open source models were added for the general categories ... | 2025-07-14T18:59:02 | https://i.redd.it/oetehnuc0wcf1 | adviceguru25 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lzv6c3 | false | null | t3_1lzv6c3 | /r/LocalLLaMA/comments/1lzv6c3/714_update_on_uiux_benchmark_more_os_models/ | false | false | default | 1 | null |
Meta’s New Superintelligence Lab Is Discussing Major A.I. Strategy Changes | 106 | 2025-07-14T18:54:00 | https://www.nytimes.com/2025/07/14/technology/meta-superintelligence-lab-ai.html | showmeufos | nytimes.com | 1970-01-01T00:00:00 | 0 | {} | 1lzv16g | false | null | t3_1lzv16g | /r/LocalLLaMA/comments/1lzv16g/metas_new_superintelligence_lab_is_discussing/ | false | false | 106 | {'enabled': False, 'images': [{'id': '62QXtiCManuS6UimUaWcoUxH8gOETN8-9D6ljAVaZH0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/62QXtiCManuS6UimUaWcoUxH8gOETN8-9D6ljAVaZH0.jpeg?width=108&crop=smart&auto=webp&s=f431ed3ef795de81f0d9be2452ed2466f4727f88', 'width': 108}, {'height': 113, 'url': '... | ||
If you limit context to 4k tokens, which models today beat Llama2-70B from 2 years ago? | 7 | Obviously this is a silly question. 4k context is limiting to the point where even dumber models are "better" for almost any pipeline and use case.
But for those who have been running local LLMs since then, what are you observations (your experience outside of benchmark JPEG's)? What model sizes now beat Llama2-70B in... | 2025-07-14T18:26:39 | https://www.reddit.com/r/LocalLLaMA/comments/1lzuaa3/if_you_limit_context_to_4k_tokens_which_models/ | EmPips | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzuaa3 | false | null | t3_1lzuaa3 | /r/LocalLLaMA/comments/1lzuaa3/if_you_limit_context_to_4k_tokens_which_models/ | false | false | self | 7 | null |
Is the output of only the shared expert(s) in a MOE model coherent? | 0 | Before I fiddle with this, I wanted to see if anyone else has tried deactivating all but the shared expert in a MoE model to evaluate whether its output is coherent ... or if it can be trivially trained to be useful.
More broadly, I'm very interested in the potential of training a single model to work with different i... | 2025-07-14T18:25:44 | https://www.reddit.com/r/LocalLLaMA/comments/1lzu9e8/is_the_output_of_only_the_shared_experts_in_a_moe/ | gofiend | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzu9e8 | false | null | t3_1lzu9e8 | /r/LocalLLaMA/comments/1lzu9e8/is_the_output_of_only_the_shared_experts_in_a_moe/ | false | false | self | 0 | null |
What hardware is needed for 10 developers running Devstral in parallel mode? | 1 | After reading all the stuff about our glorious success - I've been "bright" enough to volunteer to set up server hardware running a code/programming assistant for the company. Code documentation, refactoring, code auto-completion - nothing complicated, the usual bits which you can sell for a mere $3B once scaled (i.e. ... | 2025-07-14T18:13:07 | https://www.reddit.com/r/LocalLLaMA/comments/1lztx4q/what_hardware_is_needed_for_10_developers_running/ | 3dom | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lztx4q | false | null | t3_1lztx4q | /r/LocalLLaMA/comments/1lztx4q/what_hardware_is_needed_for_10_developers_running/ | false | false | self | 1 | null |
Is real-time voice-to-voice still science fiction? | 25 | Hi everyone, as the title says: is it possible to have real-time voice-to-voice interaction running locally, or are we still not there yet?
I'd like to improve my speaking skills (including pronunciation) in English and Japanese, and I thought it would be great to have conversations with a local LLM.
It would also ... | 2025-07-14T18:07:52 | https://www.reddit.com/r/LocalLLaMA/comments/1lzts1z/is_realtime_voicetovoice_still_science_fiction/ | junior600 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzts1z | false | null | t3_1lzts1z | /r/LocalLLaMA/comments/1lzts1z/is_realtime_voicetovoice_still_science_fiction/ | false | false | self | 25 | null |
Kimi-K2 🤝 Anthropic | Blog Post by Justin Wong | 0 | Congrats to the Moonshot team on Kimi-K2!
* [Justin Wong (Kimi-K2) Blog Post](https://macro.com/app/md/29ef9c8b-b403-47d8-91c9-ff0520bb43c7/md/629ed15b-bb01-431d-a446-b4aa36436780)
https://preview.redd.it/ncv5le0hpvcf1.jpg?width=1486&format=pjpg&auto=webp&s=ebd82833159ab0dcaf9dae3efe8cf4692e1d9d64
| 2025-07-14T17:59:51 | https://www.reddit.com/r/LocalLLaMA/comments/1lztjtc/kimik2_anthropic_blog_post_by_justin_wong/ | LeveredRecap | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lztjtc | false | null | t3_1lztjtc | /r/LocalLLaMA/comments/1lztjtc/kimik2_anthropic_blog_post_by_justin_wong/ | false | false | 0 | null | |
Recorded a userflow for my vibecoding pet project - character selection, model setup, inline replies, and image generation | 22 | 2025-07-14T17:28:30 | https://v.redd.it/bx3hl3q5kvcf1 | RIPT1D3_Z | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lzsoqc | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/bx3hl3q5kvcf1/DASHPlaylist.mpd?a=1755106125%2CZTEzNTA2YmFmYTc2NmRlYmY5MWUzMWQ3YzI1Yjk0ZjgwNWI0NWNkZGY4MjNkYTE0YTZhMDBlYzMzMTA1NGYyNg%3D%3D&v=1&f=sd', 'duration': 38, 'fallback_url': 'https://v.redd.it/bx3hl3q5kvcf1/DASH_1080.mp4?source=fallback', 'h... | t3_1lzsoqc | /r/LocalLLaMA/comments/1lzsoqc/recorded_a_userflow_for_my_vibecoding_pet_project/ | false | false | 22 | {'enabled': False, 'images': [{'id': 'cmd5cWMzcDVrdmNmMa9ls_clNk5aCuPt4SMo0ohCKwSPbJh7Qncj7lTLMron', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/cmd5cWMzcDVrdmNmMa9ls_clNk5aCuPt4SMo0ohCKwSPbJh7Qncj7lTLMron.png?width=108&crop=smart&format=pjpg&auto=webp&s=d229202bb26b8decf4001cbe9fd8bb4d45adf... | ||
Ollama, Why No Reka Flash, SmolLM3, GLM-4? | 9 | I don't expect Ollama to have every finetuned model in their main library, and I understand that you can import GGUF models from Hugging Face.
Still, it seems pretty odd that they're missing Reka Flash-3.2, SmolLM3, and GLM-4. I believe other platforms like LM Studio, MLX, unsloth, etc. have them. | 2025-07-14T17:27:25 | https://www.reddit.com/r/LocalLLaMA/comments/1lzsnna/ollama_why_no_reka_flash_smollm3_glm4/ | chibop1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzsnna | false | null | t3_1lzsnna | /r/LocalLLaMA/comments/1lzsnna/ollama_why_no_reka_flash_smollm3_glm4/ | false | false | self | 9 | null |
Recorded a userflow for my vibecoding pet project - character selection, model setup, inline replies, and image generation | 1 | Mostly Proof of Concept as for now, but I like how it goes. | 2025-07-14T17:27:24 | https://v.redd.it/hhgkx0ltjvcf1 | RIPT1D3_Z | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lzsnm9 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/hhgkx0ltjvcf1/DASHPlaylist.mpd?a=1755106058%2CYzhmNDMyYTBhMzNiNThjMGJkOWFhNzg2NmMyYjVhZWYyODM2ODQ5Yzg0NTc3NTdmZDA3NGRmOTA4YzYxYTIzMQ%3D%3D&v=1&f=sd', 'duration': 38, 'fallback_url': 'https://v.redd.it/hhgkx0ltjvcf1/DASH_1080.mp4?source=fallback', 'h... | t3_1lzsnm9 | /r/LocalLLaMA/comments/1lzsnm9/recorded_a_userflow_for_my_vibecoding_pet_project/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'MWE2MXAxbHRqdmNmMa9ls_clNk5aCuPt4SMo0ohCKwSPbJh7Qncj7lTLMron', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/MWE2MXAxbHRqdmNmMa9ls_clNk5aCuPt4SMo0ohCKwSPbJh7Qncj7lTLMron.png?width=108&crop=smart&format=pjpg&auto=webp&s=3133857f54c9319290f0b47d38f6603c49886... | |
Haidar Ali Deposit your account now 03249677150 "A new name for winning: Betpro, where every bet is a new opportunity! The door to the world of success, now just one click away! Trust, speed, and results, all in one place! ⚡ Bet smart. Win big. Play with Betpro! ⚡ | 1 | 2025-07-14T17:17:48 | Haidarali9677150 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lzse9h | false | null | t3_1lzse9h | /r/LocalLLaMA/comments/1lzse9h/haidar_ali_deposit_your_account_now_03249677150/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'alr57g0bivcf1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/alr57g0bivcf1.jpeg?width=108&crop=smart&auto=webp&s=4fdcc5a41299604b439f3b62abc5287d3f92de7f', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/alr57g0bivcf1.jpeg?width=216&crop=smart&auto=... |
Any Uncensored llms worth hosting on an android samsung a16? | 1 | [removed] | 2025-07-14T17:14:17 | https://www.reddit.com/r/LocalLLaMA/comments/1lzsaqr/any_uncensored_llms_worth_hosting_on_an_android/ | L8_Bloom3r | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzsaqr | false | null | t3_1lzsaqr | /r/LocalLLaMA/comments/1lzsaqr/any_uncensored_llms_worth_hosting_on_an_android/ | false | false | nsfw | 1 | null |
Esoteric Game with Llama3.2 | 2 | Hey everyone! I’ve been experimenting with Ollama locally and ended up creating a little game called Holy Arcana: From Profane to Divine.
It uses Llama-3.2 to generate poetry and responses as you make your way through Tarot-inspired challenges and Kabbalistic paths.
It’s just something I made for fun, mixing AI wit... | 2025-07-14T16:54:16 | https://www.reddit.com/r/LocalLLaMA/comments/1lzrqoi/esoteric_game_with_llama32/ | Ambitious_Ad497 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzrqoi | false | null | t3_1lzrqoi | /r/LocalLLaMA/comments/1lzrqoi/esoteric_game_with_llama32/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'rvPrk_aITGFkAgs_yRAMVUD2ygqKO5b_10z3qFy3B30', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rvPrk_aITGFkAgs_yRAMVUD2ygqKO5b_10z3qFy3B30.png?width=108&crop=smart&auto=webp&s=ab5b82fc39c178d66c6d577d1e97474b18e24c69', 'width': 108}, {'height': 108, 'url': 'h... |
A practical handbook on Context Engineering with the latest research from IBM Zurich, ICML, Princeton, and more. | 42 | [https://github.com/davidkimai/Context-Engineering](https://github.com/davidkimai/Context-Engineering) | 2025-07-14T16:10:54 | https://www.reddit.com/r/LocalLLaMA/comments/1lzql0b/a_practical_handbook_on_context_engineering_with/ | recursiveauto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzql0b | false | null | t3_1lzql0b | /r/LocalLLaMA/comments/1lzql0b/a_practical_handbook_on_context_engineering_with/ | false | false | self | 42 | {'enabled': False, 'images': [{'id': 'i9rn46kobjc-H-jAw7ePfKf2WD9TjEWEMwzPTaQtEgA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/i9rn46kobjc-H-jAw7ePfKf2WD9TjEWEMwzPTaQtEgA.png?width=108&crop=smart&auto=webp&s=f42f9a68c36f1e2ccc27fbc08f8a4e8edb8f1a70', 'width': 108}, {'height': 108, 'url': 'h... |
How to improve response times for multimodal requests? | 0 | I am running Gemma 3 12B on my local computer. My prompt is about 1000 tokens of text + 3-4 images. My computer is just a regular AMD CPU (no GPU) + 64GB of DDR5 RAM, so understandably the response is slow. In particular, I have noticed that most of the time goes into just processing my input.
My question is what hardware would... | 2025-07-14T16:06:58 | https://www.reddit.com/r/LocalLLaMA/comments/1lzqh66/how_to_improve_response_times_for_multimodal/ | coolahavoc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzqh66 | false | null | t3_1lzqh66 | /r/LocalLLaMA/comments/1lzqh66/how_to_improve_response_times_for_multimodal/ | false | false | self | 0 | null |
How to Automate your Job Search with AI Agents; What We Built and Learned | 11 |
It started as a tool to help me find jobs and cut down on the countless hours each week I spent filling out applications. Pretty quickly people were asking if they could use it as well, so we made it available to more people.
How It Works:
1) Manual Mode: View your personal job matches with their score and apply yo... | 2025-07-14T15:51:35 | https://www.reddit.com/r/LocalLLaMA/comments/1lzq21a/how_to_automate_your_job_search_with_ai_agents/ | Accomplished-Leg3657 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzq21a | false | null | t3_1lzq21a | /r/LocalLLaMA/comments/1lzq21a/how_to_automate_your_job_search_with_ai_agents/ | false | false | self | 11 | null |
Kimi K2 1.8bit Unsloth Dynamic GGUFs | 364 | Hey everyone - there are some **245GB quants (80% size reduction)** for Kimi K2 at https://huggingface.co/unsloth/Kimi-K2-Instruct-GGUF. The Unsloth dynamic Q2_K_XL (381GB) surprisingly can one-shot our hardened Flappy Bird game and also the Heptagon game.
Please use `-ot ".ffn_.*_exps.=CPU"` to offload MoE layers t... | 2025-07-14T15:41:16 | https://www.reddit.com/r/LocalLLaMA/comments/1lzps3b/kimi_k2_18bit_unsloth_dynamic_ggufs/ | danielhanchen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzps3b | false | null | t3_1lzps3b | /r/LocalLLaMA/comments/1lzps3b/kimi_k2_18bit_unsloth_dynamic_ggufs/ | false | false | self | 364 | {'enabled': False, 'images': [{'id': 'yxH9RYoAESwYJz6seCzu-b7mAiGRtDXgd-N_V6wb3cw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/yxH9RYoAESwYJz6seCzu-b7mAiGRtDXgd-N_V6wb3cw.png?width=108&crop=smart&auto=webp&s=539e7c53ad8fe4d04c6029c11344ff605d38589a', 'width': 108}, {'height': 116, 'url': 'h... |
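For reference, `-ot` is llama.cpp's tensor-override flag. A sketch of a launch command wrapped in Python; the binary path, GGUF shard name, and context size are placeholders:

```python
# Sketch: run llama.cpp with MoE expert tensors kept in system
# RAM via --override-tensor (-ot), as the post suggests.
import subprocess

subprocess.run([
    "./llama-cli",
    "-m", "Kimi-K2-Instruct-UD-Q2_K_XL-00001-of-00008.gguf",  # placeholder shard name
    "-ot", ".ffn_.*_exps.=CPU",  # offload MoE expert weights to CPU
    "-ngl", "99",                # everything else onto the GPU
    "-c", "8192",
    "-p", "Write Flappy Bird in Python.",
], check=True)
```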
LM Studio can't use my GPU as main | 0 | As the title says, LM Studio always uses my CPU. I want to make LM Studio use the GPU; I've tried several changes.
Laptop specs:
24GB RAM
3070 8GB VRAM
i9 11th gen
I can't seem to use the GPU as the main resource for Llama in LM Studio
[settings](https://preview.redd.it/v0fsjfwkuucf1.png?width=1226&format=png&auto=webp&s=37decc5be289... | 2025-07-14T15:09:08 | https://www.reddit.com/r/LocalLLaMA/comments/1lzoxbl/lm_studio_cant_use_my_gpu_as_main/ | Zinxdia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzoxbl | false | null | t3_1lzoxbl | /r/LocalLLaMA/comments/1lzoxbl/lm_studio_cant_use_my_gpu_as_main/ | false | false | 0 | null | |
Best LLM for Educators? | 1 | Any advice? :) | 2025-07-14T14:59:48 | https://www.reddit.com/r/LocalLLaMA/comments/1lzooed/best_llm_for_educators/ | Creative_Structure22 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzooed | false | null | t3_1lzooed | /r/LocalLLaMA/comments/1lzooed/best_llm_for_educators/ | false | false | self | 1 | null |
I ditched all LLM frameworks and use only the OpenAI SDK for everything; I'm starting to love building AI applications this way. | 49 | I've tried several LLM frameworks and libraries, each with their own direction, like Haystack, LangChain, etc. I've also tried several agent frameworks like AutoGen, SmolAgent, and Strands. All I can say about these frameworks is that they're "exhausting."
I feel like every application built with these tools consumes t... | 2025-07-14T14:47:24 | https://www.reddit.com/r/LocalLLaMA/comments/1lzocuk/i_ditch_all_llm_framework_and_use_only_openai_sdk/ | dheetoo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzocuk | false | null | t3_1lzocuk | /r/LocalLLaMA/comments/1lzocuk/i_ditch_all_llm_framework_and_use_only_openai_sdk/ | false | false | self | 49 | null |
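The approach described boils down to talking to any OpenAI-compatible endpoint directly. A minimal sketch; the base URL and model name assume a local vLLM/llama.cpp-style server:

```python
# Sketch: plain OpenAI SDK against a local OpenAI-compatible
# server, with no agent framework in between.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="qwen2.5-14b-instruct",
    messages=[
        {"role": "system", "content": "You are a terse coding assistant."},
        {"role": "user", "content": "Summarize what a KV cache is."},
    ],
)
print(resp.choices[0].message.content)
```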
Project Idea: A REAL Community-driven LLM Stack | 0 | **Context of my project idea:**
I have been doing some research on self hosting LLMs and, of course, quickly came to the realisation of how complicated it seems to be for a solo developer to pay for the rental costs of an enterprise-grade GPU and run a SOTA open-source model like Kimi K2 32B or Qwen 32B. Renting per h... | 2025-07-14T14:31:19 | https://www.reddit.com/r/LocalLLaMA/comments/1lznxy5/project_idea_a_real_communitydriven_llm_stack/ | Budget_Map_3333 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lznxy5 | false | null | t3_1lznxy5 | /r/LocalLLaMA/comments/1lznxy5/project_idea_a_real_communitydriven_llm_stack/ | false | false | self | 0 | null |
Is there a better frontend than OpenWebui for RAG? | 3 | Recently decided to try out OpenWebUI, and something I noticed is that it does no batching when embedding multiple files; at the scale of 5000 files it feels like it will take the better part of 5 hours. I can write a tiny Python script to embed all of these files (and view them in Qdrant) in an amount of time that i... | 2025-07-14T14:05:19 | https://www.reddit.com/r/LocalLLaMA/comments/1lzna91/is_there_a_better_frontend_than_openwebui_for_rag/ | Capable-Ad-7494 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzna91 | false | null | t3_1lzna91 | /r/LocalLLaMA/comments/1lzna91/is_there_a_better_frontend_than_openwebui_for_rag/ | false | false | self | 3 | null |
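On the batching point: embedding in batches rather than per-file is where most of the speedup comes from, and it is easy to verify in a few lines. A sketch; the model name and directory are examples:

```python
# Sketch: batch-embed documents instead of calling the encoder
# once per file.
from pathlib import Path
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
texts = [p.read_text(errors="ignore") for p in Path("docs").glob("*.txt")]

# One call, internally processed in batches of 64.
embeddings = model.encode(texts, batch_size=64, show_progress_bar=True)
print(embeddings.shape)  # (num_docs, embedding_dim)
```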
Suggestions for ai agent framework and ai model for Text-to-SQL ai agent | 2 | I want to create a new AI agent for my MySQL database. The database tables are complex and require extensive documentation to make them understandable for the AI to query effectively.
I need guidance on selecting the right model and framework for this project. | 2025-07-14T14:04:49 | https://www.reddit.com/r/LocalLLaMA/comments/1lzn9th/suggestions_for_ai_agent_framework_and_ai_model/ | M7mDSa3eD_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzn9th | false | null | t3_1lzn9th | /r/LocalLLaMA/comments/1lzn9th/suggestions_for_ai_agent_framework_and_ai_model/ | false | false | self | 2 | null |
Agentic Secretary System - Tips and Recommendations? | 3 | Hello!
I'm currently investigating and planning a very fun project, my ultimate personal assistant.
The idea is to have a multi-agent system, with one main point of contact; "The Secretary". Then I have task-specific agents with expertise in different areas, like my different work projects, or notion updating etc. I ... | 2025-07-14T13:58:50 | https://www.reddit.com/r/LocalLLaMA/comments/1lzn4ae/agentic_secretary_system_tips_and_recommendations/ | Boltyx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzn4ae | false | null | t3_1lzn4ae | /r/LocalLLaMA/comments/1lzn4ae/agentic_secretary_system_tips_and_recommendations/ | false | false | self | 3 | null |
A ChatGPT alternative for us (LLM Pigeon update). | 0 | Hi guys,
roughly a month has passed since my first post about the very first, basic version of LLM Pigeon.
Today the App Store has approved the update.
It's still all free and all open source.
For people who might not know what it is:
LLM Pigeon is an iOS app that allows you to interact with LLMs running... | 2025-07-14T13:35:22 | https://www.reddit.com/r/LocalLLaMA/comments/1lzmkbb/a_chatgpt_alternative_for_us_llm_pigeon_update/ | Valuable-Run2129 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzmkbb | false | null | t3_1lzmkbb | /r/LocalLLaMA/comments/1lzmkbb/a_chatgpt_alternative_for_us_llm_pigeon_update/ | false | false | self | 0 | null |
After Kimi K2 Is Released: No Longer Just a ChatBot | 331 | This post is a personal reflection penned by a Kimi team member shortly after the launch of Kimi K2. I found the author’s insights genuinely thought-provoking. The original Chinese version is [here](https://bigeagle.me/2025/07/kimi-k2/)—feel free to read it in full (and of course you can use Kimi K2 as your translator)... | 2025-07-14T13:18:06 | https://www.reddit.com/r/LocalLLaMA/comments/1lzm645/after_kimi_k2_is_released_no_longer_just_a_chatbot/ | nekofneko | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzm645 | false | null | t3_1lzm645 | /r/LocalLLaMA/comments/1lzm645/after_kimi_k2_is_released_no_longer_just_a_chatbot/ | false | false | self | 331 | null |
Why are so many top AI scientists of Chinese origin, even in the US? | 0 | 2025-07-14T13:07:57 | VR-Person | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lzlxtr | false | null | t3_1lzlxtr | /r/LocalLLaMA/comments/1lzlxtr/why_are_so_many_top_ai_scientists_of_chinese/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'dpzmjzf45WWcdHnkKVnJTThFWGlSxai92OkH6rW1En0', 'resolutions': [{'height': 124, 'url': 'https://preview.redd.it/cp2apx4y8ucf1.jpeg?width=108&crop=smart&auto=webp&s=2272f3ab8109edc2e46bb9a4b7bbea9d44daae23', 'width': 108}, {'height': 249, 'url': 'https://preview.redd.it/cp2apx4y8ucf1.j... | |||
getting started with code assistant | 2 | Hello,
I'm looking for a place to start reading and experimenting a bit, but wanted to ask here to pick a good starting point.
Currently I have an RTX 3070 8GB. What model can I run locally to get started with a code assistant (meaning, asking about 'algorithm' snippets or checking code)?
Also, what do I need to learn to set up AI if I wo... | 2025-07-14T12:53:28 | https://www.reddit.com/r/LocalLLaMA/comments/1lzlm2t/getting_started_with_code_assistant/ | machond | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzlm2t | false | null | t3_1lzlm2t | /r/LocalLLaMA/comments/1lzlm2t/getting_started_with_code_assistant/ | false | false | self | 2 | null |
UTCP: A safer, scalable tool-calling alternative to MCP | 790 | 2025-07-14T12:33:01 | juanviera23 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lzl5zk | false | null | t3_1lzl5zk | /r/LocalLLaMA/comments/1lzl5zk/utcp_a_safer_scalable_toolcalling_alternative_to/ | false | false | 790 | {'enabled': True, 'images': [{'id': 'kbRMMR47HDIi7lZVVAy5mGTwVKuCZQBEJufsqMy9_24', 'resolutions': [{'height': 94, 'url': 'https://preview.redd.it/wv84vx7h3ucf1.png?width=108&crop=smart&auto=webp&s=af7471a080008d1b2a11663755e2386042538cdb', 'width': 108}, {'height': 189, 'url': 'https://preview.redd.it/wv84vx7h3ucf1.png... | |||
Suggestions/Alternatives for Image captions with efficient system requirements | 1 | I am new to AI/ML. We are trying to generate captions for images. I tested various versions of Qwen 2.5 VL.
I was able to run these models in Google Enterprise Colab with g2-standard-8 (8 vCPU, 32GB) and L4 (24 GB GDDR6) GPU.
Qwen 2.5 VL 3B
Caption generation - average time taken for max pixel 768*768 - 1.62... | 2025-07-14T12:13:52 | https://www.reddit.com/r/LocalLLaMA/comments/1lzkrwg/suggestionsalternatives_for_image_captions_with/ | palaniappan_05 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzkrwg | false | null | t3_1lzkrwg | /r/LocalLLaMA/comments/1lzkrwg/suggestionsalternatives_for_image_captions_with/ | false | false | self | 1 | null |
Multiple 5060 Ti's | 1 | Hi, I need to build a lab AI inference/training/development machine. Basically something to just get started, get experience, and burn as little money as possible. Due to availability problems my first choice (cheaper RTX PRO Blackwell cards) is not available. Now my question:
Would it be viable to use multiple 5060 Ti (... | 2025-07-14T11:52:05 | https://www.reddit.com/r/LocalLLaMA/comments/1lzkcg3/multiple_5060_tis/ | snorixx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzkcg3 | false | null | t3_1lzkcg3 | /r/LocalLLaMA/comments/1lzkcg3/multiple_5060_tis/ | false | false | self | 1 | null |
A mid range PC build for Dual GPU Local LLMs and SLMs. | 0 | I want to build a mid range desktop to fine-tune and host Small Language Models and LLMs.
I am thinking about using two AMD Radeon 9060 XT 16GB cards to reach 32GB VRAM on a budget.
Will it help, since 32GB cards like the Nvidia RTX 5090 are absurdly expensive? What are your suggestions about the motherboard and CPU for my build? S... | 2025-07-14T11:33:54 | https://www.reddit.com/r/LocalLLaMA/comments/1lzk041/a_mid_range_pc_build_for_dual_gpu_local_llms_and/ | iammhk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzk041 | false | null | t3_1lzk041 | /r/LocalLLaMA/comments/1lzk041/a_mid_range_pc_build_for_dual_gpu_local_llms_and/ | false | false | self | 0 | null |
Are there any local LLMs that support the Browser Use MCP? | 0 | I have tried Cline/Roo Code/OpenHands.
I used Devstral, GLM-4 32B, CodeLlama, etc.
When trying to navigate a website using the MCP server, the LLM gets stuck and cannot press actual buttons or escape the captcha page / allow-cookies pop-up.
Is there a better model to try? Or is this only possible with the Claude API models? | 2025-07-14T11:23:06 | https://www.reddit.com/r/LocalLLaMA/comments/1lzjsu3/are_there_any_local_llms_that_support_browser_use/ | iChrist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzjsu3 | false | null | t3_1lzjsu3 | /r/LocalLLaMA/comments/1lzjsu3/are_there_any_local_llms_that_support_browser_use/ | false | false | self | 0 | null |
Ollama retaining history? | 0 | So I've hosted Ollama locally on my system at [http://localhost:11434/api/generate](http://localhost:11434/api/generate) and was testing it out a bit, and it seems that between separate fetch calls, Ollama is retaining some memory.
I don't understand why this would happen, because as far as I have seen, modern l... | 2025-07-14T11:12:25 | https://www.reddit.com/r/LocalLLaMA/comments/1lzjlvi/ollama_retaining_history/ | DimensionEnergy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzjlvi | false | null | t3_1lzjlvi | /r/LocalLLaMA/comments/1lzjlvi/ollama_retaining_history/ | false | false | self | 0 | null |
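One thing worth ruling out: `/api/generate` only carries history when the `context` array from a previous response is passed back in. A sketch that contrasts the stateless and stateful calls; the model name is an example:

```python
# Sketch: Ollama's /api/generate only "remembers" earlier turns
# if you feed the returned `context` array back in. Omitting it
# should make each call independent.
import requests

URL = "http://localhost:11434/api/generate"

def generate(prompt: str, context=None) -> dict:
    payload = {"model": "llama3.1:8b", "prompt": prompt, "stream": False}
    if context is not None:
        payload["context"] = context
    return requests.post(URL, json=payload, timeout=120).json()

first = generate("My name is Ada. Remember that.")
# Stateless call: no context passed, so the name should be unknown.
print(generate("What is my name?")["response"])
# Stateful call: reuse the returned context to carry the history.
print(generate("What is my name?", context=first["context"])["response"])
```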
Foundations of Large Language Models (LLMs) | NLP Lab Research | 9 | [Foundations of Large Language Models (LLMs)](https://macro.com/app/pdf/1ace3262-d707-4dfc-9111-e3c5e3df96a1/md/ce8a6add-6d5e-48b5-be8b-4b365867d458) | 2025-07-14T10:54:29 | https://www.reddit.com/r/LocalLLaMA/comments/1lzjaf5/foundations_of_large_language_models_llms_nlp_lab/ | LeveredRecap | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzjaf5 | false | null | t3_1lzjaf5 | /r/LocalLLaMA/comments/1lzjaf5/foundations_of_large_language_models_llms_nlp_lab/ | false | false | self | 9 | null |
Anyone else interested in a 100% on-device browser AI assistant? | 0 | Hey folks!
I’m curious — has anyone thought about building or using an AI assistant that works entirely in the browser, without sending any data to external servers or APIs?
We’re experimenting with something like that. It runs local models via Ollama, and all tasks — including web search, summarization, translation,... | 2025-07-14T10:14:22 | https://www.reddit.com/r/LocalLLaMA/comments/1lzimcq/anyone_else_interested_in_a_100_ondevice_browser/ | InfiniteJX | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzimcq | false | null | t3_1lzimcq | /r/LocalLLaMA/comments/1lzimcq/anyone_else_interested_in_a_100_ondevice_browser/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'SCRdaLyzDoTV55-cdYQI2oN4jk8P9OR4Ai2vp6y5bdg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/SCRdaLyzDoTV55-cdYQI2oN4jk8P9OR4Ai2vp6y5bdg.png?width=108&crop=smart&auto=webp&s=307824a95b90814f4b420f060a268603bfc26a80', 'width': 108}, {'height': 108, 'url': 'h... |
Annoyed with LibreChat | 13 | A few weeks ago I decided to give LibreChat a try. OpenWebUI was so ... let me say ... I don't know ... clumsy?
So I went to try LibreChat. I was happy at first. More or less. Basic things worked, like selecting a model and using it. Well, that was also the case with OpenWebUI before ....
I went to integrate more of my infr... | 2025-07-14T10:11:33 | https://www.reddit.com/r/LocalLLaMA/comments/1lzikqt/annoyed_with_librechat/ | Charming_Support726 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzikqt | false | null | t3_1lzikqt | /r/LocalLLaMA/comments/1lzikqt/annoyed_with_librechat/ | false | false | self | 13 | null |
MI50 32GB with BIOS flash | 6 | I have been considering an MI50 32GB for a budget AI desktop for a while.
An issue I found was that it does not natively support display output.
But I found a version with the Radeon Pro VII's BIOS flashed onto it that should allow it to output to a display.
Another issue was that it doesn’t actually have a fan, re... | 2025-07-14T10:09:28 | https://www.reddit.com/r/LocalLLaMA/comments/1lzijk2/mi50_32gb_with_bios_flash/ | opoot_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzijk2 | false | null | t3_1lzijk2 | /r/LocalLLaMA/comments/1lzijk2/mi50_32gb_with_bios_flash/ | false | false | self | 6 | null |
Looking for affordable dedicated GPUs (A100, H100) outside AWS? | 1 | [removed] | 2025-07-14T09:53:51 | https://www.reddit.com/r/LocalLLaMA/comments/1lziaiz/looking_for_affordable_dedicated_gpus_a100_h100/ | Far_Association_6031 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lziaiz | false | null | t3_1lziaiz | /r/LocalLLaMA/comments/1lziaiz/looking_for_affordable_dedicated_gpus_a100_h100/ | false | false | self | 1 | null |
Responses keep dissolving into word salad - how to stop it? | 20 | When I use LLMs for creative writing tasks, a lot of the time they can write a couple of hundred words just fine, but then sentences break down.
The screenshot shows a typical example of one going off the rails - there are proper sentences, then some barely readable James-Joyce-style stream of consciousness, then jus... | 2025-07-14T09:18:05 | Gilgameshcomputing | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lzhqz8 | false | null | t3_1lzhqz8 | /r/LocalLLaMA/comments/1lzhqz8/responses_keep_dissolving_into_word_salad_how_to/ | false | false | default | 20 | {'enabled': True, 'images': [{'id': 'lr7kq1452tcf1', 'resolutions': [{'height': 75, 'url': 'https://preview.redd.it/lr7kq1452tcf1.png?width=108&crop=smart&auto=webp&s=cc920b45a1223c1528565fd604812590fd688bc1', 'width': 108}, {'height': 150, 'url': 'https://preview.redd.it/lr7kq1452tcf1.png?width=216&crop=smart&auto=web... | |
Comparison of latest reasoning models on the most recent LeetCode questions (Qwen-32B vs Qwen-235B vs nvidia-OpenCodeReasoning-32B vs Hunyuan-A13B) | 136 | **Testing method**
* For each question, four instances of the same model were run in parallel (i.e., best-of-4). If any of them successfully solved the question, the most optimized solution among them was selected.
* If none of the four produced a solution within the maximum context length, an additional four instanc... | 2025-07-14T09:12:20 | kyazoglu | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lzhns3 | false | null | t3_1lzhns3 | /r/LocalLLaMA/comments/1lzhns3/comparison_of_latest_reasoning_models_on_the_most/ | false | false | 136 | {'enabled': True, 'images': [{'id': 'Yt8sdbd4WSl3QWw399ju3ntGhqCOHF8RdVFnkafe5Hs', 'resolutions': [{'height': 28, 'url': 'https://preview.redd.it/nyu5vpzx2tcf1.png?width=108&crop=smart&auto=webp&s=4ba57d01c877fa0d1bec3f4aef3af9baaad55463', 'width': 108}, {'height': 56, 'url': 'https://preview.redd.it/nyu5vpzx2tcf1.png?... | ||
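A sketch of the best-of-4 selection procedure described above; `solve`, `passes_tests`, and `score` are placeholders for the model call and the LeetCode-style test harness:

```python
# Sketch: best-of-n evaluation. Run n attempts in parallel, keep
# solutions that pass the tests, and return the best-scoring one.
from concurrent.futures import ThreadPoolExecutor

def best_of_n(question: str, solve, passes_tests, score, n: int = 4):
    with ThreadPoolExecutor(max_workers=n) as pool:
        candidates = list(pool.map(solve, [question] * n))
    passing = [c for c in candidates if passes_tests(c)]
    if not passing:
        return None  # the post retries with another batch of n here
    return max(passing, key=score)  # most optimized passing solution
```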
local model for SQL Q&A + dashboard agent | 1 | I’m building a local AI agent system using n8n to handle technical SQL Q&A and dashboard generation based on database results — with tool execution via MCP Server.
My setup:
GPU: NVIDIA A10 (24GB VRAM)
So I’m limited to small to medium models (<=14B):
* llama3.1:8b-instruct-fp16
* qwen2.5:14b-instruct
Issue: Even... | 2025-07-14T09:08:47 | https://www.reddit.com/r/LocalLLaMA/comments/1lzhlvb/local_model_for_sql_qa_dashboard_agent/ | Practical-Corgi-9906 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzhlvb | false | null | t3_1lzhlvb | /r/LocalLLaMA/comments/1lzhlvb/local_model_for_sql_qa_dashboard_agent/ | false | false | self | 1 | null |
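For reference, the text-to-SQL step in a setup like this usually reduces to one constrained chat call against an OpenAI-compatible local endpoint. A minimal sketch (the base URL, model name, and schema string are assumptions about the poster's environment):
```
# Sketch: asking a local instruct model for SQL only, via an
# OpenAI-compatible endpoint. base_url, model, and schema are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")

schema = "orders(id, customer_id, total, created_at); customers(id, name)"
resp = client.chat.completions.create(
    model="qwen2.5:14b-instruct",
    messages=[
        {"role": "system",
         "content": f"Translate the question into one SQL query. Schema: {schema}. Reply with SQL only."},
        {"role": "user", "content": "Total revenue per customer in June 2025?"},
    ],
    temperature=0,
)
print(resp.choices[0].message.content)
```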
What are the best model(s) to use for inference on a 4090+3090 with Aider? | 1 | I am currently using Gemini 2.5 Pro, and I seem to be spending about $100 per month. I plan to increase my usage roughly 10-fold, so I thought of running open-source models on my 4090+3090 as a possibly cheaper alternative (and to protect my assets). I'm currently testing DeepSeek R1 70B and 8B. The 70B takes a while; the 8B seem... | 2025-07-14T08:29:20 | https://www.reddit.com/r/LocalLLaMA/comments/1lzh0cf/what_best_models_to_use_for_inference_using_a/ | cGalaxy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzh0cf | false | null | t3_1lzh0cf | /r/LocalLLaMA/comments/1lzh0cf/what_best_models_to_use_for_inference_using_a/ | false | false | self | 1 | null |
Best way to run a dockerized Linux LLM server? | 0 | Hello!
I have a server on my network housing the RTX Pro 6000. I'd like to run a few models so that I can 1. generate video (open to the interface used, but it seems like ComfyUI works well) and 2. run a chat (likely with Open WebUI).
My question is: what is the most efficient way to run the models? Ollama? I prefe... | 2025-07-14T07:52:41 | https://www.reddit.com/r/LocalLLaMA/comments/1lzggo2/best_way_to_run_dockerized_linux_llm_server/ | a_40oz_of_Mickeys | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzggo2 | false | null | t3_1lzggo2 | /r/LocalLLaMA/comments/1lzggo2/best_way_to_run_dockerized_linux_llm_server/ | false | false | self | 0 | null |
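Whatever serving stack is chosen, most of the common ones (llama.cpp's server, vLLM, Ollama) end up exposing the same OpenAI-compatible API, so the containers are largely interchangeable from the client's point of view. A minimal sketch for sanity-checking such an endpoint from another machine (the host, port, and model name are assumptions):
```
# Sketch: probing a containerized OpenAI-compatible LLM server.
# The LAN address, port, and model name are placeholders.
import requests

BASE = "http://192.168.1.50:8080/v1"

models = requests.get(f"{BASE}/models").json()
print([m["id"] for m in models["data"]])

chat = requests.post(f"{BASE}/chat/completions", json={
    "model": "local-model",
    "messages": [{"role": "user", "content": "ping"}],
    "max_tokens": 8,
}).json()
print(chat["choices"][0]["message"]["content"])
```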
Stop-Sequences - Real World Use Cases | 1 | Do you have any good use cases for using the stop-sequence functionality when calling the API?
List them below, please. | 2025-07-14T07:15:24 | https://www.reddit.com/r/LocalLLaMA/comments/1lzfwdj/stopsequences_real_world_use_cases/ | Physical_Ad9040 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzfwdj | false | null | t3_1lzfwdj | /r/LocalLLaMA/comments/1lzfwdj/stopsequences_real_world_use_cases/ | false | false | self | 1 | null |
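One classic answer, sketched below: cutting a completion at a known delimiter so the model cannot run on into the next few-shot example or invent the other speaker's turn (the endpoint and model name are assumptions; the pattern is the point):
```
# Sketch: stop sequences halt generation before the model continues
# past a delimiter ("###") or starts writing the next question itself.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="none")  # e.g. Ollama

resp = client.completions.create(
    model="llama3.1:8b",
    prompt="Q: What is 2+2?\nA: 4\n###\nQ: What is 3+5?\nA:",
    stop=["###", "\nQ:"],
    max_tokens=32,
)
print(resp.choices[0].text)
```
Other common uses: terminating structured output at a closing tag, and keeping agents from narrating past the end of a tool call.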
About LLM tool design | 0 | Regarding the design of tools, I want the LLM to generate files directly for the user. My current approach is:
Define a tool (shown here as a JSON-schema-style definition; the field types are assumed):
```
{
  "name": "gen_file",
  "parameters": {
    "type": "object",
    "properties": {
      "file_name": {"type": "string"},
      "content": {"type": "string"},
      "append": {"type": "boolean"}
    }
  }
}
```
However, I now have a different perspective. Is it really reasonable to use `content` as an argument for a tool call? Do long tool c... | 2025-07-14T07:08:53 | https://www.reddit.com/r/LocalLLaMA/comments/1lzfsxt/about_llm_tools_design/ | Dizzy-Meet-3258 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzfsxt | false | null | t3_1lzfsxt | /r/LocalLLaMA/comments/1lzfsxt/about_llm_tools_design/ | false | false | self | 0 | null |
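One way to address the long-tool-call worry while keeping `content` as an argument is to let the model emit several short calls instead of one huge one, leaning on the `append` flag. A minimal sketch of that chunking pattern (the chunk size and helper function are illustrative, not part of the original design):
```
# Sketch: splitting a large file into several gen_file tool calls,
# so no single call carries the whole content. Chunk size is arbitrary.
def emit_gen_file_calls(file_name: str, content: str, chunk: int = 2000):
    calls = []
    for i in range(0, len(content), chunk):
        calls.append({
            "name": "gen_file",
            "arguments": {
                "file_name": file_name,
                "content": content[i:i + chunk],
                "append": i > 0,  # first call creates the file, later calls append
            },
        })
    return calls
```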
Apple “will seriously consider” buying Mistral | Bloomberg - Mark Gurman | 543 | [https://www.bloomberg.com/news/newsletters/2025-07-13/is-apple-going-to-replace-ceo-tim-cook-who-is-the-next-ceo-of-apple-ternus-md1mhrj4](https://www.bloomberg.com/news/newsletters/2025-07-13/is-apple-going-to-replace-ceo-tim-cook-who-is-the-next-ceo-of-apple-ternus-md1mhrj4) (paywall)
I don't know how the French an... | 2025-07-14T06:48:39 | Nunki08 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lzfhhq | false | null | t3_1lzfhhq | /r/LocalLLaMA/comments/1lzfhhq/apple_will_seriously_consider_buying_mistral/ | false | false | 543 | {'enabled': True, 'images': [{'id': 'QahBk6E1a44oImzdfLzsg1gZn_nkm-ZxKpD4RbU9wnc', 'resolutions': [{'height': 90, 'url': 'https://preview.redd.it/syyfccpldscf1.jpeg?width=108&crop=smart&auto=webp&s=89bbf9ebe8ad920a9decea85a04e8ddddce9143e', 'width': 108}, {'height': 180, 'url': 'https://preview.redd.it/syyfccpldscf1.jp... | ||
Looking for affordable dedicated GPUs (A100, H100) outside AWS? | 0 | We’ve launched dedicated GPU clusters (India & US zones) with no waitlist. Mostly serving inference, fine-tuning, and SDXL use cases.
* A100 / H100 / L40S
* Hourly or monthly billing
* Accessible via REST or container
If anyone needs GPUs for open-source models, happy to offer test credits on [cyfuture.ai](https://cyfutur... | 2025-07-14T06:41:32 | https://www.reddit.com/r/LocalLLaMA/comments/1lzfdiw/looking_for_affordable_dedicated_gpus_a100_h100/ | No_Trash_9030 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzfdiw | false | null | t3_1lzfdiw | /r/LocalLLaMA/comments/1lzfdiw/looking_for_affordable_dedicated_gpus_a100_h100/ | false | false | self | 0 | null |
XTTS v2 model, Chatterbox on MacBook Air 8 GB | 1 | I am trying to do voice dubbing, but since I started I have not been able to achieve audible output... The videos are in English; I transcribe them in English, then I translate the text into French, but when I try to have the translated text read out by the text-to-speech it gives me a bunch of gibberish. I am asking myself... | 2025-07-14T06:30:01 | https://www.reddit.com/r/LocalLLaMA/comments/1lzf6zi/xttsv2_model_chatterbox_on_macbook_air_8_gb/ | Layonkizungu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzf6zi | false | null | t3_1lzf6zi | /r/LocalLLaMA/comments/1lzf6zi/xttsv2_model_chatterbox_on_macbook_air_8_gb/ | false | false | self | 1 | null |
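Gibberish at the TTS stage is often just a language-code mismatch rather than a model problem. A minimal Coqui XTTS v2 sketch with the output language pinned to French (file paths are placeholders; whether this fixes the poster's specific case is an assumption):
```
# Sketch: XTTS v2 with the language argument set explicitly to "fr".
# Leaving it at English while feeding French text tends to produce gibberish.
from TTS.api import TTS

tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")
tts.tts_to_file(
    text="Bonjour, ceci est un test de doublage.",
    speaker_wav="reference_voice.wav",  # short clip of the target voice
    language="fr",
    file_path="out_fr.wav",
)
```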
Can VRAM of 2 brands be combined? | 9 | Just starting out with AI and ComfyUI, using a 7900XTX 24GB. It's not going as smoothly as I had hoped, so now I want to buy an NVIDIA GPU with 24GB.
Q: Can I use only the NVIDIA card for compute, with the VRAM of both cards combined? Do both cards need to have the same amount of VRAM? | 2025-07-14T05:20:33 | https://www.reddit.com/r/LocalLLaMA/comments/1lze20x/can_vram_be_combined_of_2_brands/ | tonyleungnl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lze20x | false | null | t3_1lze20x | /r/LocalLLaMA/comments/1lze20x/can_vram_be_combined_of_2_brands/ | false | false | self | 9 | null |
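For LLM inference specifically (this does not apply to ComfyUI image generation), llama.cpp built with the Vulkan backend can usually enumerate AMD and NVIDIA cards together and split a model's layers across them. A minimal llama-cpp-python sketch under that assumption (paths and split ratios are placeholders):
```
# Sketch: splitting layers across two 24GB cards with llama.cpp.
# Assumes a Vulkan-enabled build that sees both GPUs; values are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="model.gguf",
    n_gpu_layers=-1,          # offload all layers
    tensor_split=[0.5, 0.5],  # roughly half the layers per card
)
print(llm("Hello", max_tokens=8)["choices"][0]["text"])
```
The cards do not need matching VRAM; the split ratio just has to fit within each card's capacity.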
Diffusion model support in llama.cpp. | 142 | I was browsing the llama.cpp PRs and saw that Am17an has added diffusion model support in llama.cpp. It works. It's very cool to watch it do its thing. Make sure to use the --diffusion-visual flag. It's still a PR, but it has been approved, so it should be merged soon. | 2025-07-14T05:20:04 | https://github.com/ggml-org/llama.cpp/pull/14644 | fallingdowndizzyvr | github.com | 1970-01-01T00:00:00 | 0 | {} | 1lze1r3 | false | null | t3_1lze1r3 | /r/LocalLLaMA/comments/1lze1r3/diffusion_model_support_in_llamacpp/ | false | false | default | 142 | {'enabled': False, 'images': [{'id': 'X6RZ_QwBHXWQcBNsgWEp_Ow5ef9fjjqJddTY6M9a0cA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/X6RZ_QwBHXWQcBNsgWEp_Ow5ef9fjjqJddTY6M9a0cA.png?width=108&crop=smart&auto=webp&s=f661fc69800730ffa673f5aa97b47d5b9e191899', 'width': 108}, {'height': 108, 'url': 'h...
Practice PyTorch like LeetCode? (Also with cool LLM questions) | 18 | I created [**TorchLeet**](https://github.com/Exorust/TorchLeet)! It's a collection of PyTorch and LLM problems inspired by real convos with researchers, engineers, and interview prep.
It’s split into:
* **PyTorch Problems** (Basic → Hard): CNNs, RNNs, transformers, autograd, distributed training, explainability
* **L... | 2025-07-14T05:07:20 | https://www.reddit.com/r/LocalLLaMA/comments/1lzdu0l/practice_pytorch_like_leetcode_also_with_cool_llm/ | exorust_fire | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzdu0l | false | null | t3_1lzdu0l | /r/LocalLLaMA/comments/1lzdu0l/practice_pytorch_like_leetcode_also_with_cool_llm/ | false | false | self | 18 | {'enabled': False, 'images': [{'id': 'nQQojR8L8YtcMGN-GeV05jPYf4tBYwES0RlHjNkNq40', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/nQQojR8L8YtcMGN-GeV05jPYf4tBYwES0RlHjNkNq40.png?width=108&crop=smart&auto=webp&s=33bbaed8c03eedf7c1200615b0ca7a0d1815ce63', 'width': 108}, {'height': 108, 'url': 'h... |
Will there be a frontend for FishSpeech? | 2 | I've been looking all over for a frontend to simplify the use of FishSpeech, but I haven't been able to find one.
I was wondering if anyone has found one ... | 2025-07-14T04:45:30 | https://www.reddit.com/r/LocalLLaMA/comments/1lzdgc8/there_will_be_some_frontend_for_fishspeech/ | vk3r | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzdgc8 | false | null | t3_1lzdgc8 | /r/LocalLLaMA/comments/1lzdgc8/there_will_be_some_frontend_for_fishspeech/ | false | false | self | 2 | null |
Kimi-K2 is a DeepSeek V3 with more experts | 221 | Based on their config.json, it is essentially a DeepSeek V3 with more experts (384 vs 256). The number of attention heads is reduced from 128 to 64, and the number of dense layers from 3 to 1:
|Model|dense layer#|MoE layer#|shared|active/routed|Active|Params|Active%|fp16 kv@128k|kv%|
|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|
|DeepSeek-MoE... | 2025-07-14T04:12:33 | https://www.reddit.com/r/LocalLLaMA/comments/1lzcuom/kimik2_is_a_deepseek_v3_with_more_experts/ | Ok_Warning2146 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzcuom | false | null | t3_1lzcuom | /r/LocalLLaMA/comments/1lzcuom/kimik2_is_a_deepseek_v3_with_more_experts/ | false | false | self | 221 | null |
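A minimal sketch of how the comparison above can be reproduced from the two models' config.json files. The field names follow the DeepSeek-V3-style configs (n_routed_experts, first_k_dense_replace, ...); treat them as assumptions to verify against the actual files:
```
# Sketch: diffing the architecture fields behind the table above.
import json

FIELDS = ["n_routed_experts", "n_shared_experts", "num_experts_per_tok",
          "num_attention_heads", "first_k_dense_replace", "num_hidden_layers"]

def arch(path: str) -> dict:
    with open(path) as f:
        cfg = json.load(f)
    return {k: cfg.get(k) for k in FIELDS}

for name in ("deepseek_v3_config.json", "kimi_k2_config.json"):  # local copies
    print(name, arch(name))
```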
Best local solution for tool calling? | 1 | I've been using Claude Code and Gemini for coding, but also for doing things on my local system like files and so on.
I want to use my local 5090 for speed but also for privacy.
I would need a reasoning model and tool calling to handle at least the following tasks (see the schema sketch after this list):
* Read File
* List files
* Move File
\... | 2025-07-14T04:00:26 | https://www.reddit.com/r/LocalLLaMA/comments/1lzcmep/best_local_solution_for_tool_calling/ | MidnightProgrammer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzcmep | false | null | t3_1lzcmep | /r/LocalLLaMA/comments/1lzcmep/best_local_solution_for_tool_calling/ | false | false | self | 1 | null |
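A minimal sketch of the file tools above as an OpenAI-style tools array, which is the shape most local OpenAI-compatible servers accept for function calling (the schemas are illustrative, not a specific product's API):
```
# Sketch: the requested file operations expressed as function schemas
# that a tool-calling model can route on.
tools = [
    {"type": "function", "function": {
        "name": "read_file",
        "parameters": {"type": "object",
                       "properties": {"path": {"type": "string"}},
                       "required": ["path"]}}},
    {"type": "function", "function": {
        "name": "list_files",
        "parameters": {"type": "object",
                       "properties": {"directory": {"type": "string"}},
                       "required": ["directory"]}}},
    {"type": "function", "function": {
        "name": "move_file",
        "parameters": {"type": "object",
                       "properties": {"src": {"type": "string"},
                                      "dst": {"type": "string"}},
                       "required": ["src", "dst"]}}},
]
```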
7/13 Update on UI/UX Benchmark: hit 25K, more models like Kimi K2, filter open source vs closed source | 1 | [removed] | 2025-07-14T03:51:29 | https://www.reddit.com/gallery/1lzcgbs | adviceguru25 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1lzcgbs | false | null | t3_1lzcgbs | /r/LocalLLaMA/comments/1lzcgbs/713_update_on_uiux_benchmark_hit_25k_more_models/ | false | false | 1 | null | |
7/13 Update on Design Arena: hit 25K, more models like Kimi K2, filter open source vs closed source | 1 | [removed] | 2025-07-14T03:48:59 | https://www.reddit.com/gallery/1lzceok | adviceguru25 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1lzceok | false | null | t3_1lzceok | /r/LocalLLaMA/comments/1lzceok/713_update_on_design_arena_hit_25k_more_models/ | false | false | 1 | null | |
What kind of rig would you build with a 5k budget for local LLM? | 0 | What would you build with that? Does it get you something entry-level, mid-range, or top-tier (consumer grade)?
Or does it make sense to step up to 10k? Where does the incremental benefit start to diminish significantly as the budget increases? | 2025-07-14T02:50:03 | https://www.reddit.com/r/LocalLLaMA/comments/1lzbadq/what_kind_of_rig_would_you_build_with_a_5k_budget/ | songhaegyo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzbadq | false | null | t3_1lzbadq | /r/LocalLLaMA/comments/1lzbadq/what_kind_of_rig_would_you_build_with_a_5k_budget/ | false | false | self | 0 | null |
Building “Auto-Analyst” — A data analytics AI agentic system. LLM-agnostic, can be used locally | 0 | 2025-07-14T02:50:01 | https://www.firebird-technologies.com/p/building-auto-analyst-a-data-analytics | phicreative1997 | firebird-technologies.com | 1970-01-01T00:00:00 | 0 | {} | 1lzbad8 | false | null | t3_1lzbad8 | /r/LocalLLaMA/comments/1lzbad8/building_autoanalyst_a_data_analytics_ai_agentic/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'vEddjlfKSwBkY0H7dWZ5csxyd4YIiVuMU7XBlW3xFQ0', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/vEddjlfKSwBkY0H7dWZ5csxyd4YIiVuMU7XBlW3xFQ0.jpeg?width=108&crop=smart&auto=webp&s=a7b0b8070f32c6abef26fb7d071c276a90b8747b', 'width': 108}, {'height': 121, 'url': '...
Any actual alternative to GPT-4o or Claude? | 3 | I'm looking for something I can run locally that's actually close to GPT-4o or Claude in terms of quality.
Kinda tight on money right now so I can't afford gpt plus or claude pro :/
I have to write a bunch of posts throughout the day, and the free gpt-4o hits its limit way too fast.
Is there anything similar out... | 2025-07-14T02:45:54 | https://www.reddit.com/r/LocalLLaMA/comments/1lzb7fh/any_actual_alternative_to_gpt4o_or_claude/ | Dragonacious | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzb7fh | false | null | t3_1lzb7fh | /r/LocalLLaMA/comments/1lzb7fh/any_actual_alternative_to_gpt4o_or_claude/ | false | false | self | 3 | null |
Fine-tuning / RL post training for tool calling | 2 | Has anyone read any good papers on RFT / RL techniques for finetuning "reasoning" models for tool calling? I'm really interested in learning more. I have read this paper
https://arxiv.org/html/2412.16849v1
-- but I really don't have a good lay of the land regarding this space. | 2025-07-14T02:35:32 | https://www.reddit.com/r/LocalLLaMA/comments/1lzb04f/finetuning_rl_post_training_for_tool_calling/ | soorg_nalyd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lzb04f | false | null | t3_1lzb04f | /r/LocalLLaMA/comments/1lzb04f/finetuning_rl_post_training_for_tool_calling/ | false | false | self | 2 | null |
Training an LLM only on books from the 1800s - no modern bias | 812 | Hi, I'm working on something that I haven't seen anyone else do before: I trained nanoGPT on only books from a specific time period and region of the world. I chose to do 1800-1850 London. My dataset was only 187MB (around 50 books). Right now the trained model produces random incoherent sentences, but they do kind of feel... | 2025-07-14T02:16:53 | https://github.com/haykgrigo3/TimeCapsuleLLM | Remarkable-Trick-177 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1lzampg | false | null | t3_1lzampg | /r/LocalLLaMA/comments/1lzampg/training_an_llm_only_on_books_from_the_1800s_no/ | false | false | | 812 | {'enabled': False, 'images': [{'id': 'AiV4MAC3PG2xPq-j4g8nw6ZuH5_-LU4f7enDtlyUQUo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/AiV4MAC3PG2xPq-j4g8nw6ZuH5_-LU4f7enDtlyUQUo.png?width=108&crop=smart&auto=webp&s=d44946a1ed6e59d2f29fe66c42efbdf9beadf176', 'width': 108}, {'height': 108, 'url': 'h...
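The data prep for a project like this is the part nanoGPT leaves to the user. A minimal char-level sketch of turning a folder of period texts into the train/val token files nanoGPT reads with np.memmap (the directory and output paths are placeholders; the linked repo's actual script may differ):
```
# Sketch: concatenate the corpus, build a char-level vocab, and dump
# uint16 token IDs in the .bin format nanoGPT's training loop expects.
import numpy as np
from pathlib import Path

text = "\n".join(p.read_text(encoding="utf-8")
                 for p in sorted(Path("books_1800_1850").glob("*.txt")))

chars = sorted(set(text))
stoi = {ch: i for i, ch in enumerate(chars)}
ids = np.array([stoi[c] for c in text], dtype=np.uint16)

n = int(0.9 * len(ids))
ids[:n].tofile("train.bin")
ids[n:].tofile("val.bin")
```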