| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
China Launches Its First 6nm GPUs For Gaming & AI, the Lisuan 7G106 12 GB & 7G105 24 GB, Up To 24 TFLOPs, Faster Than RTX 4060 In Synthetic Benchmarks & Even Runs Black Myth Wukong at 4K High With Playable FPS | 336 | 2025-07-26T12:43:09 | https://wccftech.com/china-launches-first-6nm-gpus-gaming-ai-lisuan-7g106-12-gb-7g105-24-gb-faster-than-rtx-4060-black-myth-wukong-4k/ | _SYSTEM_ADMIN_MOD_ | wccftech.com | 1970-01-01T00:00:00 | 0 | {} | 1m9sejp | false | null | t3_1m9sejp | /r/LocalLLaMA/comments/1m9sejp/china_launches_its_first_6nm_gpus_for_gaming_ai/ | false | false | default | 336 | {'enabled': False, 'images': [{'id': 'ndA4D3Bxcv5zL5g_5UzRsufAG8LnRSelzBStGecNkUc', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/ndA4D3Bxcv5zL5g_5UzRsufAG8LnRSelzBStGecNkUc.png?width=108&crop=smart&auto=webp&s=4fa57b5ade6205fa6dcb63d3ab64af600f2a30bb', 'width': 108}, {'height': 123, 'url': 'h... | |
New model on lmarena called summit? | 3 | I know zenith is allegedly an openai or kimi model, but I've not found anything about summit? | 2025-07-26T12:37:46 | https://www.reddit.com/r/LocalLLaMA/comments/1m9saif/new_model_on_lmarena_called_summit/ | Hereitisguys9888 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9saif | false | null | t3_1m9saif | /r/LocalLLaMA/comments/1m9saif/new_model_on_lmarena_called_summit/ | false | false | self | 3 | null |
I don’t have a compatible cpu to run it locally | 0 | But it works for me still. I am wondering what the difference is between one that is “compatible” or not? | 2025-07-26T12:29:27 | https://www.reddit.com/r/LocalLLaMA/comments/1m9s4dz/i_dont_have_a_compatible_cpu_to_run_it_locally/ | XiRw | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9s4dz | false | null | t3_1m9s4dz | /r/LocalLLaMA/comments/1m9s4dz/i_dont_have_a_compatible_cpu_to_run_it_locally/ | false | false | self | 0 | null |
Qwen's Wan 2.2 is coming soon | 437 | Demo of Video & Image Generation Model Wan 2.2: https://x.com/Alibaba_Wan/status/1948436898965586297?t=mUt2wu38SSM4q77WDHjh2w&s=19 | 2025-07-26T12:26:56 | Fun-Doctor6855 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m9s2nt | false | null | t3_1m9s2nt | /r/LocalLLaMA/comments/1m9s2nt/qwens_wan_22_is_coming_soon/ | false | false | 437 | {'enabled': True, 'images': [{'id': 'Xqg6uFhh27I2vmCZoMI0hRqtO8XTqhzv8rJWvsIB-IQ', 'resolutions': [{'height': 192, 'url': 'https://preview.redd.it/mtc9shncp7ff1.jpeg?width=108&crop=smart&auto=webp&s=b7350719926edf7a7fd256eaf4a9562c633f266e', 'width': 108}, {'height': 384, 'url': 'https://preview.redd.it/mtc9shncp7ff1.j... | ||
Yann LeCun being sidelined at Meta. RIP for open-weight models | 0 | Alexandr Wang is appointing a new chief AI scientist and pushing for closed-source, closed-weight models
| 2025-07-26T12:24:26 | NeedleworkerDull7886 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m9s0zk | false | null | t3_1m9s0zk | /r/LocalLLaMA/comments/1m9s0zk/yann_lecun_being_sidelined_at_meta_rip_for_open/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '7su5jpowo7ff1', 'resolutions': [{'height': 208, 'url': 'https://preview.redd.it/7su5jpowo7ff1.jpeg?width=108&crop=smart&auto=webp&s=286e47eb2f5107928dbb1c3c407da3ae0ffc9906', 'width': 108}, {'height': 416, 'url': 'https://preview.redd.it/7su5jpowo7ff1.jpeg?width=216&crop=smart&auto=... | |
Me after getting excited by a new model release and checking on Hugging Face if I can run it locally. | 819 | 2025-07-26T12:09:41 | alew3 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m9rqxa | false | null | t3_1m9rqxa | /r/LocalLLaMA/comments/1m9rqxa/me_after_getting_excited_by_a_new_model_release/ | false | false | default | 819 | {'enabled': True, 'images': [{'id': '0tnbd1i9m7ff1', 'resolutions': [{'height': 98, 'url': 'https://preview.redd.it/0tnbd1i9m7ff1.png?width=108&crop=smart&auto=webp&s=6793b3d09acfa0d71fd64ec0893a75e4685dc3e5', 'width': 108}, {'height': 197, 'url': 'https://preview.redd.it/0tnbd1i9m7ff1.png?width=216&crop=smart&auto=web... | ||
When picking the model for production use, what criteria do you use? | 2 | I mostly compare models on 3-4 benchmarks: MMLU, MMLU Pro, and GPQA to gauge knowledge, and IFEval to determine whether they follow instructions well (does it help predict structured-output generation? let me know)
The reason is that these are the most tested benchmarks; they appear far more often than other be... | 2025-07-26T12:07:25 | https://www.reddit.com/r/LocalLLaMA/comments/1m9rpgf/when_picking_the_model_for_production_use_what/ | dheetoo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9rpgf | false | null | t3_1m9rpgf | /r/LocalLLaMA/comments/1m9rpgf/when_picking_the_model_for_production_use_what/ | false | false | self | 2 | null |
Need help understanding GPU VRAM pooling – can I combine VRAM across GPUs? | 3 | So I know GPUs can be “connected” (like via NVLink or just multiple GPUs in one system), but can their **VRAM** be **combined**?
Here’s my use case: I have two GTX 1060 6GB cards, and theoretically together they give me 12GB of VRAM.
**Question** – can I run a model (like an LLM or SDXL) that requires more than 6... | 2025-07-26T12:03:28 | https://www.reddit.com/r/LocalLLaMA/comments/1m9rmry/need_help_understanding_gpu_vram_pooling_can_i/ | Recent-Bother5388 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9rmry | false | null | t3_1m9rmry | /r/LocalLLaMA/comments/1m9rmry/need_help_understanding_gpu_vram_pooling_can_i/ | false | false | self | 3 | null |
Tips for improving my ollama setup? - Ryzen 5 3600/ RTX 3060 12GB VRAM / 64 GB RAM - Qwen3-30B-A3B | 0 | Hi LLM Folks,
TL/DR: I'm seeking tips for improving my ollama setup with Qwen3, deepseek and nomic-embed for home sized LLM instance.
I'm in the LLM game for a couple of weeks now and still learning something new every day. I have an ollama instance on my Ryzen workstation running Debian and control it with a Leno... | 2025-07-26T11:55:46 | https://www.reddit.com/r/LocalLLaMA/comments/1m9rhgf/tips_for_improving_my_ollama_setup_ryzen_5_3600/ | Speedy-Wonder | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9rhgf | false | null | t3_1m9rhgf | /r/LocalLLaMA/comments/1m9rhgf/tips_for_improving_my_ollama_setup_ryzen_5_3600/ | false | false | self | 0 | null |
Qwen 3 235B A22B Instruct 2507 shows that non-thinking models can be great at reasoning as well | 116 | https://livebench.ai/#/?Reasoning=as | 2025-07-26T11:48:11 | Balance- | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m9rcg2 | false | null | t3_1m9rcg2 | /r/LocalLLaMA/comments/1m9rcg2/qwen_3_235b_a22b_instruct_2507_shows_that/ | false | false | default | 116 | {'enabled': True, 'images': [{'id': 'l0xpzivfi7ff1', 'resolutions': [{'height': 82, 'url': 'https://preview.redd.it/l0xpzivfi7ff1.jpeg?width=108&crop=smart&auto=webp&s=a79dc4aae316a4b1c5bac9214638942aa59adbe5', 'width': 108}, {'height': 164, 'url': 'https://preview.redd.it/l0xpzivfi7ff1.jpeg?width=216&crop=smart&auto=w... | |
inclusionAI/Ming-Lite-Omni-1.5 (20B-A3B) | 74 | 2025-07-26T11:37:21 | https://huggingface.co/inclusionAI/Ming-Lite-Omni-1.5 | nullmove | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1m9r5gb | false | null | t3_1m9r5gb | /r/LocalLLaMA/comments/1m9r5gb/inclusionaimingliteomni15_20ba3b/ | false | false | default | 74 | {'enabled': False, 'images': [{'id': 'kZ6CPulsiZVEkuZbgsg3cE-lOoTedyZKjDbNCwJ-EdU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/kZ6CPulsiZVEkuZbgsg3cE-lOoTedyZKjDbNCwJ-EdU.png?width=108&crop=smart&auto=webp&s=ed71e7fb6260e756d34c5f3d89d86364dcdc1a0f', 'width': 108}, {'height': 116, 'url': 'h... | |
Has Anyone been able to generate multimodal embedddings using Visualized_BGE? | 2 | I am taking help from this
[https://milvus.io/docs/multimodal\_rag\_with\_milvus.md](https://milvus.io/docs/multimodal_rag_with_milvus.md)
But the line *from FlagEmbedding.visual.modeling import Visualized\_BGE* is not working.
Any suggestions? | 2025-07-26T11:17:48 | https://www.reddit.com/r/LocalLLaMA/comments/1m9qtco/has_anyone_been_able_to_generate_multimodal/ | IndependentTough5729 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9qtco | false | null | t3_1m9qtco | /r/LocalLLaMA/comments/1m9qtco/has_anyone_been_able_to_generate_multimodal/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': '-kJYNtkMxFG7yxLj34KN0ir3jR3pm0SmVkmbat3uUjw', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/-kJYNtkMxFG7yxLj34KN0ir3jR3pm0SmVkmbat3uUjw.png?width=108&crop=smart&auto=webp&s=158fcd61955ed3f82f8bdccf6dcfca497a8fb0fb', 'width': 108}, {'height': 112, 'url': 'h... |
Think tags missing in Qwen3-235B-A22B-Thinking-2507 | 6 | It seems the updated model doesn’t enclose thinking in <think></think> tags.
Which means you can’t collapse thinking window in gui apps like LM studio. | 2025-07-26T11:17:31 | https://www.reddit.com/r/LocalLLaMA/comments/1m9qt65/think_tags_missing_in_qwen3235ba22bthinking2507/ | No_Conversation9561 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9qt65 | false | null | t3_1m9qt65 | /r/LocalLLaMA/comments/1m9qt65/think_tags_missing_in_qwen3235ba22bthinking2507/ | false | false | self | 6 | null |
I love Google Gemini but | 0 | I always use Gemini but I like to have some fun for a minute and did this. Lol. I logically compared it to other leading open weight llm and let it say that because in the first try it denied it so I felt some anger lol. | 2025-07-26T11:05:05 | darkpigvirus | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m9qlpv | false | null | t3_1m9qlpv | /r/LocalLLaMA/comments/1m9qlpv/i_love_google_gemini_but/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '8gejtouqa7ff1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/8gejtouqa7ff1.jpeg?width=108&crop=smart&auto=webp&s=42d737d848583a662bb8c80f9e4abfc31c479180', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/8gejtouqa7ff1.jpeg?width=216&crop=smart&auto=... | |
Cluster idea for MoE | 0 | Here is a crazy idea and I am wondering if it might work. My LLM thinks it will :-)
The idea is to have a shared server with GPU and up to 8 expert servers. Those would be physical servers each with a dedicated 100 Gbps link to the shared server. The shared server could be with Nvidia 5090 and the expert servers could... | 2025-07-26T10:44:00 | https://www.reddit.com/r/LocalLLaMA/comments/1m9q92z/cluster_idea_for_moe/ | Baldur-Norddahl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9q92z | false | null | t3_1m9q92z | /r/LocalLLaMA/comments/1m9q92z/cluster_idea_for_moe/ | false | false | self | 0 | null |
Has anyone created a table of collated benchmark results of many LLMs | 4 | There have been many models released this year already and have lost track of which models are better and for what.
**Does anyone have some resource or spreadsheet that collates the results of many models on many benchmarks?**
I'm slightly more interested in open-weights model results, but I think it's important to h... | 2025-07-26T09:40:02 | https://www.reddit.com/r/LocalLLaMA/comments/1m9p9kg/has_anyone_created_a_table_of_collated_benchmark/ | JawGBoi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9p9kg | false | null | t3_1m9p9kg | /r/LocalLLaMA/comments/1m9p9kg/has_anyone_created_a_table_of_collated_benchmark/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': '4mltMPuoIx8FMW1Y60sN3h17rSzmD3D6Q8bt5cARpzY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/4mltMPuoIx8FMW1Y60sN3h17rSzmD3D6Q8bt5cARpzY.png?width=108&crop=smart&auto=webp&s=01b76279937283a6f65fb2f22bacafcfd57327f0', 'width': 108}, {'height': 116, 'url': 'h... |
Merged Lora adaptor Model Giving Gibberish as response. Using Llama 3.2 3B instruct. Dataset trained on Nebius Ai studio. What to do? | 4 | I have a small dataset which I trained on Nebius AI Studio and downloaded the files. I then merged the Llama 3.2-3B Instruct model and the LoRA adapter for it. But when I converted it to GGUF and loaded it in koboldcpp for a test, it is giving me this. I am new to all this, so if anyone needs more information to know the err... | 2025-07-26T09:35:58 | rihuwamidori | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m9p7bb | false | null | t3_1m9p7bb | /r/LocalLLaMA/comments/1m9p7bb/merged_lora_adaptor_model_giving_gibberish_as/ | false | false | default | 4 | {'enabled': True, 'images': [{'id': 'oj9iadphu6ff1', 'resolutions': [{'height': 32, 'url': 'https://preview.redd.it/oj9iadphu6ff1.png?width=108&crop=smart&auto=webp&s=a9204efaa1a75ec8eb30cd5a6b003e3499b37518', 'width': 108}, {'height': 64, 'url': 'https://preview.redd.it/oj9iadphu6ff1.png?width=216&crop=smart&auto=webp... |
LLM (esp. MoE) inference profiling : is it a thing and if not, why not ? | 2 | I was thinking about what to offload with --override-tensor and was thinking that instead of guessing, measuring would be best.
For MoE, I presume that all non shared experts don't have the same odds of activation for a given specific task / corpus. To optimize program compilation, one can instrument the generated cod... | 2025-07-26T08:12:18 | https://www.reddit.com/r/LocalLLaMA/comments/1m9nyk4/llm_esp_moe_inference_profiling_is_it_a_thing_and/ | un_passant | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9nyk4 | false | null | t3_1m9nyk4 | /r/LocalLLaMA/comments/1m9nyk4/llm_esp_moe_inference_profiling_is_it_a_thing_and/ | false | false | self | 2 | null |
Best way to manage context/notes locally for API usage while optimizing token costs? | 1 | trying to optimize how i load relevant context into new chats (mostly claude api). currently have hundreds of structured documents/notes but manual selection is getting inefficient.
current workflow: manually pick relevant docs > paste into new conversation > often end up with redundant context or miss relevant stuff ... | 2025-07-26T08:08:40 | https://www.reddit.com/r/LocalLLaMA/comments/1m9nwk7/best_way_to_manage_contextnotes_locally_for_api/ | boomerdaycare | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9nwk7 | false | null | t3_1m9nwk7 | /r/LocalLLaMA/comments/1m9nwk7/best_way_to_manage_contextnotes_locally_for_api/ | false | false | self | 1 | null |
Thoughts on Qwen3 235B A22B Instruct 2507? | 31 | I've been using the model (at FP8) for the past few days and it feels pretty solid for discussing ideas with and for using it as a code agent (I mostly use Qwen's CLI).
Has anyone else been using this model recently? If you have, do you think it's decent for its size or are there better options? | 2025-07-26T08:03:54 | https://www.reddit.com/r/LocalLLaMA/comments/1m9nu0j/thoughts_on_qwen3_235b_a22b_instruct_2507/ | random-tomato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9nu0j | false | null | t3_1m9nu0j | /r/LocalLLaMA/comments/1m9nu0j/thoughts_on_qwen3_235b_a22b_instruct_2507/ | false | false | self | 31 | null |
How can I leverage my 14b qwen & qwen coder models effectively? | 0 | I want to use my local LLM more effectively and reduce my dependency on online LLMs.
I am currently using it for automatic social media content generation & posting.
Currently I am still using online models for research, deep research & coding.
But with effective use of tools and MCP, a 14B model can do more.
... | 2025-07-26T06:51:26 | https://www.reddit.com/r/LocalLLaMA/comments/1m9mp67/how_can_i_leverage_my_14b_qwen_qwen_coder_models/ | InsideResolve4517 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9mp67 | false | null | t3_1m9mp67 | /r/LocalLLaMA/comments/1m9mp67/how_can_i_leverage_my_14b_qwen_qwen_coder_models/ | false | false | self | 0 | null |
Prompt processing on two gpus with a large difference in power | 1 | [removed] | 2025-07-26T06:45:31 | https://www.reddit.com/r/LocalLLaMA/comments/1m9mlq4/prompt_processing_on_two_gpus_with_a_large/ | opoot_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9mlq4 | false | null | t3_1m9mlq4 | /r/LocalLLaMA/comments/1m9mlq4/prompt_processing_on_two_gpus_with_a_large/ | false | false | self | 1 | null |
webbigdata/VoiceCore: Japanese voice version of canopylabs/orpheus-tts | 23 | I'd like to introduce a high-quality Japanese TTS that I've created through continued pre-training and post-training of Orpheus.
[https://huggingface.co/webbigdata/VoiceCore](https://huggingface.co/webbigdata/VoiceCore)
Findings for those who are trying to create TTS in languages other than English
I t... | 2025-07-26T06:44:19 | https://www.reddit.com/r/LocalLLaMA/comments/1m9ml0y/webbigdatavoicecore_japanese_voice_version_of/ | dahara111 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9ml0y | false | null | t3_1m9ml0y | /r/LocalLLaMA/comments/1m9ml0y/webbigdatavoicecore_japanese_voice_version_of/ | false | false | self | 23 | {'enabled': False, 'images': [{'id': 'uFYNIoBfPPAfuk5C31nO4L2HdlD3pLd1JhqRMfIL6NQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/uFYNIoBfPPAfuk5C31nO4L2HdlD3pLd1JhqRMfIL6NQ.png?width=108&crop=smart&auto=webp&s=8af70d876318a37e56084cd5d2f690d937ab70c1', 'width': 108}, {'height': 116, 'url': 'h... |
Why isn't/Is there a natural language search interface for Everything from voidtools? | 2 | Windows would be unusable for me without Everything. I have over a hundred terabytes of data which I search in an instant using this tool every day, across multiple NASes, and I've yet to find anything that rivals Everything, even on Mac or Linux.
But I just wish there was an llm implementation which can take this func... | 2025-07-26T06:31:51 | https://www.reddit.com/r/LocalLLaMA/comments/1m9mdtj/why_isntis_there_a_natural_language_search/ | CystralSkye | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9mdtj | false | null | t3_1m9mdtj | /r/LocalLLaMA/comments/1m9mdtj/why_isntis_there_a_natural_language_search/ | false | false | self | 2 | null |
Intern S1 released | 209 | 2025-07-26T06:22:44 | https://huggingface.co/internlm/Intern-S1 | kristaller486 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1m9m8gw | false | null | t3_1m9m8gw | /r/LocalLLaMA/comments/1m9m8gw/intern_s1_released/ | false | false | default | 209 | {'enabled': False, 'images': [{'id': 'KDOV-l-4x9DaDOy6Wn8A2D83piXwjocjNoMmig3HZJc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/KDOV-l-4x9DaDOy6Wn8A2D83piXwjocjNoMmig3HZJc.png?width=108&crop=smart&auto=webp&s=2f6f29e5e55376adfc2f0755333ec7f296f87f98', 'width': 108}, {'height': 116, 'url': 'h... | |
We discovered an approach to train any AI agent with RL, with (almost) zero code changes. | 128 | Hey r/LocalLLaMA,
My team and I, like many of you, have been deep in the agent-building rabbit hole. It's one thing to build a cool proof-of-concept with a framework like LangGraph. It's a completely different beast to make that agent actually *learn* and get better over time.
We got tired of the friction, so we star... | 2025-07-26T06:18:55 | https://www.reddit.com/r/LocalLLaMA/comments/1m9m670/we_discovered_an_approach_to_train_any_ai_agent/ | matluster | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9m670 | false | null | t3_1m9m670 | /r/LocalLLaMA/comments/1m9m670/we_discovered_an_approach_to_train_any_ai_agent/ | false | false | 128 | {'enabled': False, 'images': [{'id': 'Qb-FyRzMVnNh5wmBlbGJQmNh976iEvgFgQ1wpwkFR3U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Qb-FyRzMVnNh5wmBlbGJQmNh976iEvgFgQ1wpwkFR3U.png?width=108&crop=smart&auto=webp&s=57900143d22c1c24bc7123fead5fbf0b41f12a0b', 'width': 108}, {'height': 108, 'url': 'h... | |
Newbie Thought: Why Isn’t There a “CivitAI for Local LLM Assistants”? | 3 | So I’m still new to the local LLM rabbit hole (finally getting my footing), but something keeps bugging me.
With diffusion models, we’ve got CivitAI — clean galleries, LoRAs, prompts, styles, full user setups, all sorted and shareable. But with local LLMs… where’s the equivalent?
I keep seeing awesome threads about p... | 2025-07-26T04:32:33 | https://www.reddit.com/r/LocalLLaMA/comments/1m9kc7c/newbie_thought_why_isnt_there_a_civitai_for_local/ | dedreo58 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9kc7c | false | null | t3_1m9kc7c | /r/LocalLLaMA/comments/1m9kc7c/newbie_thought_why_isnt_there_a_civitai_for_local/ | false | false | self | 3 | null |
Which nocode tools would you suggest for this setup? Additionally is there a good guide on how to setup chat interface in angular/react with open AI? | 0 | Setup
* Frontend: Angular or React
* Backend: Django + celery + redis
* Database: postgresql
* IDE: Cursor
* UI/UX: Lovable
* Hosting: Azure or AWS ( based on connivence )
* Docker, Gitlab
* LLM: OpenAI | 2025-07-26T04:02:15 | https://www.reddit.com/r/LocalLLaMA/comments/1m9js7n/which_nocode_tools_would_you_suggest_for_this/ | Notalabel_4566 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9js7n | false | null | t3_1m9js7n | /r/LocalLLaMA/comments/1m9js7n/which_nocode_tools_would_you_suggest_for_this/ | false | false | self | 0 | null |
16Gb vram python coder | 3 | What is my current best choice for running a LLM that can write python code for me?
Only got a 5070 TI 16GB VRAM | 2025-07-26T03:51:09 | https://www.reddit.com/r/LocalLLaMA/comments/1m9jkm4/16gb_vram_python_coder/ | Galahad56 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9jkm4 | false | null | t3_1m9jkm4 | /r/LocalLLaMA/comments/1m9jkm4/16gb_vram_python_coder/ | false | false | self | 3 | null |
qwen3-30b-a3b has fallen into infinite consent for function calling | 4 | > 1. first scene: function calling by `openai/gpt-4o-mini`, and it immediately succeeded
> 2. second scene: function calling by `qwen3/qwen3-30b-a3b`, but failing
Trying function calling with the `qwen3-30b-a3b` model via the OpenAI SDK, but it falls into an infinite consent loop for the function calling.
It seems like that rather ... | 2025-07-26T03:49:32 | https://v.redd.it/e4ctqouo05ff1 | jhnam88 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m9jjh3 | false | {'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/e4ctqouo05ff1/DASHPlaylist.mpd?a=1756093785%2CZDlhNjVmYzM5MDg0NTg4YjcyMzEzMTU1YzdmNTBlOTRlOTQ0YzVkNGYyOGJkYjI1Mzk5NjQ3YTExOTdkYmM3OA%3D%3D&v=1&f=sd', 'duration': 285, 'fallback_url': 'https://v.redd.it/e4ctqouo05ff1/DASH_480.mp4?source=fallback', 'h... | t3_1m9jjh3 | /r/LocalLLaMA/comments/1m9jjh3/qwen330ba3b_has_fallen_into_infinite_consent_for/ | false | false | 4 | {'enabled': False, 'images': [{'id': 'dHRobXpvdW8wNWZmMXAdkagH1RH-Rofv-tT1TDOd9yMWaQux6xCPhEcnytZW', 'resolutions': [{'height': 197, 'url': 'https://external-preview.redd.it/dHRobXpvdW8wNWZmMXAdkagH1RH-Rofv-tT1TDOd9yMWaQux6xCPhEcnytZW.png?width=108&crop=smart&format=pjpg&auto=webp&s=512b79bc41a12be0f6908d9df928bdf9cb57... | |
Question on MOE expert swapping | 0 | Even if one expert cluster(?) active set is only 23 to 35 GB's based on two recent one's I've seen what might the working set be in terms of number of expert needed and how often would swapping happen? I'm looking at MOE up over 230B in size. If I'm writing python web server, the javascript/html/css side, stable diff... | 2025-07-26T03:22:38 | https://www.reddit.com/r/LocalLLaMA/comments/1m9j1mh/question_on_moe_expert_swapping/ | Guilty-History-9249 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9j1mh | false | null | t3_1m9j1mh | /r/LocalLLaMA/comments/1m9j1mh/question_on_moe_expert_swapping/ | false | false | self | 0 | null |
Langfuse- Clarification Needed: RBAC Features in Open Source vs Enterprise Edition | 1 | Our team is evaluating Langfuse for production use with multiple clients, and we need clear clarification on which RBAC (Role-Based Access Control) features are included in the MIT licensed open source version versus what requires an Enterprise license.
Team members are arguing whether RBAC requires Enterprise license... | 2025-07-26T03:20:20 | https://www.reddit.com/r/LocalLLaMA/comments/1m9j02q/langfuse_clarification_needed_rbac_features_in/ | SuitableMushroom6767 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9j02q | false | null | t3_1m9j02q | /r/LocalLLaMA/comments/1m9j02q/langfuse_clarification_needed_rbac_features_in/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'd5P0bIzzQrrGsJJnmQ2gYK6M4pX7tG0yjbunsQN1TOI', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/d5P0bIzzQrrGsJJnmQ2gYK6M4pX7tG0yjbunsQN1TOI.png?width=108&crop=smart&auto=webp&s=f5933305a7f87e8b68f406b8678ee86288526ed5', 'width': 108}, {'height': 216, 'url': '... |
My 7985WX, dual 5090's, and 256GB of DDR5-6000 have landed. | 13 | I was told trying to run non-tiny LLMs on a CPU was unusable. But I got 8.3 tokens/sec for qwen2.5-coder-32b-instruct Q8 without using the GPU, and 38.6 tokens/sec using both 5090's. Note, I'm getting barely 48% processing usage on the 5090's and wondering what I can do to improve that.
Llama.cpp thread affinity seems ... | 2025-07-26T03:11:00 | https://www.reddit.com/r/LocalLLaMA/comments/1m9itnz/my_7985wx_dual_5090s_and_256gbs_of_ddr56000_has/ | Guilty-History-9249 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9itnz | false | null | t3_1m9itnz | /r/LocalLLaMA/comments/1m9itnz/my_7985wx_dual_5090s_and_256gbs_of_ddr56000_has/ | false | false | self | 13 | {'enabled': False, 'images': [{'id': 'Dk44WxUWzwyFVy1Yr9Co1zKqM1MqmFo8Qo97aSXpZNs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Dk44WxUWzwyFVy1Yr9Co1zKqM1MqmFo8Qo97aSXpZNs.png?width=108&crop=smart&auto=webp&s=45e7c8c14055c57d8c62dad0b150faa3212ce087', 'width': 108}, {'height': 116, 'url': 'h... |
A demo of a long-running LLM agent solution with state persistence. | 0 | Hi guys, I built this solution to keep your AI agent stateful and long-running. When your agent crashes, Agentainer will auto-recover it, and your agent can pick up where it left off and continue from there.
I appreciate any feedback; good or bad are both welcome!
[Agentainer demo](https://reddit.com/link... | 2025-07-26T02:42:35 | https://www.reddit.com/r/LocalLLaMA/comments/1m9ia1t/a_demo_of_long_running_llm_agent_solution_with/ | Tradingoso | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9ia1t | false | null | t3_1m9ia1t | /r/LocalLLaMA/comments/1m9ia1t/a_demo_of_long_running_llm_agent_solution_with/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 's9XzlMF_cDm-NOLznql2db8w7vd9IkAxkKbNyxaBqUY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/s9XzlMF_cDm-NOLznql2db8w7vd9IkAxkKbNyxaBqUY.png?width=108&crop=smart&auto=webp&s=7e1fe9df371f9708d871f7113cc6aa4cdd3fc7a0', 'width': 108}, {'height': 108, 'url': 'h... |
There has been a lot of efforts in the past to improve quantization due to the size of dense models… are we likely to see improvements like pruning and/or distillation with the uprise of huge MoEs? | 17 | It seems much effort was spent to improve quantization by the community trying to fit a dense model in VRAM so it didn’t tick along at 2 tokens a second. Many even bought multiple cards to have more VRAM.
Now many new models are MoEs, where the average Joe sits hopelessly at his computer with a couple of consumer car... | 2025-07-26T02:25:36 | https://www.reddit.com/r/LocalLLaMA/comments/1m9hycx/there_has_been_a_lot_of_efforts_in_the_past_to/ | silenceimpaired | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9hycx | false | null | t3_1m9hycx | /r/LocalLLaMA/comments/1m9hycx/there_has_been_a_lot_of_efforts_in_the_past_to/ | false | false | self | 17 | null |
There's a new Kimi model on lmarena called Zenith and it's really really good. It might be Kimi K2 with reasoning | 82 | 2025-07-26T02:12:06 | balianone | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m9holp | false | null | t3_1m9holp | /r/LocalLLaMA/comments/1m9holp/theres_a_new_kimi_model_on_lmarena_called_zenith/ | false | false | default | 82 | {'enabled': True, 'images': [{'id': '4rtvhn7mn4ff1', 'resolutions': [{'height': 104, 'url': 'https://preview.redd.it/4rtvhn7mn4ff1.jpeg?width=108&crop=smart&auto=webp&s=6c91a40e7d4e46be8e7afdf9a2ec643fc2e6b539', 'width': 108}, {'height': 208, 'url': 'https://preview.redd.it/4rtvhn7mn4ff1.jpeg?width=216&crop=smart&auto=... | ||
XXX chat bot -- how are they doing it? | 1 | Hey everyone -- instagram's 'naughty neighbor' AI chat bots are pretty incredible.
I just tried recreating one with Ollama and Open WebUI, but it was really slow and not a female chat bot -- just kind of a massive dictionary.
Is it possible to recreate this instagram chat bot experience from a selfhosted level?... | 2025-07-26T01:50:18 | https://www.reddit.com/r/LocalLLaMA/comments/1m9h9b0/xxx_chat_bot_how_are_they_doing_it/ | naffhouse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9h9b0 | false | null | t3_1m9h9b0 | /r/LocalLLaMA/comments/1m9h9b0/xxx_chat_bot_how_are_they_doing_it/ | false | false | self | 1 | null |
Nvidia released Llama Nemotron Super v1.5 | 154 | 📣 Announcing Llama Nemotron Super v1.5 📣
This release pushes the boundaries of reasoning model capabilities at the weight class of the model and is ready to power agentic applications from individual developers, all the way to enterprise applications.
📈 The Llama Nemotron Super v1.5 achieves leading reasoning acc... | 2025-07-26T01:36:28 | ResearchCrafty1804 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m9gzl7 | false | null | t3_1m9gzl7 | /r/LocalLLaMA/comments/1m9gzl7/nvidia_released_llama_nemotron_super_v15/ | false | false | default | 154 | {'enabled': True, 'images': [{'id': 'yl29obvah4ff1', 'resolutions': [{'height': 49, 'url': 'https://preview.redd.it/yl29obvah4ff1.jpeg?width=108&crop=smart&auto=webp&s=8c9b0f8c1eca8a47c7d775c89f5c127aa3db8d6f', 'width': 108}, {'height': 98, 'url': 'https://preview.redd.it/yl29obvah4ff1.jpeg?width=216&crop=smart&auto=we... | |
AMD ROCm 7 Installation & Test Guide / Fedora Linux RX 9070 - ComfyUI Blender LMStudio SDNext Flux | 4 | 2025-07-26T01:26:06 | https://youtube.com/watch?v=7qDlHpeTmC0&si=abStnvRLk3lAT1FW | B4rr3l | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1m9gs61 | false | {'oembed': {'author_name': 'Open Game and Development', 'author_url': 'https://www.youtube.com/@OpenGameDev', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/7qDlHpeTmC0?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-m... | t3_1m9gs61 | /r/LocalLLaMA/comments/1m9gs61/amd_rocm_7_installation_test_guide_fedora_linux/ | false | false | default | 4 | {'enabled': False, 'images': [{'id': 'H1S_ghyDQg-VMEV8JXw6B9oG5nmC_pqvozEXErkEtwg', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/H1S_ghyDQg-VMEV8JXw6B9oG5nmC_pqvozEXErkEtwg.jpeg?width=108&crop=smart&auto=webp&s=4a1c36e02a84367d85741b43e8eacc884b42ec24', 'width': 108}, {'height': 162, 'url': '... | |
If you’re experimenting with Qwen3-Coder, we just launched a Turbo version on DeepInfra | 0 | ⚡ 2× faster
💸 $0.30 / $1.20 per Mtoken
✅ Nearly identical performance (\~1% delta)
Perfect for agentic workflows, tool use, and browser tasks.
Also, if you’re deploying open models or curious about real-time usage at scale, we just started [r/DeepInfra](https://www.reddit.com/r/DeepInfra) to track new model launch... | 2025-07-26T01:09:42 | https://www.reddit.com/r/LocalLLaMA/comments/1m9gg6j/if_youre_experimenting_with_qwen3coder_we_just/ | deepinfra | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9gg6j | false | null | t3_1m9gg6j | /r/LocalLLaMA/comments/1m9gg6j/if_youre_experimenting_with_qwen3coder_we_just/ | false | false | self | 0 | null |
GLM-4.5-9B? | 55 | With the release of GLM-4.5 and GLM-4.5-Air (both large MoE models), Zhipu has mentioned that they are also considering upgrading their 9B model if there’s enough community interest in a small model.
This potential small model would be much more accessible than the planned GLM-4.5 models which would likely be far too ... | 2025-07-26T00:40:31 | https://www.reddit.com/r/LocalLLaMA/comments/1m9fuf9/glm459b/ | mrfakename0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9fuf9 | false | null | t3_1m9fuf9 | /r/LocalLLaMA/comments/1m9fuf9/glm459b/ | false | false | self | 55 | null |
Llama 3.3 Nemotron Super 49B v1.5 | 245 | 2025-07-26T00:15:19 | https://huggingface.co/nvidia/Llama-3_3-Nemotron-Super-49B-v1_5 | TheLocalDrummer | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1m9fb5t | false | null | t3_1m9fb5t | /r/LocalLLaMA/comments/1m9fb5t/llama_33_nemotron_super_49b_v15/ | false | false | default | 245 | {'enabled': False, 'images': [{'id': 'Dk44WxUWzwyFVy1Yr9Co1zKqM1MqmFo8Qo97aSXpZNs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Dk44WxUWzwyFVy1Yr9Co1zKqM1MqmFo8Qo97aSXpZNs.png?width=108&crop=smart&auto=webp&s=45e7c8c14055c57d8c62dad0b150faa3212ce087', 'width': 108}, {'height': 116, 'url': 'h... | |
I built a calculator that shows the real energy & carbon cost of LLM inference | 0 | 2025-07-26T00:11:00 | https://www.calculeai.com/ | TerrificMist | calculeai.com | 1970-01-01T00:00:00 | 0 | {} | 1m9f7v8 | false | null | t3_1m9f7v8 | /r/LocalLLaMA/comments/1m9f7v8/i_built_a_calculator_that_shows_the_real_energy/ | false | false | default | 0 | null | |
Reka AI models support in uzu engine | 56 | Hey, we recently added support for Reka's AI models in uzu engine. Pretty nice model. It shows good performance across all tasks and is truly open source. I was able to get almost 16 t/s on my Mac Studio with the Ultra chip. Highly recommend trying it. | 2025-07-26T00:10:41 | https://www.reddit.com/gallery/1m9f7lq | darkolorin | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1m9f7lq | false | null | t3_1m9f7lq | /r/LocalLLaMA/comments/1m9f7lq/reka_ai_models_support_in_uzu_engine/ | false | false | 56 | null |
Local LLMs I have been using, through two different backends, seem to hardly use the GPU | 1 | I have an RTX 3060 in my i7 PC. Checking Task Manager, it has been using about 75% CPU, 55% RAM, and 1% GPU (although it will jump up to 48% and then plummet back to 1% after about a second). I have used Ooba and KoboldCpp, which use the llama.cpp server and KoboldCpp (of course) respectively. I have tried playing... | 2025-07-25T23:52:47 | https://www.reddit.com/r/LocalLLaMA/comments/1m9etng/local_llms_i_have_been_using_through_different/ | theshadowraven | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9etng | false | null | t3_1m9etng | /r/LocalLLaMA/comments/1m9etng/local_llms_i_have_been_using_through_different/ | false | false | self | 1 | null |
China's ByteDance's coze studio is now open source | 132 | 2025-07-25T23:45:24 | https://github.com/coze-dev/coze-studio | Fun-Doctor6855 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1m9enpd | false | null | t3_1m9enpd | /r/LocalLLaMA/comments/1m9enpd/chinas_bytedances_coze_studio_is_now_open_source/ | false | false | default | 132 | {'enabled': False, 'images': [{'id': 'Mo-pW95aHUUItyNealwmLfsGlo6U3Y4QzRLyAXEeZ2Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Mo-pW95aHUUItyNealwmLfsGlo6U3Y4QzRLyAXEeZ2Y.png?width=108&crop=smart&auto=webp&s=f1bebfe8068f1cdc71db48800cd8cedda3dc840b', 'width': 108}, {'height': 108, 'url': 'h... | |
📣 Announcing Llama Nemotron Super v1.5 to Build More Accurate and Efficient AI Agents | 1 | 📣 Announcing Llama Nemotron Super v1.5 📣 This release pushes the boundaries of reasoning model capabilities at the weight class of the model and is ready to power agentic applications from individual developers, all the way to enterprise applications.
📈 The Llama Nemotron Super v1.5 achieves leading reasoning accur... | 2025-07-25T23:39:36 | https://www.reddit.com/r/LocalLLaMA/comments/1m9ej1i/announcing_llama_nemotron_super_v15_to_build_more/ | PDXcoder2000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9ej1i | false | null | t3_1m9ej1i | /r/LocalLLaMA/comments/1m9ej1i/announcing_llama_nemotron_super_v15_to_build_more/ | false | false | 1 | null | |
Anyone stitched together real-time local AI for webcam + voice feedback? | 1 | A friend’s messing with the idea of setting up a camera in his garage gym to watch his lifts, give form feedback, count reps, maybe even talk to him in real time.
Needs to be actually real-time tho, like not 5s delay, and ideally configurable too.
Anyone know what models or pipelines would work best for this? Thinkin... | 2025-07-25T23:36:32 | https://www.reddit.com/r/LocalLLaMA/comments/1m9egm9/anyone_stitched_together_realtime_local_ai_for/ | Weary-Wing-6806 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9egm9 | false | null | t3_1m9egm9 | /r/LocalLLaMA/comments/1m9egm9/anyone_stitched_together_realtime_local_ai_for/ | false | false | self | 1 | null |
App for voice interaction with LocalLLaMA. Looking for help/app/model etc. | 2 | Hi All,
I have been self hosting Ollama and mostly just use it to throw random questions or helping me dumb down a complex topic to answer a question my daughter asks.
The one thing I love about ChatGPT/Gemini is the ability to voice chat back and forth.
Is there a easy to use mobile/desktop app and model combo that... | 2025-07-25T23:24:38 | https://www.reddit.com/r/LocalLLaMA/comments/1m9e71s/app_for_voice_interaction_with_localllama_looking/ | Dark_Mesh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9e71s | false | null | t3_1m9e71s | /r/LocalLLaMA/comments/1m9e71s/app_for_voice_interaction_with_localllama_looking/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'JQJYWP9EtIyW64HOx_ngOVbE5TF6SXekcj5FkVZaVII', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/JQJYWP9EtIyW64HOx_ngOVbE5TF6SXekcj5FkVZaVII.png?width=108&crop=smart&auto=webp&s=279a09b67459be926a08944e6c9ea50312a63a5f', 'width': 108}, {'height': 113, 'url': 'h... |
Are LLMs, particularly the local open-source models, capable of having their own opinions and preferences without them being programmed ones | 0 | I have been curious about this so, I wanted to know what the community thought. Do you all have any evidence to back it up one way or the other? If it depends on the model or the model size in parameters, how much is necessary? I wonder since, I've seen some "system prompts", (like one that is supposedly Meta AI's syst... | 2025-07-25T23:22:41 | https://www.reddit.com/r/LocalLLaMA/comments/1m9e5hw/are_llms_particularly_the_local_opensource_models/ | theshadowraven | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9e5hw | false | null | t3_1m9e5hw | /r/LocalLLaMA/comments/1m9e5hw/are_llms_particularly_the_local_opensource_models/ | false | false | self | 0 | null |
Laptop advice for lightweight AI work | 2 | Given: 14-inch MacBook Pro (M4 Pro, 48GB unified memory, 1TB SSD)
What kind of local LLMs can I run?
What’s your experience?
Can I run mistral, Gemma, phi, or models 7b or 13b, etc. params?
Thanks! | 2025-07-25T23:19:21 | https://www.reddit.com/r/LocalLLaMA/comments/1m9e2s9/laptop_advise_for_lightweight_ai_work/ | entered_apprentice | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9e2s9 | false | null | t3_1m9e2s9 | /r/LocalLLaMA/comments/1m9e2s9/laptop_advise_for_lightweight_ai_work/ | false | false | self | 2 | null |
Best models to fine-tune? | 2 | There's so many models, which one to train?
Does it depend on the kind of output I need like text or code or format / structure?
And how long does training take on what hardware?
5060 ti, A100, 5090, any information.
Thank you | 2025-07-25T23:14:36 | https://www.reddit.com/r/LocalLLaMA/comments/1m9dysd/best_models_to_finetune/ | zekuden | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9dysd | false | null | t3_1m9dysd | /r/LocalLLaMA/comments/1m9dysd/best_models_to_finetune/ | false | false | self | 2 | null |
AI training tool I want to share! | 1 | I’ve been working on a small tool to make it easier to extract high-quality transcripts from YouTube videos. I think it will be useful for AI trainers and dataset builders who want to build language datasets from online content.
So I will be giving away a beta tester account that will have infinite credits until launc... | 2025-07-25T23:09:48 | https://www.reddit.com/r/LocalLLaMA/comments/1m9duv2/ai_training_tool_i_want_to_share/ | Enough_Patient1904 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9duv2 | false | null | t3_1m9duv2 | /r/LocalLLaMA/comments/1m9duv2/ai_training_tool_i_want_to_share/ | false | false | self | 1 | null |
Dissatisfied with how the RTX PRO 6000 Blackwell is performing during AI inference | 0 | I was contemplating buying an RTX PRO 6000 Blackwell, but after conducting some research on [YouTube](https://youtu.be/bAao58hXo9w?si=F1vCh4gSJxrgYqo2&t=832), I was disappointed with its performance. The prompt processing speed didn't meet my expectations, and token generation decreased notably when context was added. ... | 2025-07-25T23:09:39 | https://www.reddit.com/r/LocalLLaMA/comments/1m9dur7/dissatisfied_with_how_the_rtx_pro_6000_blackwell/ | d00m_sayer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9dur7 | false | null | t3_1m9dur7 | /r/LocalLLaMA/comments/1m9dur7/dissatisfied_with_how_the_rtx_pro_6000_blackwell/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'Bg367mTzk4i869SXNHiikTP1RjxNeVibkNvwuMAACGo', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/Bg367mTzk4i869SXNHiikTP1RjxNeVibkNvwuMAACGo.jpeg?width=108&crop=smart&auto=webp&s=d8af79e10d322a89c0acdc8c1e95b10d83feae63', 'width': 108}, {'height': 162, 'url': '... |
IQ4_KSS 114 GiB and more ik_llama.cpp exclusive quants! | 44 | Just finished uploading and perplexity testing some new ik\_llama.cpp quants. Despite the random github takedown (and subsequent restoring) ik\_llama.cpp is going strong!
ik just refreshed the IQ4\_KSS 4.0 bpw non-linear quantization for faster performance and great perplexity so this quant hits a sweet spot at \~114G... | 2025-07-25T22:19:23 | https://huggingface.co/ubergarm/Qwen3-235B-A22B-Thinking-2507-GGUF | VoidAlchemy | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1m9cp2n | false | null | t3_1m9cp2n | /r/LocalLLaMA/comments/1m9cp2n/iq4_kss_114_gib_and_more_ik_llamacpp_exclusive/ | false | false | default | 44 | {'enabled': False, 'images': [{'id': 'MiPkIenpbCsl-SvscocvypyhekEQtz60LZSjLkl-I3E', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/MiPkIenpbCsl-SvscocvypyhekEQtz60LZSjLkl-I3E.png?width=108&crop=smart&auto=webp&s=a69f6d9c0fc9888f89ac3d1e39ffbc13bdff2b89', 'width': 108}, {'height': 116, 'url': 'h... |
Multi GPU multi server inference | 4 | Was thinking how to scale a GPU cluster. Not talking about CPUs here.
I've usually heard "buy an Epyc and add 6-8 GPUs to it", but that's it then; it won't scale any further.
But now that I have learned how to use vLLM, and it can utilize multi GPU and also multi server GPUs, was thinking what if creating a cluster with f... | 2025-07-25T22:08:58 | https://www.reddit.com/r/LocalLLaMA/comments/1m9cg16/multi_gpu_multi_server_inference/ | Rich_Artist_8327 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9cg16 | false | null | t3_1m9cg16 | /r/LocalLLaMA/comments/1m9cg16/multi_gpu_multi_server_inference/ | false | false | self | 4 | null |
Has anyone found a seamless, low-latency solution for real-time audio conversations with a local LLM? | 5 | I've been following the progress of local LLMs for a while and I'm really interested in setting up a system for a natural, real-time audio conversation. I've seen some posts here discussing solutions that involve piping together speech-to-text, the LLM, and text-to-speech.
I'm curious to know if anyone has found or bu... | 2025-07-25T22:01:14 | https://www.reddit.com/r/LocalLLaMA/comments/1m9c9fh/has_anyone_found_a_seamless_lowlatency_solution/ | Far_Buyer_7281 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9c9fh | false | null | t3_1m9c9fh | /r/LocalLLaMA/comments/1m9c9fh/has_anyone_found_a_seamless_lowlatency_solution/ | false | false | self | 5 | null |
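A minimal sketch of the round-trip loop such a setup needs, with hypothetical stand-ins for each stage (the function names here are assumptions, not any project's actual API; in a real pipeline they might wrap a local STT model, a llama.cpp server, and a local TTS engine):

```python
import time

def transcribe(audio_chunk: bytes) -> str:
    # Placeholder STT: a real implementation would stream audio to a model.
    return "what was my squat depth like"

def generate_reply(prompt: str) -> str:
    # Placeholder LLM call: a real implementation would hit a local server.
    return f"Answering: {prompt}"

def synthesize(text: str) -> bytes:
    # Placeholder TTS: a real implementation would return PCM audio.
    return text.encode("utf-8")

def round_trip(audio_chunk: bytes) -> tuple[bytes, float]:
    """Run one STT -> LLM -> TTS cycle and report wall-clock latency."""
    start = time.perf_counter()
    text = transcribe(audio_chunk)
    reply = generate_reply(text)
    audio_out = synthesize(reply)
    return audio_out, time.perf_counter() - start

audio_out, latency = round_trip(b"\x00" * 320)
print(latency)
```

With real models, keeping that measured latency under roughly a second is the hard part; streaming partial tokens from the LLM into the TTS stage is the usual trick.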
Need Help, llama.cpp cutting off responses. | 1 | [removed] | 2025-07-25T21:52:02 | Worth_Ad9031 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m9c1fb | false | null | t3_1m9c1fb | /r/LocalLLaMA/comments/1m9c1fb/need_help_llamacpp_cutting_off_responses/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'c28inxhmc3ff1', 'resolutions': [{'height': 132, 'url': 'https://preview.redd.it/c28inxhmc3ff1.png?width=108&crop=smart&auto=webp&s=b09a2ab403e097b2cbea97da1a6e78adef38e08c', 'width': 108}, {'height': 265, 'url': 'https://preview.redd.it/c28inxhmc3ff1.png?width=216&crop=smart&auto=we... | |
Compact 2x RTX Pro 6000 Rig | 171 | Finally put together my rig after months of planning into a NAS case
* Threadripper PRO 7955WX
* Arctic Freezer 4U-M (cpu cooler)
* Gigabyte TRX50 AI TOP
* be quiet! Dark Power Pro 13 1600W
* JONSBO N5 Case
* 2x RTX Pro 6000
Might add a few more intake fans on the top | 2025-07-25T21:46:33 | shadowninjaz3 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m9bwoy | false | null | t3_1m9bwoy | /r/LocalLLaMA/comments/1m9bwoy/compact_2x_rtx_pro_6000_rig/ | false | false | 171 | {'enabled': True, 'images': [{'id': '75hqb9t8X6IAo2hcG2QxH9eY348BDsA6al0LQJShjQE', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/tbteu4v5b3ff1.jpeg?width=108&crop=smart&auto=webp&s=cb8530b9905fee9ce384e441b1786c16863f92ac', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/tbteu4v5b3ff1.jp... | ||
Who should we ask for funding? | 1 | My friend and I have been working for a while on an architecture that doesn't use attention, but due to limited hardware, progress has been slow. What companies or people should we reach out to? We aren't looking for much, maybe a thousand dollars, and would be glad to make a contract with someone for publishing rights of the LLM i... | 2025-07-25T21:36:58 | https://www.reddit.com/r/LocalLLaMA/comments/1m9boeu/who_should_we_ask_for_funding/ | Commercial-Ad-1148 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9boeu | false | null | t3_1m9boeu | /r/LocalLLaMA/comments/1m9boeu/who_should_we_ask_for_funding/ | false | false | self | 1 | null |
Is /load <model> all you need in order to run the specific model you installed? | 0 | After starting Ollama and doing the ollama run <model> how do you know if it’s running that specific model or if it’s still using the default that comes with ollama? Do you just need the run code for it to work, the load command, or both? | 2025-07-25T21:35:01 | https://www.reddit.com/r/LocalLLaMA/comments/1m9bmqm/is_load_model_all_you_need_in_order_to_run_the/ | XiRw | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9bmqm | false | null | t3_1m9bmqm | /r/LocalLLaMA/comments/1m9bmqm/is_load_model_all_you_need_in_order_to_run_the/ | false | false | self | 0 | null |
The new Kimi vs. new qwen3 for coding | 3 | Anyone run the q4ks versions of these, which one is winning for code generation... Too early for consensus yet? Thx | 2025-07-25T21:04:07 | https://www.reddit.com/r/LocalLLaMA/comments/1m9avmv/the_new_kimi_vs_new_qwen3_for_coding/ | Agreeable-Prompt-666 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9avmv | false | null | t3_1m9avmv | /r/LocalLLaMA/comments/1m9avmv/the_new_kimi_vs_new_qwen3_for_coding/ | false | false | self | 3 | null |
How to get started | 0 | I’m looking to get started at self hosting an LLM but have no experience with this.
What I am looking for is:
An LLM that I can explore with code, ideally if I can link it in with some folders on my MacBook Pro M4, and then also on a server, the servers will be getting GPUs mounted soon.
I ideally want to be able ... | 2025-07-25T20:55:33 | https://www.reddit.com/r/LocalLLaMA/comments/1m9antc/how_to_get_started/ | theonethatownz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9antc | false | null | t3_1m9antc | /r/LocalLLaMA/comments/1m9antc/how_to_get_started/ | false | false | self | 0 | null |
Any Rpers test the new qwen 2507 yet? | 16 | Curious how the two new thinking/non thinking stack up vs deepseek. | 2025-07-25T20:50:45 | https://www.reddit.com/r/LocalLLaMA/comments/1m9ajf9/any_rpers_test_the_new_qwen_2507_yet/ | Antique_Bit_1049 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9ajf9 | false | null | t3_1m9ajf9 | /r/LocalLLaMA/comments/1m9ajf9/any_rpers_test_the_new_qwen_2507_yet/ | false | false | self | 16 | null |
Looking for a text-to-text model | 0 | Hello,
I'm looking for a **13B** model, in **GGUF** format, that is:
* **uncensored** (no "safety filters", no refused topics),
* with a **very good command of French** (native or near-native),
* compatible with **Text Generation WebUI** (I use llama.cpp),
* and that runs well on my **GPU RTX 4... | 2025-07-25T20:34:51 | https://www.reddit.com/r/LocalLLaMA/comments/1m9a554/je_cherche_un_modèle_texttotext/ | Kball76 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9a554 | false | null | t3_1m9a554 | /r/LocalLLaMA/comments/1m9a554/je_cherche_un_modèle_texttotext/ | false | false | nsfw | 0 | null |
Any AI tool for application creation (not website builders)? | 3 | In the market right now, there’s an ocean of no‑code and low‑code platforms shouting about how they “let you build anything.”
But let’s be real, most of them are just website builders with a fancier skin.
I’ve used tools like Lovable, Bolt, Fire Studio.
They are simple, but they still feel like the low‑end spectrum... | 2025-07-25T20:26:54 | https://www.reddit.com/r/LocalLLaMA/comments/1m99xty/any_ai_tool_for_application_creation_not_website/ | Delicious_Track6230 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m99xty | false | null | t3_1m99xty | /r/LocalLLaMA/comments/1m99xty/any_ai_tool_for_application_creation_not_website/ | false | false | self | 3 | null |
What is the best AI to run locally and use in agent mode of the Continue extension in VS Code? | 2 | My config:
Ryzen 5 5500, 16Gb, RTX 3060 12Gb
| 2025-07-25T20:09:07 | https://www.reddit.com/r/LocalLLaMA/comments/1m99hwb/what_is_the_best_ai_to_run_locally_and_use_in/ | Ikelven | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m99hwb | false | null | t3_1m99hwb | /r/LocalLLaMA/comments/1m99hwb/what_is_the_best_ai_to_run_locally_and_use_in/ | false | false | self | 2 | null |
Is there any way to run Phi-4-mini-flash-reasoning on Ollama? | 0 | Phi-4-mini-flash-reasoning isn't in the Ollama repository, and in huggingface there are .safetensors files, as the architecture of this new model is called SambaY (some Mamba variant) this may complicate things with regard to converting it to GGUF or some other format, I would like to run the model with no modification... | 2025-07-25T20:00:46 | https://www.reddit.com/r/LocalLLaMA/comments/1m99ac7/is_there_any_way_to_run_phi4miniflashreasoning_on/ | WowSkaro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m99ac7 | false | null | t3_1m99ac7 | /r/LocalLLaMA/comments/1m99ac7/is_there_any_way_to_run_phi4miniflashreasoning_on/ | false | false | self | 0 | null |
Meta AI on WhatsApp hides a system prompt | 1,148 | While using Meta AI on WhatsApp, I noticed it starts with a hidden system prompt. It’s not visible in the chat, and if you ask it to repeat the first message or what you said, it denies anything exists.
After some attempts, I managed to get it to reveal the hidden prompt:
>You are an expert conversationalist made by ... | 2025-07-25T19:30:58 | https://www.reddit.com/gallery/1m98jl8 | ALE5SI0 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1m98jl8 | false | null | t3_1m98jl8 | /r/LocalLLaMA/comments/1m98jl8/meta_ai_on_whatsapp_hides_a_system_prompt/ | false | false | 1,148 | null | |
How to convert Kimi K2 FP8 to BF16? | 0 | I downloaded the original FP8 version because I wanted to experiment with different quants and compare them, and also use my own imatrix for the best results for my use cases. For DeepSeek V3 and R1 this approach works very well, I can make use of imatrix data of my choice and select quantization parameters that I pref... | 2025-07-25T18:59:16 | https://www.reddit.com/r/LocalLLaMA/comments/1m97qko/how_to_convert_kimi_k2_fp8_to_bf16/ | Lissanro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m97qko | false | null | t3_1m97qko | /r/LocalLLaMA/comments/1m97qko/how_to_convert_kimi_k2_fp8_to_bf16/ | false | false | self | 0 | null |
Mi50 array for training LLMs | 2 | Ive been looking at buying a few mi50 32gb cards for my local training setup because they are absurdly affordable for the VRAM they have. I'm not too concerned with FLOP/s performance, as long as they have compatibility with a relatively modern pytorch and its dependencies.
I've seen people on here talking about this ... | 2025-07-25T18:26:51 | https://www.reddit.com/r/LocalLLaMA/comments/1m96wrc/mi50_array_for_training_llms/ | Used_Algae_1077 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m96wrc | false | null | t3_1m96wrc | /r/LocalLLaMA/comments/1m96wrc/mi50_array_for_training_llms/ | false | false | self | 2 | null |
How does LibreChat handle translations and how can I update all language files after changing base messages? | 2 | Hi everyone,
I'm working on a project using LibreChat, and I've noticed that it handles translations through `.ts` and `.md` files—one set per language. Each file contains over a thousand lines, so I assume these aren't written manually. There must be some kind of script or automation behind generating them.
I want ... | 2025-07-25T18:15:19 | https://www.reddit.com/r/LocalLLaMA/comments/1m96m6w/how_does_librechat_handle_translations_and_how/ | suribe06 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m96m6w | false | null | t3_1m96m6w | /r/LocalLLaMA/comments/1m96m6w/how_does_librechat_handle_translations_and_how/ | false | false | self | 2 | null |
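One hedged sketch of the sync automation the question is after, assuming a simplified layout of one flat JSON dictionary per language (LibreChat's real files are `.ts` modules, so this illustrates the merge logic, not its actual tooling): any key present in the base English file but missing elsewhere is copied over as an untranslated placeholder, and keys dropped from the base are pruned.

```python
import json
import tempfile
from pathlib import Path

def sync_locales(base_path: Path, locale_paths: list[Path]) -> None:
    base = json.loads(base_path.read_text(encoding="utf-8"))
    for path in locale_paths:
        locale = json.loads(path.read_text(encoding="utf-8"))
        # Keep existing translations, fill gaps from the base,
        # and drop keys that no longer exist in the base file.
        merged = {key: locale.get(key, base[key]) for key in base}
        path.write_text(json.dumps(merged, ensure_ascii=False, indent=2),
                        encoding="utf-8")

# Demo with temporary files standing in for the per-language files.
tmp = Path(tempfile.mkdtemp())
(tmp / "en.json").write_text(json.dumps({"greeting": "Hello", "bye": "Bye"}))
(tmp / "es.json").write_text(json.dumps({"greeting": "Hola"}))
sync_locales(tmp / "en.json", [tmp / "es.json"])
result = json.loads((tmp / "es.json").read_text())
print(result)
```

The untranslated English fallbacks can then be batch-translated (by hand or by an LLM pass) without touching keys that were already localized.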
Does it ever make sense to train for 10 epochs? Or did I do it all wrong? | 15 | I've been trying a lot of different combinations with static learning rates, and I have to set up test inference for every single epoch to determine the sweet spot, because I doubt that any automation that does not involve running two simultaneous LLMs will be able to accurately tell when the results are desirable. But... | 2025-07-25T18:03:20 | https://www.reddit.com/r/LocalLLaMA/comments/1m96b4h/does_it_ever_make_sense_to_train_for_10_epochs_or/ | BulkyPlay7704 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m96b4h | false | null | t3_1m96b4h | /r/LocalLLaMA/comments/1m96b4h/does_it_ever_make_sense_to_train_for_10_epochs_or/ | false | false | self | 15 | null |
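The per-epoch checkpoint comparison described in that post boils down to an argmin over a held-out metric, which is cheap to automate even if the final judgment stays manual; the loss numbers below are made up for illustration:

```python
# Rather than committing to a fixed epoch count, keep the per-epoch
# checkpoints and pick whichever minimizes a held-out metric
# (validation loss or perplexity). The eval function here is a stub;
# in practice it would run inference on a fixed validation set.

def evaluate_checkpoint(epoch: int) -> float:
    # Hypothetical validation losses: improves, then overfits.
    losses = [2.4, 1.9, 1.6, 1.5, 1.55, 1.7, 1.9, 2.2, 2.5, 2.8]
    return losses[epoch]

num_epochs = 10
scores = {epoch: evaluate_checkpoint(epoch) for epoch in range(num_epochs)}
best_epoch = min(scores, key=scores.get)
print(best_epoch)
```

A numeric metric like this narrows ten candidates down to one or two, after which a quick manual inference check on the shortlisted checkpoints is far less tedious than testing all ten.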
Conversational LLM | 1 | I'm trying to think of a conversational LLM
that won't hallucinate as the context (conversation history) grows.
The LLM should also hold a personality.
Any help is appreciated.
| 2025-07-25T18:00:56 | https://www.reddit.com/r/LocalLLaMA/comments/1m968q4/conversational_llm/ | backofthemind99 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m968q4 | false | null | t3_1m968q4 | /r/LocalLLaMA/comments/1m968q4/conversational_llm/ | false | false | self | 1 | null |
We built SnapThink — an offline AI notebook that runs LLMs, Python, and robot simulations on your laptop | 2 | [removed] | 2025-07-25T17:52:06 | https://www.reddit.com/r/LocalLLaMA/comments/1m960et/we_built_snapthink_an_offline_ai_notebook_that/ | TypicalPudding6190 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m960et | false | null | t3_1m960et | /r/LocalLLaMA/comments/1m960et/we_built_snapthink_an_offline_ai_notebook_that/ | false | false | self | 2 | null |
AMD equivalent for NVIDIA RTX 6000 PRO Blackwell | 5 | Is AMD working on any GPU which will compete with RTX 6000 PRO Blackwell in memory, compute, and price? Or one with higher VRAM but targeted at workstations? | 2025-07-25T17:47:51 | https://www.reddit.com/r/LocalLLaMA/comments/1m95wcg/amd_equivalent_for_nvidia_rtx_6000_pro_blackwell/ | s-s-a | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m95wcg | false | null | t3_1m95wcg | /r/LocalLLaMA/comments/1m95wcg/amd_equivalent_for_nvidia_rtx_6000_pro_blackwell/ | false | false | self | 5 | null |
What a Real MCP Inspector Exploit Taught Us About Trust Boundaries | 2 | 2025-07-25T17:43:36 | https://glama.ai/blog/2025-07-25-keeping-mcp-inspector-safe-lessons-from-cve-2025-49596 | No-Abies7108 | glama.ai | 1970-01-01T00:00:00 | 0 | {} | 1m95sdj | false | null | t3_1m95sdj | /r/LocalLLaMA/comments/1m95sdj/what_a_real_mcp_inspector_exploit_taught_us_about/ | false | false | default | 2 | {'enabled': False, 'images': [{'id': 'FySv24u1eil3dRo2Y1Z4jOxhVWhZAsG3ATu9GRHaX4U', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/FySv24u1eil3dRo2Y1Z4jOxhVWhZAsG3ATu9GRHaX4U.png?width=108&crop=smart&auto=webp&s=867c2e3c9c184ac0e64e1d2b897ce35fafe59ad3', 'width': 108}, {'height': 113, 'url': 'h... | |
MassGen – an open-source multi-agent scaling and orchestration framework | 3 | MassGen — an open-source multi-agent orchestration framework just launched. Supports cross-model collaboration (Grok, OpenAI, Claude, Gemini) with real-time streaming and consensus-building among agents. Inspired by "parallel study groups" and Grok Heavy.
[https://x.com/Chi\_Wang\_/status/1948790995694617036](https:/... | 2025-07-25T17:36:39 | https://www.reddit.com/r/LocalLLaMA/comments/1m95lud/massgen_an_opensource_multiagent_scaling_and/ | LifeUnderstanding732 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m95lud | false | null | t3_1m95lud | /r/LocalLLaMA/comments/1m95lud/massgen_an_opensource_multiagent_scaling_and/ | false | false | self | 3 | null |
Anyone had any luck with Google's Gemma 3n model? | 4 | Google released their Gemma 3n model about a month ago, and they've mentioned that it's meant to run efficiently on everyday devices, yet, from my experience it runs really slow on my Mac (base model M2 Mac mini from 2023 with only 8GB of RAM). I am aware that my small amount of RAM is very limiting in the space of loc... | 2025-07-25T17:25:42 | https://www.reddit.com/r/LocalLLaMA/comments/1m95bfq/anyone_had_any_luck_with_googles_gemma_3n_model/ | Junior-Ad-2186 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m95bfq | false | null | t3_1m95bfq | /r/LocalLLaMA/comments/1m95bfq/anyone_had_any_luck_with_googles_gemma_3n_model/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=108&crop=smart&auto=webp&s=3dc759de0e8fa36d241c5728d41ee3cf022cab96', 'width': 108}, {'height': 113, 'url': 'h... |
Is AI dialogue the future of gaming? | 8 | 2025-07-25T17:16:56 | https://v.redd.it/kwadvy7vz1ff1 | LandoRingel | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m9535b | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/kwadvy7vz1ff1/DASHPlaylist.mpd?a=1756055830%2CMzQ2N2U2ZDFmZDAyZDE1YTY1MmY1ZWIwMjQwMzM4NzhkYTQ5MjhiNzJlYmFkZTg1NmM4MWI5YmFlYTc2NjgyOA%3D%3D&v=1&f=sd', 'duration': 20, 'fallback_url': 'https://v.redd.it/kwadvy7vz1ff1/DASH_1080.mp4?source=fallback', 'h... | t3_1m9535b | /r/LocalLLaMA/comments/1m9535b/is_ai_dialogue_the_future_of_gaming/ | false | false | 8 | {'enabled': False, 'images': [{'id': 'bTlkc3N3N3Z6MWZmMV4R7_wqTqa4S-Em63VZNWlYLKqHqatiW2ePCfOcZ7Ue', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bTlkc3N3N3Z6MWZmMV4R7_wqTqa4S-Em63VZNWlYLKqHqatiW2ePCfOcZ7Ue.png?width=108&crop=smart&format=pjpg&auto=webp&s=425c4520d088cf30251cb1992b126dedf6800... | ||
New UI for uploading and managing custom models (Figma mockups) | 16 | Been working on a cleaner UI for uploading and managing custom models — here are some early Figma drafts of the connection flow and model details page. Still a work in progress, but I’d love to hear your thoughts!
For those who are new here: I’m building this platform as a solo pet project in my free time, and I’v... | 2025-07-25T17:07:49 | https://www.reddit.com/gallery/1m94uea | RIPT1D3_Z | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1m94uea | false | null | t3_1m94uea | /r/LocalLLaMA/comments/1m94uea/new_ui_for_uploading_and_managing_custom_models/ | false | false | 16 | null | |
Data shows public AI repos may be quietly becoming a supply chain risk | 0 | 2025-07-25T17:04:32 | https://blog.ramalama.com/data-shows-public-ai-repos-may-be-quietly-becoming-a-supply-chain-risk/ | ProfessionalHorse707 | blog.ramalama.com | 1970-01-01T00:00:00 | 0 | {} | 1m94r8i | false | null | t3_1m94r8i | /r/LocalLLaMA/comments/1m94r8i/data_shows_public_ai_repos_may_be_quietly/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'vqZUhGSzUZ1yMtf3yHj4ZJeonTZAkxlOGX5jxZNB2wI', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/vqZUhGSzUZ1yMtf3yHj4ZJeonTZAkxlOGX5jxZNB2wI.png?width=108&crop=smart&auto=webp&s=a1109f30db34ec61d476e4368773e74d29aadfb4', 'width': 108}, {'height': 216, 'url': '... | |
Hunyuan (Ex-WizardLM) Dense Model Coming Soon! | 89 | 2025-07-25T16:59:10 | https://github.com/ggml-org/llama.cpp/pull/14878 | TKGaming_11 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1m94ls2 | false | null | t3_1m94ls2 | /r/LocalLLaMA/comments/1m94ls2/hunyuan_exwizardlm_dense_model_coming_soon/ | false | false | default | 89 | {'enabled': False, 'images': [{'id': '9GvEX7LrasZYgCzEEJMA4dtKp0bfjGuzNOUm65ANRbI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bNc9o_V141ol7DNGS9rNwjmeP7fGaxe_CzkVBPbu-ec.jpg?width=108&crop=smart&auto=webp&s=f1baecdc2dfde1050ccffe3b9db6f4cad84a6a26', 'width': 108}, {'height': 108, 'url': 'h... | |
InternLM S1 Coming Soon! | 25 | 2025-07-25T16:58:05 | https://github.com/ggml-org/llama.cpp/pull/14875 | TKGaming_11 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1m94kqu | false | null | t3_1m94kqu | /r/LocalLLaMA/comments/1m94kqu/internlm_s1_coming_soon/ | false | false | 25 | {'enabled': False, 'images': [{'id': '68cqK-IIJ_iGWFIsXitEW7LUCmC67Kl-rAhspI3AU1c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/68cqK-IIJ_iGWFIsXitEW7LUCmC67Kl-rAhspI3AU1c.png?width=108&crop=smart&auto=webp&s=86cbc6625abbc12b1404c26848130ec7f4107ee3', 'width': 108}, {'height': 108, 'url': 'h... | ||
Would you use this? Desktop app for auto-benchmarking GGUF/ONNX models locally | 4 | I'm thinking of building a desktop app that helps you:
\- Detect your hardware (GPU, RAM, CPU)
\- Benchmark local AI models (GGUF/ONNX) automatically
\- Tell you which quant config runs best (Q4, Q5, etc.)
\- Show ratings like "This model is great for coding, 12 tok/s on 8GB RAM"
\- Launch models directly in one... | 2025-07-25T16:29:49 | https://www.reddit.com/r/LocalLLaMA/comments/1m93u0b/would_you_use_this_desktop_app_for/ | Conscious-Drive-1448 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m93u0b | false | null | t3_1m93u0b | /r/LocalLLaMA/comments/1m93u0b/would_you_use_this_desktop_app_for/ | false | false | self | 4 | null |
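The core of the proposed benchmark is just timing a generation call and dividing token count by elapsed time; a hedged sketch with a stub standing in for a real GGUF/ONNX backend:

```python
import time

def generate(prompt: str, max_tokens: int) -> list[str]:
    # Placeholder: a real app would call llama.cpp or an ONNX Runtime
    # session here and collect the emitted tokens.
    return ["tok"] * max_tokens

def benchmark(prompt: str, max_tokens: int = 128) -> float:
    """Return generation throughput in tokens per second."""
    start = time.perf_counter()
    tokens = generate(prompt, max_tokens)
    elapsed = time.perf_counter() - start
    return len(tokens) / max(elapsed, 1e-9)

tps = benchmark("Write a haiku about VRAM.")
print(tps)
```

Running this once per quant config (Q4, Q5, ...) and pairing the result with detected RAM/VRAM is enough to back ratings like "12 tok/s on 8GB RAM"; prompt processing speed would need a separate timer around the prefill step.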
Do you need Agno/Langchain/LangGraph with models with agentic capabilities? | 1 | I am a noob whose just beginning to fiddle around with models. Was testing out qwen 3 and trying to build an application using it + 2 tools (a web search function using tavily and a financial data retriever using yfinance). I ran into more bugs running an agno framework vs just commanding the system prompt to call the ... | 2025-07-25T16:20:37 | https://www.reddit.com/r/LocalLLaMA/comments/1m93lcs/do_you_need_agnolangchainlanggraph_with_models/ | demisincos | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m93lcs | false | null | t3_1m93lcs | /r/LocalLLaMA/comments/1m93lcs/do_you_need_agnolangchainlanggraph_with_models/ | false | false | self | 1 | null |
New Qwen3 on Fiction.liveBench | 94 | 2025-07-25T16:11:47 | fictionlive | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m93d0r | false | null | t3_1m93d0r | /r/LocalLLaMA/comments/1m93d0r/new_qwen3_on_fictionlivebench/ | false | false | default | 94 | {'enabled': True, 'images': [{'id': 'hvi3tvmjo1ff1', 'resolutions': [{'height': 159, 'url': 'https://preview.redd.it/hvi3tvmjo1ff1.png?width=108&crop=smart&auto=webp&s=385e269d9eecbdde591e638694025b90d6447acc', 'width': 108}, {'height': 319, 'url': 'https://preview.redd.it/hvi3tvmjo1ff1.png?width=216&crop=smart&auto=we... | ||
New Qwen3 on Fiction.liveBench | 1 | [deleted] | 2025-07-25T16:11:11 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1m93cfe | false | null | t3_1m93cfe | /r/LocalLLaMA/comments/1m93cfe/new_qwen3_on_fictionlivebench/ | false | false | default | 1 | null | ||
Long Context in Qwen LLM: Then vs Now | 1 | [removed] | 2025-07-25T15:58:53 | https://www.reddit.com/r/LocalLLaMA/comments/1m930h8/long_context_in_qwen_llm_then_vs_now/ | Electrical_Gas_77 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m930h8 | false | null | t3_1m930h8 | /r/LocalLLaMA/comments/1m930h8/long_context_in_qwen_llm_then_vs_now/ | false | false | self | 1 | null |
GPU Suggestions | 4 | Hey all, looking for a discussion on GPU options for LLM self hosting. Looking for something 24GB that doesn’t break the bank. Bonus if it’s single slot as I have no room in the server I’m working with.
Obviously there’s a desire to run the biggest model possible but there’s plenty of tradeoffs here and of course usi... | 2025-07-25T15:53:54 | https://www.reddit.com/r/LocalLLaMA/comments/1m92vqp/gpu_suggestions/ | Grimm_Spector | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m92vqp | false | null | t3_1m92vqp | /r/LocalLLaMA/comments/1m92vqp/gpu_suggestions/ | false | false | self | 4 | null |
Gpu just for prompt processing? | 2 | Can I make a RAM-based server-hardware LLM machine, something like a Xeon or EPYC with 12-channel RAM?
But since I am worried about cpu prompt processing speed, can I add a gpu like a 4070, good gpu chip, kinda shit amount of vram, can I add something like that to handle the prompt processing, while leveraging the ra... | 2025-07-25T15:34:44 | https://www.reddit.com/r/LocalLLaMA/comments/1m92di7/gpu_just_for_prompt_processing/ | opoot_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m92di7 | false | null | t3_1m92di7 | /r/LocalLLaMA/comments/1m92di7/gpu_just_for_prompt_processing/ | false | false | self | 2 | null |
Docker Compose vLLM Config | 1 | Does anyone have any Docker Compose examples for vLLM?
I am in the fortunate position of having 8 (!) H200s in a single server in the near future.
I want DeepSeek in the 671B variant with openwebui.
It would be great if someone had a Compose file that would allow me to use all GPUs in parallel. | 2025-07-25T15:28:59 | https://www.reddit.com/r/LocalLLaMA/comments/1m9281b/docker_compose_vllm_config/ | crossijinn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m9281b | false | null | t3_1m9281b | /r/LocalLLaMA/comments/1m9281b/docker_compose_vllm_config/ | false | false | self | 1 | null |
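A minimal sketch of what such a Compose file could look like, assuming the official `vllm/vllm-openai` and `ghcr.io/open-webui/open-webui` images; the model name, ports, and volume paths are illustrative, and this has not been validated on that specific 8×H200 box:

```yaml
services:
  vllm:
    image: vllm/vllm-openai:latest
    ipc: host                       # NCCL needs shared memory across GPU workers
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all            # expose all 8 H200s to the container
              capabilities: [gpu]
    volumes:
      - ./hf-cache:/root/.cache/huggingface   # cache model weights between restarts
    ports:
      - "8000:8000"
    # Arguments are appended to the vLLM OpenAI-compatible server entrypoint;
    # --tensor-parallel-size 8 shards the model across all eight GPUs.
    command: >
      --model deepseek-ai/DeepSeek-R1
      --tensor-parallel-size 8

  openwebui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      - OPENAI_API_BASE_URL=http://vllm:8000/v1
    ports:
      - "3000:8080"
    depends_on:
      - vllm
```

The key knob is `--tensor-parallel-size 8`, which splits the 671B weights across the GPUs instead of replicating them; Open WebUI then talks to vLLM's OpenAI-compatible endpoint over the Compose network.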
Email API for AI Agents | 0 | Hey unicorns (and future unicorns)!
I’ve got nothing to sell you, but we’re opening up a sponsorship program at Lemon Email that I thought you’d be interested in.
If you’re building or vibe coding email-first or any email-related AI agents, we’re sponsoring 10 founders this month with up to 100,000 email credits each... | 2025-07-25T15:20:30 | https://www.reddit.com/r/LocalLLaMA/comments/1m91zzn/email_api_for_ai_agents/ | NormanSzobotka | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m91zzn | false | null | t3_1m91zzn | /r/LocalLLaMA/comments/1m91zzn/email_api_for_ai_agents/ | false | false | self | 0 | null |
[Updated] AI assistant Chrome extension has tools and RAG | 3 | >[Cognito: Your AI Sidekick for Chrome. A MIT licensed very lightweight Web UI with multitools.](https://www.reddit.com/r/LocalLLaMA/comments/1kwhw20/cognito_your_ai_sidekick_for_chrome_a_mit/)
by [u/Asleep-Ratio7535](https://www.reddit.com/user/Asleep-Ratio7535/) in [LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/)
... | 2025-07-25T15:14:05 | https://www.reddit.com/r/LocalLLaMA/comments/1m91u38/updated_ai_assistant_chrome_extension_has_tools/ | Asleep-Ratio7535 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m91u38 | false | null | t3_1m91u38 | /r/LocalLLaMA/comments/1m91u38/updated_ai_assistant_chrome_extension_has_tools/ | false | false | 3 | null | |
🚀 Built a Multi-Agent System in 6 Hours That Solves 5/6 IMO 2025 Math Problems - Inspired by Recent Research Breakthroughs | 30 | Hey~
Exciting news in the AI reasoning space! We just built a Multi-Agent System (MAS) in 6 hours that successfully solved 5 out of 6 IMO 2025 math problems! 🎯
# Research Context:
This work was inspired by the recent breakthrough paper "Gemini 2.5 Pro Capable of Winning Gold at IMO 2025" (Huang & Yang, 2025). The ... | 2025-07-25T15:06:25 | https://www.reddit.com/r/LocalLLaMA/comments/1m91mt6/built_a_multiagent_system_in_6_hours_that_solves/ | Vivid_Might1225 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m91mt6 | false | null | t3_1m91mt6 | /r/LocalLLaMA/comments/1m91mt6/built_a_multiagent_system_in_6_hours_that_solves/ | false | false | self | 30 | {'enabled': False, 'images': [{'id': '7foZuO3iYId8cgR6p4m9GCPTzEL9721s_1XsNhv_Un8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7foZuO3iYId8cgR6p4m9GCPTzEL9721s_1XsNhv_Un8.png?width=108&crop=smart&auto=webp&s=fb99394a2af815f3715108c671ceba9c647d5f86', 'width': 108}, {'height': 108, 'url': 'h... |
A Perspective on DeepSeek and "Whataboutism" from a Mainlander | 1 | [removed] | 2025-07-25T15:03:14 | https://www.reddit.com/r/LocalLLaMA/comments/1m91jtj/a_perspective_on_deepseek_and_whataboutism_from_a/ | Sanitizer8819 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m91jtj | false | null | t3_1m91jtj | /r/LocalLLaMA/comments/1m91jtj/a_perspective_on_deepseek_and_whataboutism_from_a/ | false | false | 1 | null | |
A Perspective on DeepSeek and "Whataboutism" from a Mainlander | 1 | [removed] | 2025-07-25T14:56:56 | https://www.reddit.com/r/LocalLLaMA/comments/1m91dp9/a_perspective_on_deepseek_and_whataboutism_from_a/ | Sanitizer8819 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m91dp9 | false | null | t3_1m91dp9 | /r/LocalLLaMA/comments/1m91dp9/a_perspective_on_deepseek_and_whataboutism_from_a/ | false | false | 1 | null | |
New to local AI | 3 | Hey all. As the title says, I'm new to hosting AI locally. I am using an Nvidia RTX 4080 16GB. I got Ollama installed and llama2 running, but it is pretty lackluster. Seeing that I can run llama3 which is supposed to be much better. Any tips from experienced users? I am just doing this as something to tinker with. TIA.... | 2025-07-25T14:56:50 | https://www.reddit.com/r/LocalLLaMA/comments/1m91dmh/new_to_local_ai/ | m_spoon09 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m91dmh | false | null | t3_1m91dmh | /r/LocalLLaMA/comments/1m91dmh/new_to_local_ai/ | false | false | self | 3 | null |