Column schema (min–max of each column's stats):

- title: string, length 1–300
- score: int64, 0–8.54k
- selftext: string, length 0–41.5k
- created: timestamp[ns], 2023-04-01 04:30:41 to 2026-03-04 02:14:14
- url: string, length 0–878
- author: string, length 3–20
- domain: string, length 0–82
- edited: timestamp[ns], 1970-01-01 00:00:00 to 2026-02-19 14:51:53
- gilded: int64, 0–2
- gildings: string, 7 classes
- id: string, length 7
- locked: bool, 2 classes
- media: string, length 646–1.8k
- name: string, length 10
- permalink: string, length 33–82
- spoiler: bool, 2 classes
- stickied: bool, 2 classes
- thumbnail: string, length 4–213
- ups: int64, 0–8.54k
- preview: string, length 301–5.01k
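The rows below follow this schema one field per line. A minimal sketch of the record shape, using only a subset of the columns and sample values copied from one of the rows in this dump (the `Post` dataclass itself is an illustration, not part of the dataset):

```python
from dataclasses import dataclass
from datetime import datetime

# A record type mirroring a subset of the columns in the schema above.
@dataclass
class Post:
    title: str
    score: int
    created: datetime
    author: str
    id: str
    permalink: str
    locked: bool

# Sample values taken from one row of this dump.
row = Post(
    title="Self-hosted AI coding that just works",
    score=566,
    created=datetime.fromisoformat("2025-07-06T16:09:28"),
    author="send_me_a_ticket",
    id="1lt4y1z",
    permalink="/r/LocalLLaMA/comments/1lt4y1z/selfhosted_ai_coding_that_just_works/",
    locked=False,
)

# The `name` field in the dump is the Reddit fullname: "t3_" + id,
# which is why its length stat above is a fixed 10 characters.
assert "t3_" + row.id == "t3_1lt4y1z"
print(row.created.year)  # 2025
```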
Would you use a plug-and-play dev stack for building local AI apps?
0
I’m exploring a local-first toolkit for devs to build AI apps. No cloud, no APIs, no LangChain mess. Think: Ollama + Chroma + Streamlit, prewired so you can drop in docs and start chatting. Curious if this solves a real pain. Have you tried building local AI apps? What sucked? Would love thoughts, feedback, or coll...
2025-07-06T18:15:47
https://www.reddit.com/r/LocalLLaMA/comments/1lt7zx1/would_you_use_a_plugandplay_dev_stack_for/
Historical_Earth9807
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lt7zx1
false
null
t3_1lt7zx1
/r/LocalLLaMA/comments/1lt7zx1/would_you_use_a_plugandplay_dev_stack_for/
false
false
self
0
null
Best practice for domain-specific LLM?
1
Hi everyone! I'm a high school Economics teacher, and I am highly interested in using AI to improve teaching quality. Looking to create an AI tutor to help my students prepare for exams. I want it to be accurate and focused on Economics topics (and also in line with the syllabus). I've done some research and just start...
2025-07-06T18:15:24
https://www.reddit.com/r/LocalLLaMA/comments/1lt7zl8/best_practice_for_domainspecific_llm/
Unfair-Run-967
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lt7zl8
false
null
t3_1lt7zl8
/r/LocalLLaMA/comments/1lt7zl8/best_practice_for_domainspecific_llm/
false
false
self
1
null
Nvidia RTX 5060 Ti 16GB for local LLM inference with Ollama + Open WebUI
26
Hello! Like many here, I am super excited to locally run open source LLMs using Open WebUI, LMStudio etc., and figured that a RTX 5060 Ti would be a good budget starting point. So I got it with a cheap gaming PC a few days ago. Its whole purpose for me at the moment is to learn how to configure everything (using Ollama...
2025-07-06T17:46:10
https://www.reddit.com/r/LocalLLaMA/comments/1lt79jg/nvidia_rtx_5060_ti_16gb_for_local_llm_inference/
Philhippos
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lt79jg
false
null
t3_1lt79jg
/r/LocalLLaMA/comments/1lt79jg/nvidia_rtx_5060_ti_16gb_for_local_llm_inference/
false
false
self
26
{'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=108&crop=smart&auto=webp&s=3dc759de0e8fa36d241c5728d41ee3cf022cab96', 'width': 108}, {'height': 113, 'url': 'h...
I made Otacon into a desktop buddy. He comments on your active application and generally keeps you company. (X-Post /r/metalgear)
9
2025-07-06T17:21:00
https://old.reddit.com/r/metalgear/comments/1lt6m6d/i_made_otacon_into_a_desktop_buddy_he_comments_on/
otac0n
old.reddit.com
1970-01-01T00:00:00
0
{}
1lt6o4d
false
null
t3_1lt6o4d
/r/LocalLLaMA/comments/1lt6o4d/i_made_otacon_into_a_desktop_buddy_he_comments_on/
false
false
default
9
{'enabled': False, 'images': [{'id': 'djV1aGZnNmpkYWJmMUok61catiCF0Lfr2I9TZHYyHTf-hgXFRIrOzO_DutsN', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/djV1aGZnNmpkYWJmMUok61catiCF0Lfr2I9TZHYyHTf-hgXFRIrOzO_DutsN.png?width=108&crop=smart&auto=webp&s=90f95392ff03c50da9bbf1565f953ad182e22ad2', 'width...
Creating local/private LLM
0
[removed]
2025-07-06T17:18:39
https://www.reddit.com/r/LocalLLaMA/comments/1lt6m4o/creating_localprivate_llm/
Far_Reference9747
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lt6m4o
false
null
t3_1lt6m4o
/r/LocalLLaMA/comments/1lt6m4o/creating_localprivate_llm/
false
false
self
0
null
Self-hosted AI coding that just works
566
***TLDR***: VSCode + RooCode + LM Studio + Devstral + Ollama + snowflake-arctic-embed2 + docs-mcp-server. A fast, cost-free, self-hosted AI coding assistant setup supports lesser-used languages and minimizes hallucinations on less powerful hardware. **Long Post:** Hello everyone, sharing my findings on trying to fi...
2025-07-06T16:09:28
https://www.reddit.com/r/LocalLLaMA/comments/1lt4y1z/selfhosted_ai_coding_that_just_works/
send_me_a_ticket
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lt4y1z
false
null
t3_1lt4y1z
/r/LocalLLaMA/comments/1lt4y1z/selfhosted_ai_coding_that_just_works/
false
false
self
566
{'enabled': False, 'images': [{'id': 'vlZpegiOP81x_r16_rT4LmaQbBjxreSVqZS08MiyODY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/vlZpegiOP81x_r16_rT4LmaQbBjxreSVqZS08MiyODY.png?width=108&crop=smart&auto=webp&s=e20521ee56cbc2d4a70339418c28f6b4802a2591', 'width': 108}, {'height': 113, 'url': 'h...
Streaming or non streamed responses, assuming the same (and reasonably fast) time to final token
0
Feel free to comment with your specific use case and how this affects it. For ex. I’m making an ai editor for something, and I prefer non streamed responses. [View Poll](https://www.reddit.com/poll/1lt4994)
2025-07-06T15:40:56
https://www.reddit.com/r/LocalLLaMA/comments/1lt4994/streaming_or_non_streamed_responses_assuming_the/
numinouslymusing
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lt4994
false
null
t3_1lt4994
/r/LocalLLaMA/comments/1lt4994/streaming_or_non_streamed_responses_assuming_the/
false
false
self
0
null
Customizable Local LLM with MetaHumans, coming out July 31
1
[removed]
2025-07-06T15:13:50
https://discord.gg/GwJapqst5v
ChrisZavadil
discord.gg
1970-01-01T00:00:00
0
{}
1lt3m1i
false
null
t3_1lt3m1i
/r/LocalLLaMA/comments/1lt3m1i/customizable_local_llm_with_metahumans_coming_out/
false
false
default
1
null
Thought traces - how do they work?
1
I've been examining thought traces (in this example from 2.5 pro in Cursor), and they often say things like "I'm reviewing the code.... Now that I've reviewed the code, I find some discrepancies... etc" But it all comes out in 1 stream of consciousness with no evidence that the code review actually happened. Often, it...
2025-07-06T15:08:29
https://www.reddit.com/r/LocalLLaMA/comments/1lt3hl5/thought_traces_how_do_they_work/
LeatherRub7248
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lt3hl5
false
null
t3_1lt3hl5
/r/LocalLLaMA/comments/1lt3hl5/thought_traces_how_do_they_work/
false
false
self
1
null
Zhipu (company behind GLM) secures $1.4 billion strategic investment from Shanghai state funds
116
2025-07-06T14:09:23
https://technode.com/2025/07/04/zhipu-secures-1-4-billion-strategic-investment-from-shanghai-state-funds/
cpldcpu
technode.com
1970-01-01T00:00:00
0
{}
1lt254p
false
null
t3_1lt254p
/r/LocalLLaMA/comments/1lt254p/zhipu_company_behind_glm_secures_14_billion/
false
false
default
116
{'enabled': False, 'images': [{'id': 'TUcdBWYS71oGoBa_L8NsLNVv7XxT4A7gDIlhLZYekVM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/TUcdBWYS71oGoBa_L8NsLNVv7XxT4A7gDIlhLZYekVM.png?width=108&crop=smart&auto=webp&s=a3b215b23efad990c915c0690b29335f1a80dbc5', 'width': 108}, {'height': 113, 'url': 'h...
How do I see my tokens per second speed? I'm using llama.cpp / ik_llama.cpp with OpenWebUI
1
New here, I'm running ik_llama.cpp as the backend, OpenWebUI on the front end. OWUI is showing the tokens generated, total tokens, etc, but is NOT showing token speed like with llama. I tried the --verbose argument with running llama-server, but token speed usage is still not showing in OWUI. Any ideas? Thanks in adv...
2025-07-06T14:01:40
https://www.reddit.com/r/LocalLLaMA/comments/1lt1z1a/how_do_i_see_my_tokens_per_second_speed_im_using/
sourpatchgrownadults
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lt1z1a
false
null
t3_1lt1z1a
/r/LocalLLaMA/comments/1lt1z1a/how_do_i_see_my_tokens_per_second_speed_im_using/
false
false
self
1
null
Tesla V100 inference quality is better than A100+FA2 on Mistral Small 3.2 24B model for coding tasks.
1
[removed]
2025-07-06T13:29:40
https://www.reddit.com/r/LocalLLaMA/comments/1lt1a8a/tesla_v100_inference_quality_is_better_than/
Status_Contest39
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lt1a8a
false
null
t3_1lt1a8a
/r/LocalLLaMA/comments/1lt1a8a/tesla_v100_inference_quality_is_better_than/
false
false
self
1
null
Are Qwen3 Embedding GGUF faulty?
35
Qwen3 Embedding has great retrieval results on [MTEB](https://huggingface.co/spaces/mteb/leaderboard). However, I tried it in [llama.cpp](https://huggingface.co/Qwen/Qwen3-Embedding-8B-GGUF). The results were much worse than competitors. I have an FAQ benchmark that looks a bit like this: | Model | Score | |-----...
2025-07-06T13:27:20
https://www.reddit.com/r/LocalLLaMA/comments/1lt18hg/are_qwen3_embedding_gguf_faulty/
espadrine
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lt18hg
false
null
t3_1lt18hg
/r/LocalLLaMA/comments/1lt18hg/are_qwen3_embedding_gguf_faulty/
false
false
self
35
{'enabled': False, 'images': [{'id': 'hINCyazmugT5nd39NF13gjbN1S3l4nlzHPyy65fQcLI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/baCsbR9uKGr8sPQTJLj1kB78WBO9D8gWFjpdI40flNU.jpg?width=108&crop=smart&auto=webp&s=1a11b8bcd62ef98053ac2a15b9dab3c88c969ff4', 'width': 108}, {'height': 116, 'url': 'h...
I built ccundo - instantly undo Claude Code's mistakes without wasting tokens
14
Got tired of Claude Code making changes I didn't want, then having to spend more tokens asking it to fix things. So I made **ccundo** \- an npm package that lets you quickly undo Claude Code operations with previews and cascading safety. npm install -g ccundo ccundo list # see recent operations cc...
2025-07-06T13:20:57
https://www.reddit.com/r/LocalLLaMA/comments/1lt13ht/i_built_ccundo_instantly_undo_claude_codes/
Competitive-Noise905
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lt13ht
false
null
t3_1lt13ht
/r/LocalLLaMA/comments/1lt13ht/i_built_ccundo_instantly_undo_claude_codes/
false
false
self
14
{'enabled': False, 'images': [{'id': '6UiXZfBdSRQ1PJxZrLYqXqLmvvPm-au6DrNcHt2Aijc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6UiXZfBdSRQ1PJxZrLYqXqLmvvPm-au6DrNcHt2Aijc.png?width=108&crop=smart&auto=webp&s=75a98b3388791d5a3f3f074ee677bc3bae984102', 'width': 108}, {'height': 108, 'url': 'h...
Need an inference endpoint students can set up and use to test n8n workflows for an AI class, what free or non-GPU options are available?
8
I’m in an AI Masters program that is just getting off the ground and I’m trying to help one of my professors locate resources that can be used for class projects. We used the free GPU resources on Google Colab for some model training and such, but now we need inference endpoints and I’m not sure if Colab supports tha...
2025-07-06T13:15:25
https://www.reddit.com/r/LocalLLaMA/comments/1lt0z6j/need_an_inference_endpoint_students_can_set_up/
Porespellar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lt0z6j
false
null
t3_1lt0z6j
/r/LocalLLaMA/comments/1lt0z6j/need_an_inference_endpoint_students_can_set_up/
false
false
self
8
null
Mistral small 24B 3.2 VS Qwen 3 30b/14b
6
Hey Llamas! Which one of these would theoretically be best for summarizing (one big prompt) Swedish Excel documents? It's on a 16 GB VRAM machine, and output is in JSON format (structured data). Been checking benchmarks etc but it's hard to find a good answer; non-reasoning is what I think I'll go with, and about 12–15k context...
2025-07-06T12:41:25
https://www.reddit.com/r/LocalLLaMA/comments/1lt0a4n/mistral_small_24b_32_vs_qwen_3_30b14b/
AlbionPlayerFun
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lt0a4n
false
null
t3_1lt0a4n
/r/LocalLLaMA/comments/1lt0a4n/mistral_small_24b_32_vs_qwen_3_30b14b/
false
false
self
6
null
Discover the secrets of the free SAND.AI site | A GOOGLE VEO3 alternative
1
2025-07-06T12:38:16
https://youtu.be/DQrK4Wt3aQA
Common-Weather-7416
youtu.be
1970-01-01T00:00:00
0
{}
1lt07z6
false
{'oembed': {'author_name': 'NOVA HUB', 'author_url': 'https://www.youtube.com/@NOVAHUB-t3x', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/DQrK4Wt3aQA?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; ...
t3_1lt07z6
/r/LocalLLaMA/comments/1lt07z6/اكتشف_أسرار_موقع_sandai_المجاني_بديل_google_veo3/
false
false
https://external-preview…90d9ba6ffab56cfe
1
{'enabled': False, 'images': [{'id': 'IjjyHjuhY2T-rnpL5jdwVTb8_SJJNbWXimSjN_CYlzA', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/IjjyHjuhY2T-rnpL5jdwVTb8_SJJNbWXimSjN_CYlzA.jpeg?width=108&crop=smart&auto=webp&s=73dc42a45ffbfb965fe8407627a98fdbeabf0577', 'width': 108}, {'height': 162, 'url': '...
Qwen3 sees a breakthrough in programming & AI development when presented with just a couple of links about Rama and Electric
0
\[I am not associated in any way with Red Planet Labs or Hyperfiddle, other than using their products for my own dev tasks and feeling the happiest ever\]
2025-07-06T12:30:19
https://chat.qwen.ai/s/ea32ba23-3331-4661-9ea0-6d6daa56fde5?spm=a2ty_o01.29997173.0.0.5be4c921J4VyBz&fev=0.0.128
lispweaver
chat.qwen.ai
1970-01-01T00:00:00
0
{}
1lt02gs
false
null
t3_1lt02gs
/r/LocalLLaMA/comments/1lt02gs/qwen3_sees_a_breakthrough_in_programming_ai/
false
false
default
0
null
Looking for an open-source TTS model for multi-hour, multilingual audio generation
0
Hi everyone, I’m building an AI-powered education platform and **looking for** a high-quality **open-source TTS** model that meets the following needs: 1. ✅ **Voice cloning support** — ability to clone voices from short samples 2. ✅ Can generate **3–4 hours of audio per user**, even if it requires splitting the text ...
2025-07-06T11:45:18
https://www.reddit.com/r/LocalLLaMA/comments/1lsz9iu/looking_for_an_opensource_tts_model_for_multihour/
seozler
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsz9iu
false
null
t3_1lsz9iu
/r/LocalLLaMA/comments/1lsz9iu/looking_for_an_opensource_tts_model_for_multihour/
false
false
self
0
null
Huawei's Pangu AI Rocked by Unverified Claims of Fraud from Alleged Team Member
297
[https://github.com/HW-whistleblower/True-Story-of-Pangu](https://github.com/HW-whistleblower/True-Story-of-Pangu) After reading the translation of this article, I found there are many details. Is it possibly true, or just a fake story?
2025-07-06T11:36:43
https://www.reddit.com/r/LocalLLaMA/comments/1lsz4hk/huaweis_pangu_ai_rocked_by_unverified_claims_of/
Rich-Mushroom-8360
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsz4hk
false
null
t3_1lsz4hk
/r/LocalLLaMA/comments/1lsz4hk/huaweis_pangu_ai_rocked_by_unverified_claims_of/
false
false
self
297
{'enabled': False, 'images': [{'id': 'zsmjMuXowo6jxdudGrk9wGSfEsVxthNq-2hajJYOL-E', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zsmjMuXowo6jxdudGrk9wGSfEsVxthNq-2hajJYOL-E.png?width=108&crop=smart&auto=webp&s=0b71d8efca52378ba322ecd875577a0af62ed340', 'width': 108}, {'height': 108, 'url': 'h...
Getting started with local AI
13
Hey everyone! I want to get started with local AI, and I’m looking for advice on where to begin. I'm reading some of the other posts about the same, but seeing how quickly AI advances I figured I'd ask. I’ve been looking at the smaller models like Llama and Deepseek's 8b. Apparently one is as small as 1.5b.... Th...
2025-07-06T11:28:01
https://www.reddit.com/r/LocalLLaMA/comments/1lsyza0/getting_started_with_local_ai/
Bitter-Ad640
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsyza0
false
null
t3_1lsyza0
/r/LocalLLaMA/comments/1lsyza0/getting_started_with_local_ai/
false
false
self
13
{'enabled': False, 'images': [{'id': 'L0IbFLU4mAFgCcd4enGCZMd9nPmPHGGAMFM5XYUSqpU', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/L0IbFLU4mAFgCcd4enGCZMd9nPmPHGGAMFM5XYUSqpU.png?width=108&crop=smart&auto=webp&s=0bec92da8b7c865fcbfd67b314e99e5bd33f293c', 'width': 108}, {'height': 121, 'url': 'h...
Is that possible built a local gemini-cli totally in local and workable?
0
Which means it has to fulfill 2 requirements: - small, as it needs to run locally, ideally no more than 2B; - able to do agent work, meaning it shouldn't be very dumb; even though you might ask why not use a cloud API, well, it's the typical question of data sensitivity and price. Just wanna talk about whether this is a tren...
2025-07-06T10:51:26
https://www.reddit.com/r/LocalLLaMA/comments/1lsye88/is_that_possible_built_a_local_geminicli_totally/
LewisJin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsye88
false
null
t3_1lsye88
/r/LocalLLaMA/comments/1lsye88/is_that_possible_built_a_local_geminicli_totally/
false
false
self
0
null
After Huawei Pangu LLM faced plagiarism allegations, an anonymous insider shares their side of the story
2
Disclaimer: This post is a translation of a long piece written by an anonymous user claiming to be a former Huawei Noah’s Ark Lab employee involved in the development of the Pangu LLM. The authenticity of the claims cannot be independently verified, so please read with caution. The original GitHub repo: https://github...
2025-07-06T10:49:26
https://www.reddit.com/r/LocalLLaMA/comments/1lsyd4g/after_huawei_pangu_llm_faced_plagiarism/
nekofneko
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsyd4g
false
null
t3_1lsyd4g
/r/LocalLLaMA/comments/1lsyd4g/after_huawei_pangu_llm_faced_plagiarism/
false
false
self
2
null
Python Implementation of Google's MUVERA: Multi-Vector Retrieval via Fixed Dimensional Encodings
69
[https://github.com/sigridjineth/muvera-py](https://github.com/sigridjineth/muvera-py) I created this Python implementation to make the FDE algorithm more accessible while maintaining complete fidelity to the original C++ implementation. Every function and parameter has been carefully mapped to ensure...
2025-07-06T10:20:49
https://www.reddit.com/r/LocalLLaMA/comments/1lsxxo2/python_implementation_of_googles_muvera/
Ok_Rub1689
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsxxo2
false
null
t3_1lsxxo2
/r/LocalLLaMA/comments/1lsxxo2/python_implementation_of_googles_muvera/
false
false
self
69
{'enabled': False, 'images': [{'id': 'qW4H0tYiPRjY0TfHl5hzFgZl3CMSTtI28A-dMyyvJbo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qW4H0tYiPRjY0TfHl5hzFgZl3CMSTtI28A-dMyyvJbo.png?width=108&crop=smart&auto=webp&s=516efe9a10aa7972200f652e55f77b220e88355a', 'width': 108}, {'height': 108, 'url': 'h...
Is this good enough for AI work?
0
I am just getting started with Ollama, after Jan and Gpt4all. Where should I begin?
2025-07-06T09:47:45
https://i.redd.it/1ewapxek68bf1.jpeg
ZealousidealDish7334
i.redd.it
1970-01-01T00:00:00
0
{}
1lsxfpt
false
null
t3_1lsxfpt
/r/LocalLLaMA/comments/1lsxfpt/is_this_good_enough_for_ai_work/
false
false
https://b.thumbs.redditm…o5xj4H0nvd3o.jpg
0
{'enabled': True, 'images': [{'id': 'E1_OhoRNqfL2n6cu2P9iqF13D6cPh_oqFQZsfqeqU0g', 'resolutions': [{'height': 41, 'url': 'https://preview.redd.it/1ewapxek68bf1.jpeg?width=108&crop=smart&auto=webp&s=a1ff457cf51ad98a2bff0ef379d704fe67fb837e', 'width': 108}, {'height': 83, 'url': 'https://preview.redd.it/1ewapxek68bf1.jpe...
Advice Needed: Building an In-House LLM System Using Latest Tech — Recommendations?
0
I'm currently working on setting up an **in-house Large Language Model (LLM) system** for internal organizational projects. Given the rapid advancements in AI technology, I’d greatly value your professional insights and recommendations to ensure we're leveraging the latest tools and methods effectively. **Here's our c...
2025-07-06T09:35:48
https://www.reddit.com/r/LocalLLaMA/comments/1lsx9pn/advice_needed_building_an_inhouse_llm_system/
No_Edge2098
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsx9pn
false
null
t3_1lsx9pn
/r/LocalLLaMA/comments/1lsx9pn/advice_needed_building_an_inhouse_llm_system/
false
false
self
0
null
Recommendations for running llamacpp on Apple M4 Pro
1
[removed]
2025-07-06T09:26:24
https://www.reddit.com/r/LocalLLaMA/comments/1lsx4wn/recommendations_for_running_llamacpp_on_apple_m4/
gyzerok
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsx4wn
false
null
t3_1lsx4wn
/r/LocalLLaMA/comments/1lsx4wn/recommendations_for_running_llamacpp_on_apple_m4/
false
false
self
1
null
gemini-cli: falling back to gemini-flash is the best marketing strategy Anthropic could have dreamed of for claude-code.
30
https://preview.redd.it/…at do you think?
2025-07-06T08:54:21
https://www.reddit.com/r/LocalLLaMA/comments/1lswnto/geminicli_falling_back_to_geminiflash_is_the_best/
PieBru
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lswnto
false
null
t3_1lswnto
/r/LocalLLaMA/comments/1lswnto/geminicli_falling_back_to_geminiflash_is_the_best/
false
false
https://b.thumbs.redditm…8BpiZvSZgM8I.jpg
30
null
What are some good in-browser inference tools for small LLMs? (Use case: JSON to Chart.js config)
7
Hey folks, I’m exploring some ideas around running small LLMs entirely **in the browser**, and wanted to ask for suggestions or experiences with lightweight inference frameworks. The main use case I’m playing with is: 1. **(Priority)** Taking a JSON object and generating a valid [Chart.js](https://www.chartjs.org/)...
2025-07-06T08:48:35
https://www.reddit.com/r/LocalLLaMA/comments/1lswkv4/what_are_some_good_inbrowser_inference_tools_for/
callmedevilthebad
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lswkv4
false
null
t3_1lswkv4
/r/LocalLLaMA/comments/1lswkv4/what_are_some_good_inbrowser_inference_tools_for/
false
false
self
7
{'enabled': False, 'images': [{'id': '-uEak470UG8Alrel1Gf3FXYft0yuiaCfUYkcL8CkATo', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/-uEak470UG8Alrel1Gf3FXYft0yuiaCfUYkcL8CkATo.png?width=108&crop=smart&auto=webp&s=627603f6e59ec463c19125b1400307e37eed7cdf', 'width': 108}, {'height': 113, 'url': 'h...
Upgrade for my 4060ti
0
Hello people. I have a 4060ti for local inference. The card is doing just fine considering the allocated budget. I'm thinking of a second card to pair with it so I can utilize longer context and/or bigger models. The two options I'm considering are a second 4060ti or a 5060ti (my budget is tight). What do you think? Any other su...
2025-07-06T08:41:21
https://www.reddit.com/r/LocalLLaMA/comments/1lswhaj/upgrade_for_my_4060ti/
Former-Tangerine-723
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lswhaj
false
null
t3_1lswhaj
/r/LocalLLaMA/comments/1lswhaj/upgrade_for_my_4060ti/
false
false
self
0
null
Looking for an AI client
0
For quite some months I tried resisting the urge to code another client for local AI inference. I tried quite a lot of these clients like ChatBox, Msty and many more but I still haven't found the one solution that clicks for me. I would love to have an AI quickly at hand when I'm at my desktop for any kind of quick in...
2025-07-06T08:27:01
https://www.reddit.com/r/LocalLLaMA/comments/1lswa0q/looking_for_an_ai_client/
waescher
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lswa0q
false
null
t3_1lswa0q
/r/LocalLLaMA/comments/1lswa0q/looking_for_an_ai_client/
false
false
self
0
null
Run Large LLMs on RunPod with text-generation-webui – Full Setup Guide + Template
15
Hey everyone! I usually rent GPUs from the cloud since I don’t want to make the investment in expensive hardware. Most of the time, I use RunPod when I need extra compute for LLM inference, ComfyUI, or other GPU-heavy tasks. For LLMs, I personally use text-generation-webui as the backend and either test models direct...
2025-07-06T08:26:43
https://www.reddit.com/r/LocalLLaMA/comments/1lsw9vz/run_large_llms_on_runpod_with_textgenerationwebui/
abandonedexplorer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsw9vz
false
null
t3_1lsw9vz
/r/LocalLLaMA/comments/1lsw9vz/run_large_llms_on_runpod_with_textgenerationwebui/
false
false
self
15
{'enabled': False, 'images': [{'id': '4lm8qt3U8kLoO2Fd6yvrls6oC59wE0X3JqIOCIaPvw0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4lm8qt3U8kLoO2Fd6yvrls6oC59wE0X3JqIOCIaPvw0.png?width=108&crop=smart&auto=webp&s=dc9ecd3bfc83c2924292457223c42ce5f67b31a0', 'width': 108}, {'height': 108, 'url': 'h...
Creating a Knowledge Base for Agentic Research Architect
1
Sorry if this sounds dumb lol. My organisation is researching/attempting to create AI agents that can act as software architects and help in designing software. This is an already established product and we get a lot of new feature requests on top of it. So basically, this agent would need the understanding of the c...
2025-07-06T07:54:49
https://www.reddit.com/r/LocalLLaMA/comments/1lsvsw0/creating_a_knowledge_base_for_agentic_research/
dew_chiggi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsvsw0
false
null
t3_1lsvsw0
/r/LocalLLaMA/comments/1lsvsw0/creating_a_knowledge_base_for_agentic_research/
false
false
self
1
null
Help choosing LLM
2
Hello, I'm making a project where an LLM might have to deal with geospatial data, raster-like, dealing with formats like Map Tiles, GeoJSON etc. (also RAG implementations). For this I need an LLM but I'm so confused which one to use. Llama and Mistral both have so many models that I'm confused. It must be free to use via...
2025-07-06T07:29:01
https://www.reddit.com/r/LocalLLaMA/comments/1lsvff1/help_choosing_llm/
BESTHARSH004
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsvff1
false
null
t3_1lsvff1
/r/LocalLLaMA/comments/1lsvff1/help_choosing_llm/
false
false
self
2
null
Using local LLM for anonymizing prompts before sending to cloud LLM - are there any open source solutions?
2
I want to use flagship models for coding, without worrying that some personal/business specific data leaks to cloud. Was thinking maybe there is a solution that would do something like this: local model: * detects personal or business specific data in prompts, * creates mapping dictionary * warns if replace is not f...
2025-07-06T07:13:58
https://www.reddit.com/r/LocalLLaMA/comments/1lsv7j1/using_local_llm_for_anonymizing_prompts_before/
cesarean722
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsv7j1
false
null
t3_1lsv7j1
/r/LocalLLaMA/comments/1lsv7j1/using_local_llm_for_anonymizing_prompts_before/
false
false
self
2
null
Update on spinning ball in hexagon test
0
Chat is this AGI? https://reddit.com/link/1lsv6hn/video/a7rfwluue7bf1/player I used the prompt from [this Reddit post](https://www.reddit.com/r/LocalLLaMA/comments/1j7r47l/i_just_made_an_animation_of_a_ball_bouncing/) and [made the visualization here](https://www.designarena.ai/play)
2025-07-06T07:12:02
https://www.reddit.com/r/LocalLLaMA/comments/1lsv6hn/update_on_spinning_ball_in_hexagon_test/
grx_xce
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsv6hn
false
null
t3_1lsv6hn
/r/LocalLLaMA/comments/1lsv6hn/update_on_spinning_ball_in_hexagon_test/
false
false
self
0
{'enabled': False, 'images': [{'id': 'jE-pjhn8f_A-ZGcTOHrND6AkqRiY9pmOGew0nsRpqX0', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/jE-pjhn8f_A-ZGcTOHrND6AkqRiY9pmOGew0nsRpqX0.png?width=108&crop=smart&auto=webp&s=f886d2bbf1cdd040f8862148ca2375da2ed1228a', 'width': 108}, {'height': 216, 'url': '...
Anyone building a local coding cli or coding agent?
4
I just broke the ground on mine. I used copilot a bit 2 years ago when it was pretty new but preferred cut & paste, then I did [continue.dev](http://continue.dev) a bit, then back to cut & paste. Did aider a bit, then ... None of them really hit the sweet spot for me, so I decided to roll my own, might not be as go...
2025-07-06T06:30:07
https://www.reddit.com/r/LocalLLaMA/comments/1lsuje6/anyone_building_a_local_coding_cli_or_coding_agent/
segmond
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsuje6
false
null
t3_1lsuje6
/r/LocalLLaMA/comments/1lsuje6/anyone_building_a_local_coding_cli_or_coding_agent/
false
false
self
4
null
Taught AI Agents Live for 15 hours | No fluff
0
https://i.redd.it/2acsykc327bf1.gif 15 hours of live, deep content. No fluff. You can watch the lecture recordings here: (1) What are AI Agents: [https://youtu.be/1SsoU8L\_hlw](https://youtu.be/1SsoU8L_hlw) (2) Inside the brain of AI Agents - How Large Language Models work: [https://youtu.be/dyfyOpxsAnE](https://...
2025-07-06T06:00:18
https://www.reddit.com/r/LocalLLaMA/comments/1lsu2ks/taught_ai_agents_live_for_15_hours_no_fluff/
OtherRaisin3426
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsu2ks
false
{'oembed': {'author_name': 'Vizuara', 'author_url': 'https://www.youtube.com/@vizuara', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/1SsoU8L_hlw?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; pictu...
t3_1lsu2ks
/r/LocalLLaMA/comments/1lsu2ks/taught_ai_agents_live_for_15_hours_no_fluff/
false
false
https://b.thumbs.redditm…2P4pZcweswbA.jpg
0
{'enabled': False, 'images': [{'id': 'TrKUDmajpg_F9rCldlqRUJW4s5_YXZadSe_Ay9_bZG8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/TrKUDmajpg_F9rCldlqRUJW4s5_YXZadSe_Ay9_bZG8.jpeg?width=108&crop=smart&auto=webp&s=0e2fb4c2a84cf2da7dde0a33b38e59cab70c422c', 'width': 108}, {'height': 162, 'url': '...
Why does LLaMA suck so much at frontend?
0
I gave the exact same prompt to GPT 4.1 (which I don't even think is that good) and Llama 4 Maverick [here](https://www.designarena.ai/vote), and the difference was insane. Honestly, how and why is Llama this behind? Prompt was "Build a shadcn ui with gsap for smooth transition for a personal portfolio for Softwar...
2025-07-06T05:21:26
https://www.reddit.com/gallery/1lstg8c
grx_xce
reddit.com
1970-01-01T00:00:00
0
{}
1lstg8c
false
null
t3_1lstg8c
/r/LocalLLaMA/comments/1lstg8c/why_does_llama_suck_so_much_at_frontend/
false
false
https://b.thumbs.redditm…zZ2fL9o_BScc.jpg
0
null
Ollama API image payload format for python
0
Hi guys, is this the correct python payload format for ollama? { "role": "user", "content": "what is in this image?", "images": ["iVBORw0KQuS..."] #base64 } I am asking because for both openrouter and ollama running the same gemma12b passed the same input and image encodings, openrouter ret...
2025-07-06T04:58:17
https://www.reddit.com/r/LocalLLaMA/comments/1lst2uk/ollama_api_image_payload_format_for_python/
Ok-Internal9317
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lst2uk
false
null
t3_1lst2uk
/r/LocalLLaMA/comments/1lst2uk/ollama_api_image_payload_format_for_python/
false
false
self
0
null
Check out my reverse vibe coding approach
0
I call that « Tatin vibe coding », in an exquisite reference to French cuisine ;) Lemme know your thoughts ! https://youtu.be/YMpnvbJLoyw?si=AyoZxBuZ4bnelzAc
2025-07-06T04:50:44
https://www.reddit.com/r/LocalLLaMA/comments/1lssymd/check_out_my_reverse_vibe_coding_approach/
UpstairsCurrency
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lssymd
false
null
t3_1lssymd
/r/LocalLLaMA/comments/1lssymd/check_out_my_reverse_vibe_coding_approach/
false
false
self
0
{'enabled': False, 'images': [{'id': 'n_Z3QOY3zLB0UMSrU8U3_BBst85-efI5CdGd0I3WF-I', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/n_Z3QOY3zLB0UMSrU8U3_BBst85-efI5CdGd0I3WF-I.jpeg?width=108&crop=smart&auto=webp&s=2a8932edbd3e564e858afca3f47b841f719da9ab', 'width': 108}, {'height': 162, 'url': '...
Vibecoding: Exploring Dynamic Quantization for LLMs: My PoC with Qwen-0.6B
0
Note: The following was generated via Gemini, simply because I am lazy and don't wanna summarize things personally. You can view the code [Here](https://pastebin.com/82Cn7022), and the text output comparisons [Here](https://pastebin.com/73Zn2bP4) I used the Puffin dataset for the Proof of concept, all in all it at lea...
2025-07-06T04:17:37
https://www.reddit.com/r/LocalLLaMA/comments/1lsses1/vibecoding_exploring_dynamic_quantization_for/
jasonmbrown
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsses1
false
null
t3_1lsses1
/r/LocalLLaMA/comments/1lsses1/vibecoding_exploring_dynamic_quantization_for/
false
false
self
0
null
Fine-tuning Qwen3-32B for sentiment analysis.
1
Title. Anyone here experienced with using this model for text classification here?
2025-07-06T04:04:08
https://www.reddit.com/r/LocalLLaMA/comments/1lss6b9/finetuning_qwen332b_for_sentiment_analysis/
Known_Bed_8000
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lss6b9
false
null
t3_1lss6b9
/r/LocalLLaMA/comments/1lss6b9/finetuning_qwen332b_for_sentiment_analysis/
false
false
self
1
null
Llama & GRAMPS
1
I can’t code/program (at least not yet). Is anyone building tools/abilities to use a FOSS LLM like Llama to integrate with the family tree software GRAMPS? I’m thinking you could tell Llama (e.g. 3.1 or 3.3) in plain English information about family members, relationships, events, locations, etc and Llama automati...
2025-07-06T02:42:23
https://www.reddit.com/r/LocalLLaMA/comments/1lsqr9n/llama_gramps/
AdCompetitive6193
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsqr9n
false
null
t3_1lsqr9n
/r/LocalLLaMA/comments/1lsqr9n/llama_gramps/
false
false
self
1
null
I built a platform to collect & solve real-world AI automation use cases – would love your feedback!
2
2025-07-06T02:10:13
https://aisolutionscamp.io
disappead
aisolutionscamp.io
1970-01-01T00:00:00
0
{}
1lsq6xi
false
null
t3_1lsq6xi
/r/LocalLLaMA/comments/1lsq6xi/i_built_a_platform_to_collect_solve_realworld_ai/
false
false
default
2
null
Larger model on CPU or small model on GPU
2
I have a ryzen AI 7h CPU (with 50 TOPS NPU) with 64gb DDR5 RAM or an RTX5070 with 8gb DDR7. Should I run inference off of GPU or CPU for better performance?
2025-07-06T02:03:33
https://www.reddit.com/r/LocalLLaMA/comments/1lsq2m3/larger_model_on_cpu_or_small_model_on_gpu/
No_Professional_582
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsq2m3
false
null
t3_1lsq2m3
/r/LocalLLaMA/comments/1lsq2m3/larger_model_on_cpu_or_small_model_on_gpu/
false
false
self
2
null
128GB VRAM for ~$600. Qwen3 MOE 235B.A22B reaching 20 t/s. 4x AMD MI50 32GB.
348
Hi everyone, Last year I posted about 2x MI60 performance. Since then, I bought more cards and PCIE riser cables to build a rack with 8x AMD MI50 32GB cards. My motherboard (Asus rog dark hero viii with AMD 5950x CPU and 96GB 3200Mhz RAM) had stability issues with 8x MI50 (does not boot), so I connected four (or somet...
2025-07-06T01:59:10
https://www.reddit.com/r/LocalLLaMA/comments/1lspzn3/128gb_vram_for_600_qwen3_moe_235ba22b_reaching_20/
MLDataScientist
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lspzn3
false
null
t3_1lspzn3
/r/LocalLLaMA/comments/1lspzn3/128gb_vram_for_600_qwen3_moe_235ba22b_reaching_20/
false
false
self
348
{'enabled': False, 'images': [{'id': 'hNPAr3eIAChvCruA_30RyOLRAM__-hwPLVex8tW4YLU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hNPAr3eIAChvCruA_30RyOLRAM__-hwPLVex8tW4YLU.png?width=108&crop=smart&auto=webp&s=5e4c9f1d82654452ab9abf4c2dfaa69dd9495bbf', 'width': 108}, {'height': 108, 'url': 'h...
I built a RAG-powered knowledge base for docs of my project using FastAPI + Ollama. Here's what I learned.
2
I'm a beginner developer who just completed my first AI project. In the past, I was mostly dedicated to traditional frontend, backend, and toolchain development, and knew only a little about AI. Recently, I have been working on a toolchain project of my own and composing its documents. An idea suddenly emerged: I could utilize ...
2025-07-06T01:01:20
https://www.reddit.com/r/LocalLLaMA/comments/1lsox8o/i_built_a_ragpowered_knowledge_base_for_docs_of/
Ansurfen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsox8o
false
null
t3_1lsox8o
/r/LocalLLaMA/comments/1lsox8o/i_built_a_ragpowered_knowledge_base_for_docs_of/
false
false
self
2
null
All i said was hello lol
122
\> ollama run phi4-mini-reasoning:3.8b \>>> hello <think> Okay, let's see what the user needs here. The message says "You will be given a problem." but then it just has "hello". Hmm, maybe there was a typo or the problem didn't get sent correctly. Let me check again. Wait, the user's name is Phi, an AI math exp...
2025-07-06T00:35:44
https://www.reddit.com/r/LocalLLaMA/comments/1lsofwq/all_i_said_was_hello_lol/
numinouslymusing
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsofwq
false
null
t3_1lsofwq
/r/LocalLLaMA/comments/1lsofwq/all_i_said_was_hello_lol/
false
false
self
122
null
Jan.AI with Ollama (working solution)
0
As the title states, I tried to find a way to use Jan AI with locally available Ollama models, but couldn't find one that worked. After a lot of trial and error I found a working approach and documented it in a blog post: [Jan.AI with Ollama (working solution)](https://developers.knowivate.com/@kheersagar/jan-ai-with-ollama-wor...
2025-07-06T00:35:20
https://www.reddit.com/r/LocalLLaMA/comments/1lsoflk/janai_with_ollama_working_solution/
InsideResolve4517
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsoflk
false
null
t3_1lsoflk
/r/LocalLLaMA/comments/1lsoflk/janai_with_ollama_working_solution/
false
false
self
0
{'enabled': False, 'images': [{'id': '91QifmmkYXf4kont-M3pku_SgKmkEq-VLJGmQTJzr_I', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/91QifmmkYXf4kont-M3pku_SgKmkEq-VLJGmQTJzr_I.png?width=108&crop=smart&auto=webp&s=3c478b3adde7eeada968ed24f96e70509dcbda47', 'width': 108}, {'height': 216, 'url': '...
Intel Project Battlematrix
1
Up to 8x B60 pro, 24GB VRAM 456 GB/s apiece. Price point unknown
2025-07-06T00:20:00
https://www.intel.com/content/www/us/en/developer/articles/technical/introduction-project-battlematrix.html
evil0sheep
intel.com
1970-01-01T00:00:00
0
{}
1lso57g
false
null
t3_1lso57g
/r/LocalLLaMA/comments/1lso57g/intel_project_battlematrix/
false
false
default
1
null
Is Codestral 22B still the best open LLM for local coding on 32–64 GB VRAM?
104
I'm looking for the best open-source LLM for local use, focused on programming. I have 2 RTX 5090s. Is Codestral 22B still the best choice for local code-related tasks (code completion, refactoring, understanding context etc.), or are there better alternatives now like DeepSeek-Coder V2, StarCoder2, or WizardCoder? ...
2025-07-05T23:13:57
https://www.reddit.com/r/LocalLLaMA/comments/1lsmtzr/is_codestral_22b_still_the_best_open_llm_for/
One-Stress-6734
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsmtzr
false
null
t3_1lsmtzr
/r/LocalLLaMA/comments/1lsmtzr/is_codestral_22b_still_the_best_open_llm_for/
false
false
self
104
null
GPU overclocking?
1
Is it beneficial for LLM inference? I have MSI Afterburner, wondering if there are any settings that would be beneficial for my 3060 ¯\\\_(ツ)\_/¯ It's not something I've seen discussed, so I'm *assuming* not, just figured I'd ask. Thanks!
2025-07-05T22:35:51
https://www.reddit.com/r/LocalLLaMA/comments/1lsm1yb/gpu_overclocking/
wpg4665
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsm1yb
false
null
t3_1lsm1yb
/r/LocalLLaMA/comments/1lsm1yb/gpu_overclocking/
false
false
self
1
null
Why 5090 for inference if min CUDA is 12.9
0
Many AI models are built for lower CUDA versions, mostly 12.1-12.2. Why wouldn't I just buy 2x 3090s, which would end up with pretty much the same speed and more total VRAM?
2025-07-05T22:34:22
https://www.reddit.com/r/LocalLLaMA/comments/1lsm0ua/why_5090_for_inference_if_min_cuda_is_129/
VihmaVillu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsm0ua
false
null
t3_1lsm0ua
/r/LocalLLaMA/comments/1lsm0ua/why_5090_for_inference_if_min_cuda_is_129/
false
false
self
0
null
Options for a lot of VRAM for local Ollama server?
0
I have an AMD build acting as a home server. Ryzen 5600G, 32GB RAM. I want a card with all the VRAM I can get, but I don't want to spend a lot. What are my options? I'm pretty new to all this. I see that MI50 cards are going for relatively cheap. Is that still a good option? 32GB is probably more than enough. I do NO...
2025-07-05T22:07:43
https://www.reddit.com/r/LocalLLaMA/comments/1lslglw/options_for_a_lot_of_vram_for_local_ollama_server/
mehgcap
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lslglw
false
null
t3_1lslglw
/r/LocalLLaMA/comments/1lslglw/options_for_a_lot_of_vram_for_local_ollama_server/
false
false
self
0
null
My LLM Server
0
My LLM server, [https://generativa.rapport.tec.br](https://generativa.rapport.tec.br), my goal is to set up LLM servers for companies and freelancers who demand confidentiality in their documents, thus allowing a secure and personalized RAG.
2025-07-05T21:54:53
https://www.reddit.com/r/LocalLLaMA/comments/1lsl6p6/my_llm_server/
CarlosDelfino
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsl6p6
false
null
t3_1lsl6p6
/r/LocalLLaMA/comments/1lsl6p6/my_llm_server/
false
false
self
0
null
Is this a good machine for running local LLMs?
0
I am getting it open-box for $8369, which I guess is a good deal. My main concern is the cooling system used here: these machines are made for gaming, and I am unable to find more details about it.
2025-07-05T21:47:09
https://i.redd.it/4jpzeysml4bf1.png
sudocode14
i.redd.it
1970-01-01T00:00:00
0
{}
1lsl0qn
false
null
t3_1lsl0qn
/r/LocalLLaMA/comments/1lsl0qn/is_this_a_good_machine_for_running_local_llms/
false
false
default
0
{'enabled': True, 'images': [{'id': '4jpzeysml4bf1', 'resolutions': [{'height': 107, 'url': 'https://preview.redd.it/4jpzeysml4bf1.png?width=108&crop=smart&auto=webp&s=a72bd197759ab38dd0e7123700230fafe91d9594', 'width': 108}, {'height': 214, 'url': 'https://preview.redd.it/4jpzeysml4bf1.png?width=216&crop=smart&auto=we...
Should I buy an appartment or 4 H100s
183
Why are they so expensive, and has anybody here ever tested them? How many RTX 5090s are needed to match their performance? What LLM can we run entirely on one H100 with as much RAM as required? Naive questions, but I am very confused
2025-07-05T21:14:39
https://www.reddit.com/r/LocalLLaMA/comments/1lskb8k/should_i_buy_an_appartment_or_4_h100s/
InfiniteEjaculation
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lskb8k
false
null
t3_1lskb8k
/r/LocalLLaMA/comments/1lskb8k/should_i_buy_an_appartment_or_4_h100s/
false
false
self
183
null
(Updated) All‑in‑One Generative AI Template: Frontend, Backend, Docker, Docs & CI/CD + Ollama for local LLMs
1
Hey everyone! 👋 Here is a major update to my Generative AI Project Template : ⸻ 🚀 Highlights • Frontend built with NiceGUI for a robust, clean and interactive UI • Backend powered by FastAPI for high-performance API endpoints • Complete settings and environment management • Pre-configured Docker Compose set...
2025-07-05T20:58:40
https://www.reddit.com/r/LocalLLaMA/comments/1lsjy83/updated_allinone_generative_ai_template_frontend/
aminedjeghri
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsjy83
false
null
t3_1lsjy83
/r/LocalLLaMA/comments/1lsjy83/updated_allinone_generative_ai_template_frontend/
false
false
self
1
{'enabled': False, 'images': [{'id': 'Y56C7zAB3QPMW6uvJvAl6DX3xNJEv6Tcypn7-Km01Kc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Y56C7zAB3QPMW6uvJvAl6DX3xNJEv6Tcypn7-Km01Kc.png?width=108&crop=smart&auto=webp&s=ed7bbfda0d4b9d48fc23ba78743f2000e5fa0193', 'width': 108}, {'height': 108, 'url': 'h...
Local LLM for Audio Cleanup
1
Trying to clean up audio voice profiles for Chatterbox AI. Would like to run an AI to isolate and clean up vocals. Tried a few premium online tools and MyEdit AI works the best, but I don’t want to use a premium tool. Extra bonus if it can do other common audio tasks.
2025-07-05T20:53:37
https://www.reddit.com/r/LocalLLaMA/comments/1lsju4i/local_llm_for_audio_cleanup/
AnonTheGreat12345
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsju4i
false
null
t3_1lsju4i
/r/LocalLLaMA/comments/1lsju4i/local_llm_for_audio_cleanup/
false
false
self
1
null
Some of the best proprietary models may be smaller than we thought
1
[removed]
2025-07-05T20:48:30
https://www.reddit.com/r/LocalLLaMA/comments/1lsjq2b/some_of_the_best_proprietary_models_may_be/
Cool-Chemical-5629
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsjq2b
false
null
t3_1lsjq2b
/r/LocalLLaMA/comments/1lsjq2b/some_of_the_best_proprietary_models_may_be/
false
false
self
1
null
From The Foundations of Transformers to Scaling Vision Transformers
0
Inspired by the awesome work presented by Kathleen Kenealy on ViT benchmarks in PyTorch DDP and Jax TPUs by Google DeepMind, I developed this intensive article on the solid foundations to transformers, Vision Transformers, and Distributed Learning, and to say I learnt a lot would be an understatement. After a few revi...
2025-07-05T20:43:40
https://www.reddit.com/r/LocalLLaMA/comments/1lsjm9l/from_the_foundations_of_transformers_to_scaling/
Des_goes_Brrr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsjm9l
false
null
t3_1lsjm9l
/r/LocalLLaMA/comments/1lsjm9l/from_the_foundations_of_transformers_to_scaling/
false
false
self
0
null
Some of the best proprietary models may be smaller than we thought
1
[removed]
2025-07-05T20:39:36
https://ashishchadha11944.medium.com/gemini-2-5-googles-revolutionary-leap-in-ai-architecture-performance-and-vision-c76afc4d6a06
Cool-Chemical-5629
ashishchadha11944.medium.com
1970-01-01T00:00:00
0
{}
1lsjj30
false
null
t3_1lsjj30
/r/LocalLLaMA/comments/1lsjj30/some_of_the_best_proprietary_models_may_be/
false
false
default
1
null
I built an AI system that validates other AI prompts - introducing quality standards to the prompt economy
1
[removed]
2025-07-05T20:33:58
https://www.reddit.com/r/LocalLLaMA/comments/1lsjej7/i_built_an_ai_system_that_validates_other_ai/
bcedeno
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsjej7
false
null
t3_1lsjej7
/r/LocalLLaMA/comments/1lsjej7/i_built_an_ai_system_that_validates_other_ai/
false
false
self
1
null
Is there an easy way to continue pretraining of *just* the gate network of an MoE?
1
I would like to make a "clown-car" MoE as described by Goddard in https://goddard.blog/posts/clown-moe/ but after initializing the gates as he describes, I would like to perform continue pre-training on *just* the gates, not any of the expert weights. Do any of the easy-to-use training frameworks like Unsloth support ...
2025-07-05T20:31:06
https://www.reddit.com/r/LocalLLaMA/comments/1lsjc83/is_there_an_easy_way_to_continue_pretraining_of/
ttkciar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsjc83
false
null
t3_1lsjc83
/r/LocalLLaMA/comments/1lsjc83/is_there_an_easy_way_to_continue_pretraining_of/
false
false
self
1
null
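For the gate-only continued pretraining asked about above, frameworks aside, the core move in plain PyTorch is freezing everything except the router parameters before handing the model to any trainer. A minimal sketch, with the caveat that identifying gates by the `"gate"` name substring is an assumption; check your model's `named_parameters()` (Mixtral-style models, for example, name theirs under `block_sparse_moe.gate`):

```python
import torch.nn as nn

def freeze_all_but_gates(model: nn.Module, gate_keyword: str = "gate") -> list:
    """Mark only gate/router parameters trainable; return their names."""
    trainable = []
    for name, param in model.named_parameters():
        keep = gate_keyword in name
        param.requires_grad = keep
        if keep:
            trainable.append(name)
    return trainable

class ToyMoEBlock(nn.Module):
    # Hypothetical stand-in for an MoE layer: a router ("gate") plus one expert.
    def __init__(self):
        super().__init__()
        self.gate = nn.Linear(8, 2)
        self.expert = nn.Linear(8, 8)

model = ToyMoEBlock()
print(freeze_all_but_gates(model))  # only gate.weight / gate.bias stay trainable
```

Most trainers then only update the unfrozen parameters; frameworks that rewrite `requires_grad` flags during their own setup may need the freeze re-applied afterwards.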
What is the necessary time effort to learn to self-host an LLM and chat app on-premise in a mid size company?
0
Please imagine the following: * You are a Software Developer in a medium-sized company, let's say 500 employees, all of them doing the same kind of work (this will become relevant later), except for you. You have no experience at all with machine learning or LLMs. Everything is completely new to you. You have of cours...
2025-07-05T20:10:01
https://www.reddit.com/r/LocalLLaMA/comments/1lsivf4/what_is_the_necessary_time_effort_to_learn_to/
Independent_Hour_301
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsivf4
false
null
t3_1lsivf4
/r/LocalLLaMA/comments/1lsivf4/what_is_the_necessary_time_effort_to_learn_to/
false
false
self
0
null
Building MOE inference Optimized workstation with 2 5090’s
0
Hey everyone, I’m building a MOE optimized llm inference rig. My plans currently are GPU: 2x 5090’s (FE’s I got msrp from Best Buy) CPU: threadripper 7000 pro series Motherboard: trx50 or wrx 90 Memory: 512gb ddr5 Case: ideally rack mountable, not sure My performance target is a min of 20 t/s generation with DEEPSE...
2025-07-05T20:02:26
https://www.reddit.com/r/LocalLLaMA/comments/1lsipdy/building_moe_inference_optimized_workstation_with/
novel_market_21
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsipdy
false
null
t3_1lsipdy
/r/LocalLLaMA/comments/1lsipdy/building_moe_inference_optimized_workstation_with/
false
false
self
0
null
9950X3D + RTX 5090 + 192 GB RAM , reasonable?
0
I have recently been using my computer to write product reviews based on product images and text descriptions of items. I'm looking to maximize my hardware as well as generally play around with the largest models that I can run. I'm looking to learn and explore as well as use this for practical applications like review writing....
2025-07-05T20:01:44
https://www.reddit.com/r/LocalLLaMA/comments/1lsiov1/9950x3d_rtx_5090_192_gb_ram_reasonable/
juggarjew
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsiov1
false
null
t3_1lsiov1
/r/LocalLLaMA/comments/1lsiov1/9950x3d_rtx_5090_192_gb_ram_reasonable/
false
false
self
0
null
PC build for LLM research
3
I am planning to build a PC for LLM research: not very big models, but at least 3-7B model training and inference on 13-30B models. I am planning to start with a 5070 Ti 16GB and probably add another 5070 Ti after a month. Any suggestions around the RAM? Do I really need a top-notch CPU?
2025-07-05T19:49:58
https://www.reddit.com/r/LocalLLaMA/comments/1lsiffa/pc_build_for_llm_research/
Financial_Web530
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsiffa
false
null
t3_1lsiffa
/r/LocalLLaMA/comments/1lsiffa/pc_build_for_llm_research/
false
false
self
3
null
Open-sourced image description models (Object detection, OCR, Image processing, CNN) make LLMs SOTA in AI agentic benchmarks like Android World and Android Control
22
Yesterday, I finished evaluating my Android agent model, deki, on two separate benchmarks: Android Control and Android World. For both benchmarks I used a subset of the dataset without fine-tuning. The results show that image description models like deki enable large LLMs (like GPT-4o, GPT-4.1, and Gemini 2.5) to beco...
2025-07-05T19:31:10
https://www.reddit.com/gallery/1lsi0gj
Old_Mathematician107
reddit.com
1970-01-01T00:00:00
0
{}
1lsi0gj
false
null
t3_1lsi0gj
/r/LocalLLaMA/comments/1lsi0gj/opensourced_image_description_models_object/
false
false
default
22
null
Llama server completion not working correctly
1
I have a desktop on my LAN that I'm using for inference. I start ./llama-server on that desktop, and then submit queries using curl. However, when I submit queries using the "prompt" field, I get replies back that look like foundation model completions, rather than instruct completions. I assume this is because somethi...
2025-07-05T19:27:21
https://www.reddit.com/r/LocalLLaMA/comments/1lshxep/llama_server_completion_not_working_correctly/
claytonkb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lshxep
false
null
t3_1lshxep
/r/LocalLLaMA/comments/1lshxep/llama_server_completion_not_working_correctly/
false
false
self
1
null
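The behavior described above, completion-style replies from the `prompt` field, is expected: llama-server's raw `/completion` endpoint feeds the prompt to the model verbatim without applying the chat template, while the OpenAI-compatible `/v1/chat/completions` endpoint applies the model's template for you. A sketch of the two request bodies (host and port are placeholders):

```python
import json

# Raw completion: the string goes to the model as-is, so an instruct model
# answers like a base model unless you bake the chat template in yourself.
raw_body = {"prompt": "Why is the sky blue?", "n_predict": 128}

# Chat completion: llama-server wraps the messages in the model's chat
# template before inference.
chat_body = {
    "messages": [{"role": "user", "content": "Why is the sky blue?"}],
    "max_tokens": 128,
}

# POST raw_body  to http://<lan-host>:8080/completion
# POST chat_body to http://<lan-host>:8080/v1/chat/completions
print(json.dumps(chat_body))
```

Switching the curl calls to the chat endpoint should produce instruct-style replies without any client-side template handling.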
Build vLLM on CUDA 12.9, Kernel 6.15.2, NVIDIA 575.64, PyTorch 2.9cu129 Nightly
0
Build vLLM on CUDA 12.9, Kernel 6.15.2, NVIDIA 575.64, PyTorch 2.9cu129 Nightly Let's fucking go!!!!!!!!
2025-07-05T19:03:45
https://www.reddit.com/r/LocalLLaMA/comments/1lshe4q/build_vllm_on_cuda_129_kernel_6152_nvidia_57564/
Sorry_Ad191
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lshe4q
false
null
t3_1lshe4q
/r/LocalLLaMA/comments/1lshe4q/build_vllm_on_cuda_129_kernel_6152_nvidia_57564/
false
false
self
0
null
AI desktop configuration recommendations for RAG and LLM training
3
I'm trying to configure a workstation that I can do some AI dev work, in particular, RAG qualitative and quantitative analysis. I also need a system that I can use to prep many unstructured documents like pdfs and powerpoints, mostly marketing material for ingestion. I'm not quite sure as to how robust a system I shou...
2025-07-05T18:51:47
https://www.reddit.com/r/LocalLLaMA/comments/1lsh4a8/ai_desktop_configuration_recommendations_for_rag/
Square-Onion-1825
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsh4a8
false
null
t3_1lsh4a8
/r/LocalLLaMA/comments/1lsh4a8/ai_desktop_configuration_recommendations_for_rag/
false
false
https://b.thumbs.redditm…wCI7CAp8OSxs.jpg
3
null
Successfully Built My First PC for AI (Sourcing Parts from Alibaba - Under $1500!)
273
Building a PC was always one of those "someday" projects I never got around to. As a long-time Mac user, I honestly never had a real need for it. That all changed when I stumbled into the world of local AI. Suddenly, my 16GB Mac wasn't just slow, it was a hard bottleneck. So, I started mapping out what this new machin...
2025-07-05T18:39:10
https://www.reddit.com/r/LocalLLaMA/comments/1lsgtvy/successfully_built_my_first_pc_for_ai_sourcing/
Lowkey_LokiSN
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsgtvy
false
null
t3_1lsgtvy
/r/LocalLLaMA/comments/1lsgtvy/successfully_built_my_first_pc_for_ai_sourcing/
false
false
https://b.thumbs.redditm…kxsiWZD3Lxxk.jpg
273
{'enabled': False, 'images': [{'id': 'qkqp42ZUQM3ud5lixeAGPwFgRk03v_VKVBUbGROr91s', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/qkqp42ZUQM3ud5lixeAGPwFgRk03v_VKVBUbGROr91s.jpeg?width=108&crop=smart&auto=webp&s=9bff86ba77f876ea113af80eda2c7c1173584b76', 'width': 108}, {'height': 216, 'url': ...
SoTA Audio native models?
1
I know this is locallama but what is the SoTA speech to speech model right now? We've been testing with gemini 2.5 audio native preview at work and while it still has some issues, it's looking real good. Ive been limited to Gemini cause we got free GCP credits to play with at work.
2025-07-05T17:57:28
https://www.reddit.com/r/LocalLLaMA/comments/1lsfv8c/sota_audio_native_models/
Theboyscampus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsfv8c
false
null
t3_1lsfv8c
/r/LocalLLaMA/comments/1lsfv8c/sota_audio_native_models/
false
false
self
1
null
Help setting up an uncensored local LLM for a text-based RPG adventure / DMing
4
I apologize if this is the Nth time something like this was posted, but I am just at my wit's end. As the title says, I need help setting up an uncensored local LLM for the purpose of running / DMing a single player text-based RPG adventure. I have tried online services like Kobold AI Lite, etc. but I always encounter ...
2025-07-05T17:50:29
https://www.reddit.com/r/LocalLLaMA/comments/1lsfpi0/help_setting_up_an_uncensored_local_llm_for_a/
tac7878
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsfpi0
false
null
t3_1lsfpi0
/r/LocalLLaMA/comments/1lsfpi0/help_setting_up_an_uncensored_local_llm_for_a/
false
false
self
4
null
I created this tool I named ReddSummary.com – just paste a link and boom you got the summary
13
I developed a web app and Chrome extension that summarize long Reddit thread discussions using ChatGPT; it helps users analyze thread discussions and their sentiment.
2025-07-05T17:46:40
https://i.redd.it/2exxosoue3bf1.png
Himanshu507
i.redd.it
1970-01-01T00:00:00
0
{}
1lsfmcj
false
null
t3_1lsfmcj
/r/LocalLLaMA/comments/1lsfmcj/i_created_this_tool_i_named_reddsummarycom_just/
false
false
default
13
{'enabled': True, 'images': [{'id': '2exxosoue3bf1', 'resolutions': [{'height': 49, 'url': 'https://preview.redd.it/2exxosoue3bf1.png?width=108&crop=smart&auto=webp&s=1fd832cbd0013c8cbdcec784df645584bcc52a5d', 'width': 108}, {'height': 98, 'url': 'https://preview.redd.it/2exxosoue3bf1.png?width=216&crop=smart&auto=webp...
Anyone built a home 2× A100 SXM4 node?
9
I’m doing self-funded AI research and recently got access to 2× NVIDIA A100 SXM4 GPUs. I want to build a quiet, stable node at home to run local models and training workloads — no cloud. Has anyone here actually built a DIY system with A100 SXM4s (not PCIe)? If so: What HGX carrier board or server chassis did you use...
2025-07-05T17:45:34
https://www.reddit.com/r/LocalLLaMA/comments/1lsflii/anyone_built_a_home_2_a100_sxm4_node/
Fun_Nefariousness228
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsflii
false
null
t3_1lsflii
/r/LocalLLaMA/comments/1lsflii/anyone_built_a_home_2_a100_sxm4_node/
false
false
self
9
null
What motherboard for 4xK80s?
1
I’m looking to build a budget experimentation machine for inference and perhaps training some multimodal models and such. I saw that there are lots of refurbished K80s available on eBay for quite cheap that appear to be in ok condition. I’m wondering what kind of backbone I would need to support say 4 or even 8x of the...
2025-07-05T17:42:38
https://www.reddit.com/r/LocalLLaMA/comments/1lsfj67/what_motherboard_for_4xk80s/
itsacommon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsfj67
false
null
t3_1lsfj67
/r/LocalLLaMA/comments/1lsfj67/what_motherboard_for_4xk80s/
false
false
self
1
null
Finetuning a youtuber persona without expensive hardware or buying expensive cloud computing
0
So, I want to finetune a model, good or bad, into a YouTuber persona. My idea is I will download YouTube videos of that YouTuber and generate transcripts, and poof! I have the YouTuber data; now I just need to train the model on that data. My idea is Gemini has Gems, can that be useful? If not, can I achieve my goal for fr...
2025-07-05T17:13:12
https://www.reddit.com/r/LocalLLaMA/comments/1lsevb1/finetuning_a_youtuber_persona_without_expensive/
Khushalgogia
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsevb1
false
null
t3_1lsevb1
/r/LocalLLaMA/comments/1lsevb1/finetuning_a_youtuber_persona_without_expensive/
false
false
self
0
null
Open Source AI Finder & Newsletter
0
2025-07-05T17:04:22
https://www.coding-dude.com/wp/open-source-ai/
psd-dude
coding-dude.com
1970-01-01T00:00:00
0
{}
1lsenvs
false
null
t3_1lsenvs
/r/LocalLLaMA/comments/1lsenvs/open_source_ai_finder_newsletter/
false
false
default
0
null
New app for locally running AI models on your Android smartphone
18
Hi. I made a new app for running AI models locally on an Android smartphone. I am interested in your opinion. https://play.google.com/store/apps/details?id=com.romankryvolapov.offlineailauncher
2025-07-05T16:32:35
https://www.reddit.com/r/LocalLLaMA/comments/1lsdxc2/new_app_for_locally_running_ai_models_on_android/
RomanKryvolapov
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsdxc2
false
null
t3_1lsdxc2
/r/LocalLLaMA/comments/1lsdxc2/new_app_for_locally_running_ai_models_on_android/
false
false
self
18
{'enabled': False, 'images': [{'id': 'mzou-cvKbo89yySFKfh6cxlVrw7VRIQEkdJHPKwwKng', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/mzou-cvKbo89yySFKfh6cxlVrw7VRIQEkdJHPKwwKng.jpeg?width=108&crop=smart&auto=webp&s=50a2358f9b890643e86778f4cba80f28e095ae0b', 'width': 108}, {'height': 216, 'url': ...
Unethical
0
Obviously I have heard about this memed tweet, but I just saw that he said it is „unethical” … how do they even dare to talk about ethics? I can't, it's so sad that the company that started the AI revolution is OAI
2025-07-05T16:20:29
https://x.com/jquinonero/status/1940926946705395943?s=46
martinmazur
x.com
1970-01-01T00:00:00
0
{}
1lsdnin
false
null
t3_1lsdnin
/r/LocalLLaMA/comments/1lsdnin/unethical/
false
false
default
0
{'enabled': False, 'images': [{'id': 'wpHkx4IBY4f2SvtmsnhVr4jXBiXjBh0wFp1GAD4Ofq8', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Mr6r0zW37m2NxYrQHzvcRoXtyemAYa2PoklKZO-79BQ.jpg?width=108&crop=smart&auto=webp&s=36f51b1de6a5fde62dc977e8af18e795bb6d414c', 'width': 108}], 'source': {'height': 20...
Llama-4-Maverick 402B on a oneplus 13
153
Here's Llama-4-Maverick-17B-128E-Instruct on a oneplus 13, which used UFS 4.0 storage. Any phone will work, as long as the RAM size is sufficient for context and repeating layers. (8-12gb) Here's the command used: `./llama-cli -m Llama-4-Maverick-17B-128E-Instruct-UD-IQ1_M-00001-of-00003.gguf -t 6 -p "hi" -c 2048`...
2025-07-05T16:15:49
https://v.redd.it/tletuj5ov2bf1
Aaaaaaaaaeeeee
/r/LocalLLaMA/comments/1lsdjnb/llama4maverick_402b_on_a_oneplus_13/
1970-01-01T00:00:00
0
{}
1lsdjnb
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/tletuj5ov2bf1/DASHPlaylist.mpd?a=1754453759%2CZjI5NmI0OWZmOGYzZGE2NGM4NDU0YTRlOTgyOGY2YzUyNmRjYTIzMWUzN2I5NTc2ZDZlNGU1MmY5NmZkZGM3ZQ%3D%3D&v=1&f=sd', 'duration': 234, 'fallback_url': 'https://v.redd.it/tletuj5ov2bf1/DASH_720.mp4?source=fallback', 'h...
t3_1lsdjnb
/r/LocalLLaMA/comments/1lsdjnb/llama4maverick_402b_on_a_oneplus_13/
false
false
https://external-preview…36a4b813db77c3a7
153
{'enabled': False, 'images': [{'id': 'aGFxcDNhNW92MmJmMekOAVV8IqMqWnkLuX31i0q6lfgmqiPYm6_ltR2U10YG', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/aGFxcDNhNW92MmJmMekOAVV8IqMqWnkLuX31i0q6lfgmqiPYm6_ltR2U10YG.png?width=108&crop=smart&format=pjpg&auto=webp&s=66fb299c22385809a3328d96e048aabe44be...
Finding Uncensored models for some social media project
0
I am currently working on something related to social media data and want to test censored and uncensored models' results on the same data. Share models, and if you have used them, how good they are.
2025-07-05T16:03:59
https://www.reddit.com/r/LocalLLaMA/comments/1lsd9t4/finding_uncensored_models_for_some_social_media/
Unlikely-Chicken3286
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsd9t4
false
null
t3_1lsd9t4
/r/LocalLLaMA/comments/1lsd9t4/finding_uncensored_models_for_some_social_media/
false
false
self
0
null
GPU Choice for r730XD
0
I have an r730XD that I'm looking to convert into an LLM server, mostly just inference, maybe some training in the future, and I'm stuck on deciding on a GPU. The two I'm currently considering are the RTX 2000E Ada (16GB) or RTX 3090 (24GB). Both are about the same price. The 2000E is much newer, has a higher CUDA ve...
2025-07-05T15:32:23
https://www.reddit.com/r/LocalLLaMA/comments/1lsck2e/gpu_choice_for_r730xd/
gat0r87
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsck2e
false
null
t3_1lsck2e
/r/LocalLLaMA/comments/1lsck2e/gpu_choice_for_r730xd/
false
false
self
0
null
When Should We Expect Affordable Hardware That Will Run Large LLMs With Usable Speed?
190
It's been years since local models started gaining traction and hobbyists began experimenting at home with cheaper hardware like multiple 3090s and old DDR4 servers. But none of these solutions have been good enough, with multi-GPU setups not having enough RAM for large models such as DeepSeek and old servers not having usable speeds. Whe...
2025-07-05T14:44:47
https://www.reddit.com/r/LocalLLaMA/comments/1lsbhzs/when_should_we_expect_affordable_hardware_that/
spiritxfly
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsbhzs
false
null
t3_1lsbhzs
/r/LocalLLaMA/comments/1lsbhzs/when_should_we_expect_affordable_hardware_that/
false
false
self
190
null
Looking for open-source tool to blur entire bodies by gender in videos/images
0
I am looking for an open‑source AI tool that can run locally on my computer (CPU only, no GPU) and process videos and images with the following functionality: 1. The tool should take a video or image as input and output the same video/image with these options for blurring: * Blur the entire body of all men. * Bl...
2025-07-05T14:21:08
https://www.reddit.com/r/LocalLLaMA/comments/1lsazjq/looking_for_opensource_tool_to_blur_entire_bodies/
DayOk2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsazjq
false
null
t3_1lsazjq
/r/LocalLLaMA/comments/1lsazjq/looking_for_opensource_tool_to_blur_entire_bodies/
false
false
self
0
null
Utilize iGPU (AMD Radeon 780m) even if the dGPU is running via MUX switch
3
Hello! I'm wondering if it is possible to use the iGPU for inference in Windows while the dGPU is online and connected to the display. The whole idea is that I can use the idling iGPU for AI tasks (small 7B models). The MUX switch itself does not limit the iGPU for general tasks (not related to the video renderin...
2025-07-05T13:51:24
https://www.reddit.com/r/LocalLLaMA/comments/1lsaczg/utilize_igpu_amd_radeon_780m_even_if_the_dgpu_is/
panther_ra
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lsaczg
false
null
t3_1lsaczg
/r/LocalLLaMA/comments/1lsaczg/utilize_igpu_amd_radeon_780m_even_if_the_dgpu_is/
false
false
self
3
{'enabled': False, 'images': [{'id': '3AWK6ga3cDp-UjrXq3T9-DGYbB6vgHQDwg1qQQM3DFc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3AWK6ga3cDp-UjrXq3T9-DGYbB6vgHQDwg1qQQM3DFc.png?width=108&crop=smart&auto=webp&s=d5960d476ea4194f61592e054c2304a93e90b6cf', 'width': 108}, {'height': 108, 'url': 'h...
Fine tuning an LLM decoder for the first time, this is what I am going to do
1
**General information**: I am writing a paper that introduces a method for achieving higher-quality data, which may be relevant to training MLLMs in a specific domain. I plan to show that the method works by (1) collecting data using this new method, (2) fine-tuning an MLLM (or more than one), and (3) showing an improv...
2025-07-05T13:13:00
https://www.reddit.com/r/LocalLLaMA/comments/1ls9lfl/fine_tuning_an_llm_decoder_for_the_first_time/
David202023
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ls9lfl
false
null
t3_1ls9lfl
/r/LocalLLaMA/comments/1ls9lfl/fine_tuning_an_llm_decoder_for_the_first_time/
false
false
self
1
null
Which open source LLM has the most genuine sense of humor?
28
I'm genuinely struggling with everything out there in terms of making me smile and general joke quality. If there is such a model, at what settings should it run (temp/top_k, etc.)?
2025-07-05T13:10:47
https://www.reddit.com/r/LocalLLaMA/comments/1ls9jvu/which_open_source_llm_has_the_most_genuine_sense/
UltrMgns
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ls9jvu
false
null
t3_1ls9jvu
/r/LocalLLaMA/comments/1ls9jvu/which_open_source_llm_has_the_most_genuine_sense/
false
false
self
28
null
Apple MLX Quantizations Royal Rumble 🔥
16
Qwen3-8B model using Winogrande as benchmark. DWQ and 5bit rule! 🥇 dwq – 68.82% 🥈 5bit – 68.51% 🥉 6bit – 68.35% bf16 – 67.64% dynamic – 67.56% 8bit – 67.56% 4bit – 66.30% 3bit – 63.85% https://preview.redd.it/95nyy1fby1bf1.png?width=1979&format=png&auto=webp&s=d6402294cedb1bdfc338ea34983203e711818...
2025-07-05T12:50:36
https://www.reddit.com/r/LocalLLaMA/comments/1ls95oj/apple_mlx_quantizations_royal_rumble/
ifioravanti
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ls95oj
false
null
t3_1ls95oj
/r/LocalLLaMA/comments/1ls95oj/apple_mlx_quantizations_royal_rumble/
false
false
https://a.thumbs.redditm…6DYYAentaN_0.jpg
16
null
Why does grad norm sink to 0 (at least I think) randomly during Unsloth full finetuning?
2
Need help: I am running a series of full fine-tuning runs on Llama 2 7B HF with Unsloth. For some time it was working just fine, and then this happened. I didn't notice until after the training was completed. I was sure of the training script because I had previously executed it with a slightly different setting (I modifie...
2025-07-05T12:44:49
https://www.reddit.com/r/LocalLLaMA/comments/1ls91w3/why_do_grad_norm_sink_to_0_at_least_i_think/
Old-Acanthisitta-574
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ls91w3
false
null
t3_1ls91w3
/r/LocalLLaMA/comments/1ls91w3/why_do_grad_norm_sink_to_0_at_least_i_think/
false
false
https://a.thumbs.redditm…ikPkjj1sphR8.jpg
2
null
Are there any autoregressive image gen models I can run locally on a 9070 XT/RAM?
2
Title says it all: are there any models that work like GPT Image 1 that I can run on an AMD GPU or in RAM?
2025-07-05T12:30:25
https://www.reddit.com/r/LocalLLaMA/comments/1ls8sk9/are_there_any_autoregressive_image_gen_models_i/
jojokingxp
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ls8sk9
false
null
t3_1ls8sk9
/r/LocalLLaMA/comments/1ls8sk9/are_there_any_autoregressive_image_gen_models_i/
false
false
self
2
null
Aveni Labs releases FinLLM technical report: a 7B domain-specific model for financial services outperforming some frontier LLMs
15
Just read the [FinLLM technical report](https://aveni.ai/wp-content/uploads/2025/05/Aveni-Detect-Combined-Case-Study.pdf) from Aveni Labs. It’s a 7B parameter language model built specifically for UK financial services, trained with regulatory alignment and fine-tuned for tasks like compliance monitoring, adviser QA, a...
2025-07-05T12:04:41
https://www.reddit.com/r/LocalLLaMA/comments/1ls8c2s/aveni_labs_releases_finllm_technical_report_a_7b/
Ok-Cryptographer9361
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ls8c2s
false
null
t3_1ls8c2s
/r/LocalLLaMA/comments/1ls8c2s/aveni_labs_releases_finllm_technical_report_a_7b/
false
false
self
15
null
Multi GPUs?
3
What's the current state of multi-GPU use in local UIs? For example, GPUs such as 2x RX 570/580/GTX 1060, GTX 1650, etc. I ask for future reference about the possibility of doubling the VRAM amount, since some of these can still be found for half the price of an RTX. In case it's possible, pairing an AMD GPU wit...
2025-07-05T11:37:42
https://www.reddit.com/r/LocalLLaMA/comments/1ls7vmb/multi_gpus/
WEREWOLF_BX13
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ls7vmb
false
null
t3_1ls7vmb
/r/LocalLLaMA/comments/1ls7vmb/multi_gpus/
false
false
self
3
null
local "deep research"?
1
Hi all, are there open-source tools, supporting local inference, that can perform a "deep research" function similar to Perplexity's "research" or Gemini's "deep research"? I'm getting much better results from the "deep research" tools compared to the basic ones, so obviously : ) I want to run that kind of search al...
2025-07-05T11:28:44
https://www.reddit.com/r/LocalLLaMA/comments/1ls7qb1/local_deep_research/
ssq12345
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ls7qb1
false
null
t3_1ls7qb1
/r/LocalLLaMA/comments/1ls7qb1/local_deep_research/
false
false
self
1
{'enabled': False, 'images': [{'id': 'aR8aWIKSGmq3vAJ2oYeFzLYda1yicHgTyKj-StE2ljE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/aR8aWIKSGmq3vAJ2oYeFzLYda1yicHgTyKj-StE2ljE.png?width=108&crop=smart&auto=webp&s=71326c2a04f42c25d01d5ef4a0a866f7bbf25ac7', 'width': 108}, {'height': 108, 'url': 'h...
Impact of PCIe 5.0 Bandwidth on GPU Content Creation Performance
57
2025-07-05T10:43:05
https://www.pugetsystems.com/labs/articles/impact-of-pcie-5-0-bandwidth-on-gpu-content-creation-performance/
d5dq
pugetsystems.com
1970-01-01T00:00:00
0
{}
1ls70r2
false
null
t3_1ls70r2
/r/LocalLLaMA/comments/1ls70r2/impact_of_pcie_50_bandwidth_on_gpu_content/
false
false
default
57
{'enabled': False, 'images': [{'id': 'Hmgn73CQ0ArpZT9jmmMJLBLX21JxLwuOBVd0t3yUiJU', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Hmgn73CQ0ArpZT9jmmMJLBLX21JxLwuOBVd0t3yUiJU.png?width=108&crop=smart&auto=webp&s=41313f8558078bc1cb1b64de29b33e20187ee0b2', 'width': 108}, {'height': 121, 'url': 'h...
Running GGUF model on iOS with local API
3
I'm looking for an iOS app where I can run a local model (e.g. Qwen3-4B) that provides an Ollama-like API I can connect to from other apps. As the iPhone 16/iPad are quite fast at prompt processing and token generation with such small models, and very power efficient, I would like to test some use cases. (If someone k...
2025-07-05T09:45:01
https://www.reddit.com/r/LocalLLaMA/comments/1ls66qt/running_gguf_model_on_ios_with_local_api/
vistalba
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ls66qt
false
null
t3_1ls66qt
/r/LocalLLaMA/comments/1ls66qt/running_gguf_model_on_ios_with_local_api/
false
false
self
3
null
Asking LLMs data visualized as plots
2
Hi, I'm looking for an app (e.g. LM Studio) + LLM solution that allows me to visualize LLM-generated data. I often ask LLMs questions that return some form of numerical data. For example, I might ask "what's the world's population over time" or "what's the population by country in 2000", which might return me a table ...
2025-07-05T09:43:46
https://www.reddit.com/r/LocalLLaMA/comments/1ls663p/asking_llms_data_visualized_as_plots/
injeolmi-bingsoo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ls663p
false
null
t3_1ls663p
/r/LocalLLaMA/comments/1ls663p/asking_llms_data_visualized_as_plots/
false
false
self
2
{'enabled': False, 'images': [{'id': 'EmPlmkUgK-psVbhlsUSnHxl3YPY4gyS7RTnvxPH48b4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EmPlmkUgK-psVbhlsUSnHxl3YPY4gyS7RTnvxPH48b4.png?width=108&crop=smart&auto=webp&s=83f44c8691982ad244b150d65aff459fb5af56ed', 'width': 108}, {'height': 108, 'url': 'h...