Dataset schema (one row per post; string columns report observed length ranges, timestamps report observed min/max):

| column    | dtype          | observed range / classes                    |
|-----------|----------------|---------------------------------------------|
| title     | string         | length 1 to 300                             |
| score     | int64          | 0 to 8.54k                                  |
| selftext  | string         | length 0 to 41.5k                           |
| created   | timestamp[ns]  | 2023-04-01 04:30:41 to 2026-03-04 02:14:14  |
| url       | string         | length 0 to 878                             |
| author    | string         | length 3 to 20                              |
| domain    | string         | length 0 to 82                              |
| edited    | timestamp[ns]  | 1970-01-01 00:00:00 to 2026-02-19 14:51:53  |
| gilded    | int64          | 0 to 2                                      |
| gildings  | string         | 7 classes                                   |
| id        | string         | length 7                                    |
| locked    | bool           | 2 classes                                   |
| media     | string         | length 646 to 1.8k                          |
| name      | string         | length 10                                   |
| permalink | string         | length 33 to 82                             |
| spoiler   | bool           | 2 classes                                   |
| stickied  | bool           | 2 classes                                   |
| thumbnail | string         | length 4 to 213                             |
| ups       | int64          | 0 to 8.54k                                  |
| preview   | string         | length 301 to 5.01k                         |
title: 🎁 Free Toolkit: 5 Python Scripts to Automate Real-Life Tasks (Sort Files, Merge PDFs, Scrape News & More)
score: 1
selftext: [removed]
created: 2025-07-22T01:48:20
url: https://www.reddit.com/r/LocalLLaMA/comments/1m61ie4/free_toolkit_5_python_scripts_to_automate/
author: Itchy-Warning1127
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m61ie4
locked: false
media: null
name: t3_1m61ie4
permalink: /r/LocalLLaMA/comments/1m61ie4/free_toolkit_5_python_scripts_to_automate/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: {'enabled': False, 'images': [{'id': '6VsFbGz_uu_dzqEwrg1btm1FbkxcvjRcwh5M2xY3m-s', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/6VsFbGz_uu_dzqEwrg1btm1FbkxcvjRcwh5M2xY3m-s.png?width=108&crop=smart&auto=webp&s=d51535a7f74acb05a9a24f1fcaabe18b3a1d1b98', 'width': 108}, {'height': 121, 'url': 'h...

title: New qwen tested on Fiction.liveBench
score: 98
created: 2025-07-22T01:33:20
url: https://i.redd.it/9rynne03xbef1.png
author: fictionlive
domain: i.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m6172l
locked: false
media: null
name: t3_1m6172l
permalink: /r/LocalLLaMA/comments/1m6172l/new_qwen_tested_on_fictionlivebench/
spoiler: false
stickied: false
thumbnail: default
ups: 98
preview: {'enabled': True, 'images': [{'id': '9rynne03xbef1', 'resolutions': [{'height': 161, 'url': 'https://preview.redd.it/9rynne03xbef1.png?width=108&crop=smart&auto=webp&s=7c72f32848579ff8381bbc07e00d52af73ccb790', 'width': 108}, {'height': 322, 'url': 'https://preview.redd.it/9rynne03xbef1.png?width=216&crop=smart&auto=we...

title: Used A100 40GB just dropped below $2000, for those who care with caveat
score: 105
selftext: Unfortunately it's on SXM4, you will need a $600 adapter for this. but I am sure someone with enough motivation will figure out a way to drop it into a PCIe adapter to sell it as a complete package. It'll be an interesting piece of localllama HW.
created: 2025-07-22T00:50:04
url: https://www.reddit.com/r/LocalLLaMA/comments/1m60ahf/used_a100_40gb_just_dropped_below_2000_for_those/
author: --dany--
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m60ahf
locked: false
media: null
name: t3_1m60ahf
permalink: /r/LocalLLaMA/comments/1m60ahf/used_a100_40gb_just_dropped_below_2000_for_those/
spoiler: false
stickied: false
thumbnail: self
ups: 105
preview: null

title: Silly Tavern - MythoMax
score: 1
selftext: [removed]
created: 2025-07-22T00:43:46
url: https://www.reddit.com/r/LocalLLaMA/comments/1m605op/silly_tavern_mythomax/
author: That_Bowl8237
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m605op
locked: false
media: null
name: t3_1m605op
permalink: /r/LocalLLaMA/comments/1m605op/silly_tavern_mythomax/
spoiler: false
stickied: false
thumbnail: nsfw
ups: 1
preview: null

title: We asked Qwen3-235B-A22-Instruct-2507 for advice on how best to quantize itself to 4-bits for vLLM. Anyone who understands these things care to comment on its recommendations?
score: 0
selftext: The first thing we noticed is that the size estimates in Qwen's answer are incorrect: a 4-bit GPTQ will obviously not be 59GB in size. For reference, the 4-bit w4a16 quant of 235B we are currently testing consumes 88GB VRAM per GPU. Thus we are suspicious of the rest of Qwen's answer, but lack the domain-specific exp...
created: 2025-07-21T23:34:45
url: https://www.reddit.com/r/LocalLLaMA/comments/1m5ynit/we_asked_qwen3235ba22instruct2507_for_advice_on/
author: blackwell_tart
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m5ynit
locked: false
media: null
name: t3_1m5ynit
permalink: /r/LocalLLaMA/comments/1m5ynit/we_asked_qwen3235ba22instruct2507_for_advice_on/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null

title: The Observer Desktop App is Here! + Discord/Pushover Notifications!!
score: 30
selftext: TL;DR: This is a **massive** step forward for first-time users. You can now get everything up and running with a single .exe or .dmg download—no command line or Docker needed. It's never been easier to start building your own local, privacy-first screen-watching agents! Hey r/LocalLLaMA !! I am suuuper excited to ...
created: 2025-07-21T23:18:15
url: https://www.reddit.com/r/LocalLLaMA/comments/1m5y9wj/the_observer_desktop_app_is_here_discordpushover/
author: Roy3838
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m5y9wj
locked: false
media: null
name: t3_1m5y9wj
permalink: /r/LocalLLaMA/comments/1m5y9wj/the_observer_desktop_app_is_here_discordpushover/
spoiler: false
stickied: false
thumbnail: self
ups: 30
preview: {'enabled': False, 'images': [{'id': '-VzCfNi8ctqCoss6ttS1cBf0psUHAMSGYDmGAfW9QsA', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/-VzCfNi8ctqCoss6ttS1cBf0psUHAMSGYDmGAfW9QsA.png?width=108&crop=smart&auto=webp&s=2763f5b07d8000852738cc8bbf6420bc7a793d3e', 'width': 108}, {'height': 121, 'url': 'h...

title: What is the top model for coding?
score: 0
selftext: Been using mostly Claude Code, works great. Yet feels like Im starting to hit the limits of what it can do. Im wondering what others are using for coding? Last time I checked Gemini 2.5 Pro and o3 and o4, they did not felt on par with Claude, maybe things changed recently?
created: 2025-07-21T22:24:35
url: https://www.reddit.com/r/LocalLLaMA/comments/1m5x04m/what_is_the_top_model_for_coding/
author: estebansaa
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m5x04m
locked: false
media: null
name: t3_1m5x04m
permalink: /r/LocalLLaMA/comments/1m5x04m/what_is_the_top_model_for_coding/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null

title: What can I run on my 5090?
score: 0
selftext: Hi :) I'm a little concerned about the potential foolishness of feeding forever remembering cloud AIs with my thoughts every day, even if I don't say anything very personal or sensitive. I have an rtx 5090 (32 gb) What are the best local models I can run? Thanks
created: 2025-07-21T21:54:05
url: https://www.reddit.com/r/LocalLLaMA/comments/1m5w8yl/what_can_i_run_on_my_5090/
author: hurfery
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m5w8yl
locked: false
media: null
name: t3_1m5w8yl
permalink: /r/LocalLLaMA/comments/1m5w8yl/what_can_i_run_on_my_5090/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null

title: How long until we have an alternative to LM studio that's just as good?
score: 0
selftext: I really like LM studio. I think it's a pretty outstanding application that really works well for most basic purposes for a local model. User interface is pretty great, out of the box you can just download models and use it quite effectively. I liked that it was easier to set up than ollama and having to have some comm...
created: 2025-07-21T21:48:00
url: https://www.reddit.com/r/LocalLLaMA/comments/1m5w3kj/how_long_until_we_have_an_alternative_to_lm/
author: datascientist2964
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m5w3kj
locked: false
media: null
name: t3_1m5w3kj
permalink: /r/LocalLLaMA/comments/1m5w3kj/how_long_until_we_have_an_alternative_to_lm/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null

title: Now that Google and openai have both announced gold at the IMO 2025, how long until an open source model can match that?
score: 4
selftext: I really want it to be this year.
created: 2025-07-21T21:42:32
url: https://www.reddit.com/r/LocalLLaMA/comments/1m5vyku/now_that_google_and_openai_have_both_announced/
author: MrMrsPotts
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m5vyku
locked: false
media: null
name: t3_1m5vyku
permalink: /r/LocalLLaMA/comments/1m5vyku/now_that_google_and_openai_have_both_announced/
spoiler: false
stickied: false
thumbnail: self
ups: 4
preview: null

title: Now that Google and openai have both announced good at the IMO 2025, how long until an open source model can match that?
score: 1
selftext: [deleted]
created: 2025-07-21T21:41:59
author: [deleted]
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m5vy3g
locked: false
media: null
name: t3_1m5vy3g
permalink: /r/LocalLLaMA/comments/1m5vy3g/now_that_google_and_openai_have_both_announced/
spoiler: false
stickied: false
thumbnail: default
ups: 1
preview: null

title: Deepseek NEVER forgets (help please)
score: 0
selftext: I vibe coded a python script to run LlaVa locally. I have very little idea what I am doing and stored my conversion history in a temporary json file. I had some problems and ended up deleting the file and switched to Deepseek-r1. The problem? Deepseek somehow has access to one of my conversation history files and now,...
created: 2025-07-21T21:15:45
url: https://www.reddit.com/r/LocalLLaMA/comments/1m5va3j/deepseek_never_forgets_help_please/
author: ScopedFlipFlop
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m5va3j
locked: false
media: null
name: t3_1m5va3j
permalink: /r/LocalLLaMA/comments/1m5va3j/deepseek_never_forgets_help_please/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null

title: Im trying to make my own agent with openhands but I keep running into the same error.
score: 0
selftext: \*I'm mainly using ChatGPT for this so please try to ignore the fact that I don't understand muc.h\* Hi, I've been trying to build my own AI agent on my pc for the past day now. I keep running into the same error. every time I try to send a message, I get "BadRequestError: litlellm.BadRequestError: GetLLMProviderExcept...
created: 2025-07-21T21:12:58
url: https://www.reddit.com/r/LocalLLaMA/comments/1m5v7if/im_trying_to_make_my_own_agent_with_openhands_but/
author: HowdyCapybara
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m5v7if
locked: false
media: null
name: t3_1m5v7if
permalink: /r/LocalLLaMA/comments/1m5v7if/im_trying_to_make_my_own_agent_with_openhands_but/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null

title: Strong case for a 512GB Mac Studio?
score: 0
selftext: I'd like to run models locally (at my workplaces) and also refine models, and fortunately I'm not paying! I plan to get a Mac Studio with 80 core GPU and 256GB RAM. Is there any strong case that I'm missing for going with 512GB RAM?
created: 2025-07-21T20:58:49
url: https://www.reddit.com/r/LocalLLaMA/comments/1m5uu0t/strong_case_for_a_512gb_mac_studio/
author: ChevChance
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m5uu0t
locked: false
media: null
name: t3_1m5uu0t
permalink: /r/LocalLLaMA/comments/1m5uu0t/strong_case_for_a_512gb_mac_studio/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null

title: [Project Share] Built a 4K Instruction Dataset Based on SEC 6-K/8-K Filings (JSONL format, QLoRA-friendly)
score: 0
selftext: Hey everyone, I recently wrapped up a side project involving SEC filings, and thought some of you here might find it interesting or useful. I built a dataset of ~4,000 instruction-output samples based on real 6-K and 8-K filings. It’s structured in JSONL, QLoRA/Alpaca-style format (natural language instruction → clean...
created: 2025-07-21T20:45:43
url: https://www.reddit.com/r/LocalLLaMA/comments/1m5uhwc/project_share_built_a_4k_instruction_dataset/
author: Xairossss
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m5uhwc
locked: false
media: null
name: t3_1m5uhwc
permalink: /r/LocalLLaMA/comments/1m5uhwc/project_share_built_a_4k_instruction_dataset/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null

title: Custom 55M parameter LLM 5 epochs in from scratch on 625kb data set
score: 1
selftext: [removed]
created: 2025-07-21T20:14:52
url: https://i.redd.it/oan2t631caef1.png
author: Weak-Concern8915
domain: i.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m5to6v
locked: false
media: null
name: t3_1m5to6v
permalink: /r/LocalLLaMA/comments/1m5to6v/custom_55m_parameter_llm_5_epochs_in_from_scratch/
spoiler: false
stickied: false
thumbnail: default
ups: 1
preview: {'enabled': True, 'images': [{'id': 'oan2t631caef1', 'resolutions': [{'height': 71, 'url': 'https://preview.redd.it/oan2t631caef1.png?width=108&crop=smart&auto=webp&s=b360a6fdf1a2cb935f8e12c59d7486f5f610fded', 'width': 108}, {'height': 143, 'url': 'https://preview.redd.it/oan2t631caef1.png?width=216&crop=smart&auto=web...

title: 625KB data set training from scratch with custom 55M parameter model
score: 1
selftext: [removed]
created: 2025-07-21T20:11:41
url: https://i.redd.it/6gi01oxfbaef1.png
author: Weak-Concern8915
domain: i.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m5tl3g
locked: false
media: null
name: t3_1m5tl3g
permalink: /r/LocalLLaMA/comments/1m5tl3g/625kb_data_set_training_from_scratch_with_custom/
spoiler: false
stickied: false
thumbnail: default
ups: 1
preview: {'enabled': True, 'images': [{'id': '6gi01oxfbaef1', 'resolutions': [{'height': 71, 'url': 'https://preview.redd.it/6gi01oxfbaef1.png?width=108&crop=smart&auto=webp&s=d599b540e011a2b1f5485b904f47e850b4a492fe', 'width': 108}, {'height': 143, 'url': 'https://preview.redd.it/6gi01oxfbaef1.png?width=216&crop=smart&auto=web...

title: Why are base non-finetuned models so bad?
score: 0
selftext: I know that most platforms fine-tune their models and use a good system prompt, but I've tried Qwen3 32B locally and on [qwen.com](http://qwen.com) and the difference is so huge. Are there publicly available ready fine-tunes and system prompts I can use to improve the models locally?
created: 2025-07-21T20:07:37
url: https://www.reddit.com/r/LocalLLaMA/comments/1m5th6s/why_are_base_nonfinetuned_models_so_bad/
author: ThatIsNotIllegal
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m5th6s
locked: false
media: null
name: t3_1m5th6s
permalink: /r/LocalLLaMA/comments/1m5th6s/why_are_base_nonfinetuned_models_so_bad/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null

title: Why did Ollama stop shipping new models?
score: 8
selftext: I'm suprised in the fast paced world of ai runners and engines that ollama has let off the gas like this. Anyone have insight? Llama.cpp vllm are still rapidly releasing support for new models but maybe funding is slowing down for ai related oss startups? The pace of new models has slowed but does not fully account for...
created: 2025-07-21T19:31:43
url: https://www.reddit.com/r/LocalLLaMA/comments/1m5sj3h/why_did_ollama_stop_shipping_new_models/
author: Huge-Safety-1061
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m5sj3h
locked: false
media: null
name: t3_1m5sj3h
permalink: /r/LocalLLaMA/comments/1m5sj3h/why_did_ollama_stop_shipping_new_models/
spoiler: false
stickied: false
thumbnail: self
ups: 8
preview: null

title: AI 395+ 64GB vs 128GB?
score: 30
selftext: Looking at getting this machine for running local llms. New to running them locally. Wondering if 128GB is worth it, or if the larger models start becoming too slow to make the extra memory meaningful? I would love to hear some opinions.
created: 2025-07-21T19:18:31
url: https://www.reddit.com/r/LocalLLaMA/comments/1m5s6d1/ai_395_64gb_vs_128gb/
author: cfogrady
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m5s6d1
locked: false
media: null
name: t3_1m5s6d1
permalink: /r/LocalLLaMA/comments/1m5s6d1/ai_395_64gb_vs_128gb/
spoiler: false
stickied: false
thumbnail: self
ups: 30
preview: null

title: RTX 5090 (32GB VRAM) - Full Fine-Tuning: What Can I Expect?
score: 8
selftext: Hey r/LocalLLaMA, Just got an RTX 5090 with 32GB of VRAM and I'm looking to get into full fine-tuning LLMs locally. My main question is about the full fine-tuning capabilities with this GPU. I know 32GB is a lot, but full fine-tuning can be a VRAM hog. * What's the realistic largest model size (in billions of paramet...
created: 2025-07-21T18:59:50
url: https://www.reddit.com/r/LocalLLaMA/comments/1m5ro7s/rtx_5090_32gb_vram_full_finetuning_what_can_i/
author: celsowm
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m5ro7s
locked: false
media: null
name: t3_1m5ro7s
permalink: /r/LocalLLaMA/comments/1m5ro7s/rtx_5090_32gb_vram_full_finetuning_what_can_i/
spoiler: false
stickied: false
thumbnail: self
ups: 8
preview: null

title: OpenAI's IMO 'Gold Medal' is a Marketing Stunt, Not a Scientific Breakthrough.
score: 0
selftext: Hey r/LocalLLaMA, Like many of you, I've seen the headlines and the hype around OpenAI's claim of achieving a "gold medal" at the IMO. They're positioning this as a monumental leap towards AGI. But when you look past the slick PR, the entire claim crumbles under the slightest scrutiny. This isn't a story of a scienti...
created: 2025-07-21T18:50:11
url: https://www.reddit.com/r/LocalLLaMA/comments/1m5rext/openais_imo_gold_medal_is_a_marketing_stunt_not_a/
author: nekofneko
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m5rext
locked: false
media: null
name: t3_1m5rext
permalink: /r/LocalLLaMA/comments/1m5rext/openais_imo_gold_medal_is_a_marketing_stunt_not_a/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null

title: Exhausted man defeats AI model in world coding championship
score: 145
selftext: _A Polish programmer running on fumes recently accomplished what may soon become impossible: beating an advanced AI model from OpenAI in a head-to-head coding competition. The 10-hour marathon left him "completely exhausted."_ https://arstechnica.com/ai/2025/07/exhausted-man-defeats-ai-model-in-world-coding-championsh...
created: 2025-07-21T18:45:00
url: https://www.reddit.com/r/LocalLLaMA/comments/1m5r9ss/exhausted_man_defeats_ai_model_in_world_coding/
author: Educational_Sun_8813
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m5r9ss
locked: false
media: null
name: t3_1m5r9ss
permalink: /r/LocalLLaMA/comments/1m5r9ss/exhausted_man_defeats_ai_model_in_world_coding/
spoiler: false
stickied: false
thumbnail: self
ups: 145
preview: {'enabled': False, 'images': [{'id': 'blOzOsTs-z21YUWC-XZkszWY0Ligsy1VCK1fZxml6qo', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/blOzOsTs-z21YUWC-XZkszWY0Ligsy1VCK1fZxml6qo.jpeg?width=108&crop=smart&auto=webp&s=c56d051b4b4e63e5c6627f8639b6bc541ebe7a70', 'width': 108}, {'height': 121, 'url': '...

title: test
score: 0
selftext: test
created: 2025-07-21T18:34:02
url: https://www.reddit.com/r/LocalLLaMA/comments/1m5qzgs/test/
author: BroQuant
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m5qzgs
locked: false
media: null
name: t3_1m5qzgs
permalink: /r/LocalLLaMA/comments/1m5qzgs/test/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null

title: CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning
score: 6
selftext: Project Page: [CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning](https://deepreinforce-ai.github.io/cudal1_blog/) Code: [GitHub - deepreinforce-ai/CUDA-L1](https://github.com/deepreinforce-ai/CUDA-L1) Abstract >The exponential growth in demand for GPU computing resources, driven by the rap...
created: 2025-07-21T18:27:08
url: https://arxiv.org/abs/2507.14111
author: Formal_Drop526
domain: arxiv.org
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m5qsqx
locked: false
media: null
name: t3_1m5qsqx
permalink: /r/LocalLLaMA/comments/1m5qsqx/cudal1_improving_cuda_optimization_via/
spoiler: false
stickied: false
thumbnail: default
ups: 6
preview: null

title: How much raw power do I need to use mistral's devstral 2507 Q4_K_M?
score: 0
selftext: I'm wondering if I have what it takes to run this: **mistralai/devstral-small-2507 Q4\_K\_M - 24b** Here is my PC's hardware specs: --- PC Hardware Specifications --- Graphics Card: Name: NVIDIA GeForce RTX 5070 Ti VRAM: 4 GB Driver Version: 32.0.15.7688 ...
created: 2025-07-21T18:25:29
url: https://www.reddit.com/r/LocalLLaMA/comments/1m5qr6y/how_much_raw_power_do_i_need_to_use_mistrals/
author: datascientist2964
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m5qr6y
locked: false
media: null
name: t3_1m5qr6y
permalink: /r/LocalLLaMA/comments/1m5qr6y/how_much_raw_power_do_i_need_to_use_mistrals/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null

title: Qwen3 insane SimpleQA
score: 79
selftext: Why is no one talking about the insane simpleQA score for the new Qwen3 model? 54.3 OMG! How are they doijg this with a 235ba22b model?!
created: 2025-07-21T18:21:10
url: https://www.reddit.com/r/LocalLLaMA/comments/1m5qn1n/qwen3_insane_simpleqa/
author: gzzhongqi
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m5qn1n
locked: false
media: null
name: t3_1m5qn1n
permalink: /r/LocalLLaMA/comments/1m5qn1n/qwen3_insane_simpleqa/
spoiler: false
stickied: false
thumbnail: self
ups: 79
preview: null

title: Using ollama and claude to control Neu
score: 3
selftext: Here is a brief demo showing how one could use the new AI chat features in Neu called the magic hand. This system uses Llama 3.2 3b as a tool caller, and Claude Haiku 3.5 to generate the code but the code step could easily be replaced with a local model such as Qwen 3. I'm most using Claude because of the speed. It's s...
created: 2025-07-21T18:13:34
url: https://v.redd.it/lgjdwe0yo9ef1
author: kingroka
domain: v.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m5qflo
locked: false
media: {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/lgjdwe0yo9ef1/DASHPlaylist.mpd?a=1755713631%2CZTNkZGMwMjhlNTZmZjZkM2EwYmY3MWJjNjFjNjJlMjg4YmM3ZjI5NDMxMzUzMTNiZmMyMThmOWU1MjMxYjVhYQ%3D%3D&v=1&f=sd', 'duration': 74, 'fallback_url': 'https://v.redd.it/lgjdwe0yo9ef1/DASH_1080.mp4?source=fallback', 'h...
name: t3_1m5qflo
permalink: /r/LocalLLaMA/comments/1m5qflo/using_ollama_and_claude_to_control_neu/
spoiler: false
stickied: false
thumbnail: https://external-preview…bd745b1b393466e0
ups: 3
preview: {'enabled': False, 'images': [{'id': 'MGFjdHZlMHlvOWVmMfKUvQGyK-hk_9kIm7eyVblVWB99wibqoL0okUWLtxg2', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MGFjdHZlMHlvOWVmMfKUvQGyK-hk_9kIm7eyVblVWB99wibqoL0okUWLtxg2.png?width=108&crop=smart&format=pjpg&auto=webp&s=70d833416067508a44b90607113d94349d557...

title: Interesting new blog post from Lemonade team
score: 19
selftext: [https://www.amd.com/en/developer/resources/technical-articles/2025/rethinking-local-ai-lemonade-servers-python-advantage.html](https://www.amd.com/en/developer/resources/technical-articles/2025/rethinking-local-ai-lemonade-servers-python-advantage.html)
created: 2025-07-21T18:00:44
url: https://www.reddit.com/r/LocalLLaMA/comments/1m5q35o/interesting_new_blog_post_from_lemonade_team/
author: Smooth-Screen4148
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m5q35o
locked: false
media: null
name: t3_1m5q35o
permalink: /r/LocalLLaMA/comments/1m5q35o/interesting_new_blog_post_from_lemonade_team/
spoiler: false
stickied: false
thumbnail: self
ups: 19
preview: {'enabled': False, 'images': [{'id': 'tXAQH-2IuxJHSCb05V5OiFQN-j9xlst_M-d3k_TkoOc', 'resolutions': [{'height': 48, 'url': 'https://external-preview.redd.it/tXAQH-2IuxJHSCb05V5OiFQN-j9xlst_M-d3k_TkoOc.png?width=108&crop=smart&auto=webp&s=0d6ab69b2057c1ff74bd4fad9be640864917203c', 'width': 108}, {'height': 97, 'url': 'ht...

title: Before & after: redesigned the character catalog UI. What do you think?
score: 3
selftext: Hey r/LocalLLaMA, Last week, I shared some initial drafts of my platform's UI. Thanks to the amazing work of a designer friend, I'm back to show you the evolution from that first AI-generated concept to a mostly polished, human-crafted interface (still candidate, tho). As you can see, the difference is night and day!...
created: 2025-07-21T17:50:03
url: https://www.reddit.com/gallery/1m5psqj
author: RIPT1D3_Z
domain: reddit.com
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m5psqj
locked: false
media: null
name: t3_1m5psqj
permalink: /r/LocalLLaMA/comments/1m5psqj/before_after_redesigned_the_character_catalog_ui/
spoiler: false
stickied: false
thumbnail: https://b.thumbs.redditm…TPsvnn4bZncg.jpg
ups: 3
preview: null

title: Looking to possibly replace my ChatGPT subscription with running a local LLM. What local models match/rival 4o?
score: 0
selftext: I’m currently using ChatGPT 4o, and I’d like to explore the possibility of running a local LLM on my home server. I know VRAM is a really big factor and I’m considering purchasing two RTX 3090s for running a local LLM. What models would compete with GPT 4o?
created: 2025-07-21T17:43:47
url: https://www.reddit.com/r/LocalLLaMA/comments/1m5pmox/looking_to_possibly_replace_my_chatgpt/
author: ActuallyGeyzer
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m5pmox
locked: false
media: null
name: t3_1m5pmox
permalink: /r/LocalLLaMA/comments/1m5pmox/looking_to_possibly_replace_my_chatgpt/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null

title: Why not build instruct models that give you straight answers with no positivity bias and no bs?
score: 0
selftext: I have been wondering this for a while now - why is nobody building custom instruct versions from public base models that don't include the typical sycophantic behavior of official releases where every dumb idea the user has is just SO insightful? The most I see is some RP specific tunes, but for more general purpose a...
created: 2025-07-21T17:39:30
url: https://www.reddit.com/r/LocalLLaMA/comments/1m5pig4/why_not_build_instruct_models_that_give_you/
author: LagOps91
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m5pig4
locked: false
media: null
name: t3_1m5pig4
permalink: /r/LocalLLaMA/comments/1m5pig4/why_not_build_instruct_models_that_give_you/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null

title: RTX 5090 not recognized on Ubuntu — anyone else figure this out?
score: 4
selftext: Trying to get an RTX 5090 working on Ubuntu and hitting a wall. The system boots fine, BIOS sees the card, but Ubuntu doesn’t seem to know it exists. nvidia-smi comes up empty. Meanwhile, a 4090 in the same machine is working just fine. Here’s what I’ve tried so far: * Installed latest NVIDIA drivers from both apt an...
created: 2025-07-21T17:32:57
url: https://www.reddit.com/r/LocalLLaMA/comments/1m5pbxo/rtx_5090_not_recognized_on_ubuntu_anyone_else/
author: ate50eggs
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m5pbxo
locked: false
media: null
name: t3_1m5pbxo
permalink: /r/LocalLLaMA/comments/1m5pbxo/rtx_5090_not_recognized_on_ubuntu_anyone_else/
spoiler: false
stickied: false
thumbnail: self
ups: 4
preview: null

title: Qwen/Qwen3-235B-A22B-Instruct-2507 · Hugging Face
score: 79
created: 2025-07-21T17:32:34
url: https://huggingface.co/Qwen/Qwen3-235B-A22B-Instruct-2507
author: random-tomato
domain: huggingface.co
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m5pbj0
locked: false
media: null
name: t3_1m5pbj0
permalink: /r/LocalLLaMA/comments/1m5pbj0/qwenqwen3235ba22binstruct2507_hugging_face/
spoiler: false
stickied: false
thumbnail: default
ups: 79
preview: {'enabled': False, 'images': [{'id': 'XyDac6TnV0yjdA-C8ojiXDTxH6tgY_Cc33jnLmPWJ8g', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/XyDac6TnV0yjdA-C8ojiXDTxH6tgY_Cc33jnLmPWJ8g.png?width=108&crop=smart&auto=webp&s=41efd73e0b1f2f6245cc18321de9593d2f691f2a', 'width': 108}, {'height': 116, 'url': 'h...

title: Do not sleep on ERNIE-4.5-300B-A47B especially if you can't Kimi K2
score: 71
selftext: Kimi K2 is a beast! Both in performance and to run. Ernie is much smaller and easier to run. It's 47B active, so going to be a bit slower, however it performs quite well. I would call it K2's little brother, I think it got overshadowed by K2 especially since K2 was the claude sonnet 4 and open weight OpenAI killer...
created: 2025-07-21T17:27:10
url: https://www.reddit.com/r/LocalLLaMA/comments/1m5p69p/do_not_sleep_on_ernie45300ba47b_especially_if_you/
author: segmond
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m5p69p
locked: false
media: null
name: t3_1m5p69p
permalink: /r/LocalLLaMA/comments/1m5p69p/do_not_sleep_on_ernie45300ba47b_especially_if_you/
spoiler: false
stickied: false
thumbnail: self
ups: 71
preview: null

title: Gemini with Deep Think achieves Gold Medal at International Math Olympiad
score: 1
created: 2025-07-21T17:24:20
url: https://deepmind.google/discover/blog/advanced-version-of-gemini-with-deep-think-officially-achieves-gold-medal-standard-at-the-international-mathematical-olympiad/
author: VR-Person
domain: deepmind.google
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m5p3g8
locked: false
media: null
name: t3_1m5p3g8
permalink: /r/LocalLLaMA/comments/1m5p3g8/gemini_with_deep_think_achieves_gold_medal_at/
spoiler: false
stickied: false
thumbnail: default
ups: 1
preview: null

title: Qwen3-235B-A22B-2507!
score: 160
selftext: [let's go](https://preview.redd.it/7by2astxg9ef1.png?width=1920&format=png&auto=webp&s=ed2caaa4b854693b6fd46383a9626aefe87b0128)
created: 2025-07-21T17:19:58
url: https://www.reddit.com/r/LocalLLaMA/comments/1m5oz0h/qwen3235ba22b2507/
author: ken-senseii
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m5oz0h
locked: false
media: null
name: t3_1m5oz0h
permalink: /r/LocalLLaMA/comments/1m5oz0h/qwen3235ba22b2507/
spoiler: false
stickied: false
thumbnail: https://b.thumbs.redditm…GF-BtGXWSrIM.jpg
ups: 160
preview: null

title: Qwen/Qwen3-235B-A22B-Instruct-2507 · Hugging Face
score: 51
created: 2025-07-21T17:19:22
url: https://huggingface.co/Qwen/Qwen3-235B-A22B-Instruct-2507
author: Dark_Fire_12
domain: huggingface.co
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m5oyf5
locked: false
media: null
name: t3_1m5oyf5
permalink: /r/LocalLLaMA/comments/1m5oyf5/qwenqwen3235ba22binstruct2507_hugging_face/
spoiler: false
stickied: false
thumbnail: default
ups: 51
preview: {'enabled': False, 'images': [{'id': 'XyDac6TnV0yjdA-C8ojiXDTxH6tgY_Cc33jnLmPWJ8g', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/XyDac6TnV0yjdA-C8ojiXDTxH6tgY_Cc33jnLmPWJ8g.png?width=108&crop=smart&auto=webp&s=41efd73e0b1f2f6245cc18321de9593d2f691f2a', 'width': 108}, {'height': 116, 'url': 'h...

title: Qwen released Qwen3-235B-A22B-2507!
score: 135
selftext: Bye Qwen3-235B-A22B, hello Qwen3-235B-A22B-2507! After talking with the community and thinking it through, we decided to stop using hybrid thinking mode. Instead, we’ll train Instruct and Thinking models separately so we can get the best quality possible. Today, we’re releasing Qwen3-235B-A22B-Instruct-2507 and its FP...
created: 2025-07-21T17:18:57
url: https://i.redd.it/6csu4o4wg9ef1.jpeg
author: ResearchCrafty1804
domain: i.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m5oxyp
locked: false
media: null
name: t3_1m5oxyp
permalink: /r/LocalLLaMA/comments/1m5oxyp/qwen_released_qwen3235ba22b2507/
spoiler: false
stickied: false
thumbnail: default
ups: 135
preview: {'enabled': True, 'images': [{'id': '6csu4o4wg9ef1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/6csu4o4wg9ef1.jpeg?width=108&crop=smart&auto=webp&s=6baea679e548efcaaac74cffb282ff70f159dd23', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/6csu4o4wg9ef1.jpeg?width=216&crop=smart&auto=w...

title: Qwen3-235B-A22B-2507
score: 515
selftext: https://x.com/Alibaba_Qwen/status/1947344511988076547 New Qwen3-235B-A22B with thinking mode only –– no more hybrid reasoning.
created: 2025-07-21T17:18:14
url: https://i.redd.it/w2uh7h5lg9ef1.png
author: Mysterious_Finish543
domain: i.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m5ox8z
locked: false
media: null
name: t3_1m5ox8z
permalink: /r/LocalLLaMA/comments/1m5ox8z/qwen3235ba22b2507/
spoiler: false
stickied: false
thumbnail: default
ups: 515
preview: {'enabled': True, 'images': [{'id': 'w2uh7h5lg9ef1', 'resolutions': [{'height': 94, 'url': 'https://preview.redd.it/w2uh7h5lg9ef1.png?width=108&crop=smart&auto=webp&s=32d1f0ad8ac85f1518bb4e197d86320d03376d96', 'width': 108}, {'height': 189, 'url': 'https://preview.redd.it/w2uh7h5lg9ef1.png?width=216&crop=smart&auto=web...

title: Meet the Agent: The Brain Behind Gemini CLI
score: 0
selftext: Any Gemini CLI experts here? Does this article make sense to you? \--- [**Meet the Agent: The Brain Behind Gemini CLI**](https://softwaresecretweapons.com/opinion/gemini-cli-masterclass/gemini-cli-agents.html) In this article, we explore the "mind" behind Gemini CLI, showing how this LLM-powered agent uses a methodi...
created: 2025-07-21T17:17:48
url: https://www.reddit.com/r/LocalLLaMA/comments/1m5owur/meet_the_agent_the_brain_behind_gemini_cli/
author: SandFragrant6227
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m5owur
locked: false
media: null
name: t3_1m5owur
permalink: /r/LocalLLaMA/comments/1m5owur/meet_the_agent_the_brain_behind_gemini_cli/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null

title: Qwen3-235B-A22B-2507 Released!
score: 819
created: 2025-07-21T17:17:27
url: https://x.com/Alibaba_Qwen/status/1947344511988076547
author: pseudoreddituser
domain: x.com
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m5owi8
locked: false
media: null
name: t3_1m5owi8
permalink: /r/LocalLLaMA/comments/1m5owi8/qwen3235ba22b2507_released/
spoiler: false
stickied: false
thumbnail: default
ups: 819
preview: {'enabled': False, 'images': [{'id': 'DZ_J_yAfR8TLjLmR0s6ZMb4IqBdDowTQUhHZ335Z0r8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/0p_T6A15tLk4WquPAys3oZ34c-lcssBt_NcX5stv-2M.jpg?width=108&crop=smart&auto=webp&s=4c0c862769016dce18130a1fb791dbf78757f922', 'width': 108}, {'height': 121, 'url': 'h...

title: Facing some problems with Docling parser
score: 1
selftext: Hi guys, I had created a rag application but i made it for documents of PDF format only. I use PyMuPDF4llm to parse the PDF. But now I want to add the option for all the document formats, i.e, pptx, xlsx, csv, docx, and the image formats. I tried docling for this, since PyMuPDF4llm requires subscription to allow r...
created: 2025-07-21T17:12:16
url: https://www.reddit.com/r/LocalLLaMA/comments/1m5or7n/facing_some_problems_with_docling_parser/
author: ElectronicHoneydew86
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m5or7n
locked: false
media: null
name: t3_1m5or7n
permalink: /r/LocalLLaMA/comments/1m5or7n/facing_some_problems_with_docling_parser/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null

title: Running vllm on Nvidia 5090
score: 2
selftext: Hi everyone, I'm trying to run vllm on my nvidia 5090, possibly in a dockerized container. Before I start looking into this, has anyone already done this or has a good docker image to suggest that works out-of-the-box? If not, any tips? Thank you!!
created: 2025-07-21T17:05:49
url: https://www.reddit.com/r/LocalLLaMA/comments/1m5okz7/running_vllm_on_nvidia_5090/
author: Reasonable_Friend_77
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m5okz7
locked: false
media: null
name: t3_1m5okz7
permalink: /r/LocalLLaMA/comments/1m5okz7/running_vllm_on_nvidia_5090/
spoiler: false
stickied: false
thumbnail: self
ups: 2
preview: null

title: I messed up my brother's Llama AI workstation.. looking for advice
score: 3
selftext: I told my brother I can help him build an AI workstation since he wants to run Llama 3.1 locally and train it or build a RAG or whatever. Since he's a software guy and I'm a gamer who built 2 gaming PCs in my entire life, he agreed to trust me with picking the parts and putting everything together (I was shocked too). ...
created: 2025-07-21T17:04:47
url: https://www.reddit.com/r/LocalLLaMA/comments/1m5ojym/i_messed_up_my_brothers_llama_ai_workstation/
author: spherical-aspiration
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m5ojym
locked: false
media: null
name: t3_1m5ojym
permalink: /r/LocalLLaMA/comments/1m5ojym/i_messed_up_my_brothers_llama_ai_workstation/
spoiler: false
stickied: false
thumbnail: self
ups: 3
preview: null

title: I want to start with local AI
score: 0
selftext: I recently started thinking about using local AI, but I don't know where to start, what I need, or if I can afford it. So I wanted to ask a few questions. 1. What do I need at a minimum to use a local AI? 2. Where can I find it to download? 3. What do I need to know before I start? 4. What really changes from one mode...
created: 2025-07-21T16:37:48
url: https://www.reddit.com/r/LocalLLaMA/comments/1m5nt6s/i_want_to_start_with_local_ai/
author: Then-History2046
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m5nt6s
locked: false
media: null
name: t3_1m5nt6s
permalink: /r/LocalLLaMA/comments/1m5nt6s/i_want_to_start_with_local_ai/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null

title: I messed up my brother's AI workstation.. looking for advice
score: 1
selftext: [removed]
created: 2025-07-21T16:32:01
url: https://www.reddit.com/r/LocalLLaMA/comments/1m5nnq3/i_messed_up_my_brothers_ai_workstation_looking/
author: spherical-aspiration
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m5nnq3
locked: false
media: null
name: t3_1m5nnq3
permalink: /r/LocalLLaMA/comments/1m5nnq3/i_messed_up_my_brothers_ai_workstation_looking/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null

title: I messed up my brother's AI workstation.. looking for advice
score: 1
selftext: [removed]
created: 2025-07-21T16:28:32
url: https://www.reddit.com/r/LocalLLaMA/comments/1m5nkdd/i_messed_up_my_brothers_ai_workstation_looking/
author: spherical-aspiration
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m5nkdd
locked: false
media: null
name: t3_1m5nkdd
permalink: /r/LocalLLaMA/comments/1m5nkdd/i_messed_up_my_brothers_ai_workstation_looking/
spoiler: false
stickied: false
thumbnail: https://b.thumbs.redditm…BW2tLFKIi33Y.jpg
ups: 1
preview: null

title: I messed up my brother's AI workstation.. please help
score: 1
selftext: [removed]
created: 2025-07-21T16:25:32
url: https://www.reddit.com/r/LocalLLaMA/comments/1m5nhjv/i_messed_up_my_brothers_ai_workstation_please_help/
author: spherical-aspiration
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m5nhjv
locked: false
media: null
name: t3_1m5nhjv
permalink: /r/LocalLLaMA/comments/1m5nhjv/i_messed_up_my_brothers_ai_workstation_please_help/
spoiler: false
stickied: false
thumbnail: https://b.thumbs.redditm…tk_wIDxliD2I.jpg
ups: 1
preview: null

title: Heavily promoting the dishwashing benchmark
score: 12
selftext: Heavily promoting the dishwashing benchmark: Gemini 3.0 Ultra score: 0% GPT 5 Pro score: 0% Claude 5 Opus score: 0% grok 5 score:0% DeepSeek R2 score: 0% Qwen4 Max score: 0% Kimi K3 score: 0%
created: 2025-07-21T16:21:34
url: https://www.reddit.com/r/LocalLLaMA/comments/1m5ndsf/heavily_promoting_the_dishwashing_benchmark/
author: gtog-ima
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1m5ndsf
locked: false
media: null
name: t3_1m5ndsf
permalink: /r/LocalLLaMA/comments/1m5ndsf/heavily_promoting_the_dishwashing_benchmark/
spoiler: false
stickied: false
thumbnail: self
ups: 12
preview: null

EU is being left behind and it sucks!
33
Been seeing loads of developers here going on about how LLM-integrated IDEs like Windsurf and Cursor totally changed their coding. Of course, I was interested and wanted to give it a go. Spoke to work about it, and the boss just said "no way dude" unless GDPR compliance and PII protection could be guaranteed (we are a bigger team, includi...
2025-07-21T16:14:06
https://www.reddit.com/r/LocalLLaMA/comments/1m5n6lq/eu_is_being_left_behinde_and_it_sucks/
No-Refrigerator9508
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m5n6lq
false
null
t3_1m5n6lq
/r/LocalLLaMA/comments/1m5n6lq/eu_is_being_left_behinde_and_it_sucks/
false
false
self
33
null
Imminent release from Qwen tonight
442
https://x.com/JustinLin610/status/1947281769134170147 Maybe Qwen3-Coder, Qwen3-VL or a new QwQ? Will be open source / weight according to Chujie Zheng [here](https://x.com/ChujieZheng/status/1947307034980089905).
2025-07-21T16:08:22
https://i.redd.it/um0pwye549ef1.png
Mysterious_Finish543
i.redd.it
1970-01-01T00:00:00
0
{}
1m5n148
false
null
t3_1m5n148
/r/LocalLLaMA/comments/1m5n148/imminent_release_from_qwen_tonight/
false
false
default
442
{'enabled': True, 'images': [{'id': 'um0pwye549ef1', 'resolutions': [{'height': 53, 'url': 'https://preview.redd.it/um0pwye549ef1.png?width=108&crop=smart&auto=webp&s=ac602ae1dcb08fc594a97f2c504da7e053543395', 'width': 108}, {'height': 107, 'url': 'https://preview.redd.it/um0pwye549ef1.png?width=216&crop=smart&auto=web...
DMOSpeech 2: 2x faster + higher-quality F5-TTS from the author of StyleTTS 2
51
The author of StyleTTS 2 just released DMOSpeech2 - post-trained F5-TTS that's 2x faster with improved WER and stability. Looks very interesting and is open sourced, with training code coming soon. This is probably the last open source project we will see from the author for a while, but it looks very, very interesting.
2025-07-21T16:07:08
https://github.com/yl4579/DMOSpeech2
mrfakename0
github.com
1970-01-01T00:00:00
0
{}
1m5mzxt
false
null
t3_1m5mzxt
/r/LocalLLaMA/comments/1m5mzxt/dmospeech_2_2x_faster_higherquality_f5tts_from/
false
false
default
51
{'enabled': False, 'images': [{'id': 'QXB2u8-1CJvvYiUKNzbSUTq0ZcDZjw_7UaAxuMzOB74', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/QXB2u8-1CJvvYiUKNzbSUTq0ZcDZjw_7UaAxuMzOB74.png?width=108&crop=smart&auto=webp&s=55a782ebf4637087ab602e003b76d529f0b2b9b0', 'width': 108}, {'height': 108, 'url': 'h...
Incoming Qwen release
1
Maybe Qwen3-Coder, Qwen3-VL or a new QwQ?
2025-07-21T16:06:07
https://x.com/JustinLin610/status/1947281769134170147
Mysterious_Finish543
x.com
1970-01-01T00:00:00
0
{}
1m5myzb
false
null
t3_1m5myzb
/r/LocalLLaMA/comments/1m5myzb/incoming_qwen_release/
false
false
default
1
null
Help with choosing model to create bot that will talk like me.
1
Hello. I don't know much about LLM's, but I'd like to create a bot that tries to behave like me. I have around 3 years of my scrapped messages from various platforms. The idea is to teach a model with my dataset (messages) so it tries to understand how I behave, how I text and what words I use and then run a Discord bo...
2025-07-21T15:59:36
https://www.reddit.com/r/LocalLLaMA/comments/1m5msg4/help_with_choosing_model_to_create_bot_that_will/
deadyasiu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m5msg4
false
null
t3_1m5msg4
/r/LocalLLaMA/comments/1m5msg4/help_with_choosing_model_to_create_bot_that_will/
false
false
self
1
null
Help with choosing model to create bot that will talk like me.
2
Hello. I don't know much about LLM's, but I'd like to create a bot that tries to behave like me. I have around 3 years of my scrapped messages from various platforms. The idea is to teach a model with my dataset (messages) so it tries to understand how I behave, how I text and what words I use and then run a Discord bo...
2025-07-21T15:58:41
https://www.reddit.com/r/LocalLLaMA/comments/1m5mrmy/help_with_choosing_model_to_create_bot_that_will/
deadyasiu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m5mrmy
false
null
t3_1m5mrmy
/r/LocalLLaMA/comments/1m5mrmy/help_with_choosing_model_to_create_bot_that_will/
false
false
self
2
null
mistral-small-3.2 OCR accuracy way too bad with llama.cpp compared to ollama?
1
Hi, I have evaluated Mistral Small 3.2 for OCR tasks using Ollama. The accuracy has been very satisfying, though a bug causes it to run solely on CPU despite an RTX 4090 (about 5 t/s). So I switched to llama.cpp and get between 20-40 t/s using the model + mmproj from Unsloth. Both models are Q4\_K\_M. The accuracy is ...
2025-07-21T15:53:30
https://www.reddit.com/r/LocalLLaMA/comments/1m5mms1/mistralsmall32_ocr_accuracy_way_too_bad_with/
caetydid
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m5mms1
false
null
t3_1m5mms1
/r/LocalLLaMA/comments/1m5mms1/mistralsmall32_ocr_accuracy_way_too_bad_with/
false
false
self
1
null
Looking for Open Source STT Tool to Detect Script Reading Errors in Real Time
1
Hello everyone, I'm looking for an open source tool that could help me with real-time audio-to-text comparison. I want to capture the actor's live voice from Pro Tools and compare what they say against a provided script (PDF or TXT) - ideally in real time - to detect omissions, extra words, or misread lines. Even if...
2025-07-21T15:50:01
https://www.reddit.com/r/LocalLLaMA/comments/1m5mjoc/looking_for_open_source_stt_tool_to_detect_script/
hydrant_DnB
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m5mjoc
false
null
t3_1m5mjoc
/r/LocalLLaMA/comments/1m5mjoc/looking_for_open_source_stt_tool_to_detect_script/
false
false
self
1
null
SmolLM3-3B training logs and intermediate checkpoints
54
2025-07-21T15:31:24
https://i.redd.it/fcyltq1nx8ef1.png
eliebakk
i.redd.it
1970-01-01T00:00:00
0
{}
1m5m1et
false
null
t3_1m5m1et
/r/LocalLLaMA/comments/1m5m1et/smollm33b_training_logs_and_intermediate/
false
false
default
54
{'enabled': True, 'images': [{'id': 'fcyltq1nx8ef1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/fcyltq1nx8ef1.png?width=108&crop=smart&auto=webp&s=7312a97ee7ebc5986c39d4b65b8d2c7104ed72bb', 'width': 108}, {'height': 145, 'url': 'https://preview.redd.it/fcyltq1nx8ef1.png?width=216&crop=smart&auto=web...
Is there a better local TTS than Kokoro, even if its slower to generate?
13
I dont need near real time TTS at all, i am happy with even 0.5x realtime generation. Is there actually a better model than Kokoro but with the trade off of being slower/larger, or is Kokoro not only the best model but also really fast?
2025-07-21T15:26:34
https://www.reddit.com/r/LocalLLaMA/comments/1m5lwo6/is_there_a_better_local_tts_than_kokoro_even_if/
Sad_Holiday_7435
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m5lwo6
false
null
t3_1m5lwo6
/r/LocalLLaMA/comments/1m5lwo6/is_there_a_better_local_tts_than_kokoro_even_if/
false
false
self
13
null
UIGEN-X 8B supports React Headless, Flutter, React Native, Static Site Generators, Tauri, Vue, Gradio/Python, Tailwind, and prompt-based design. GGUF/GPTQ/MLX Available
33
[https://huggingface.co/Tesslate/UIGEN-X-8B](https://huggingface.co/Tesslate/UIGEN-X-8B) Just wanted to share a quick prompting guide for UIGEN-X (and that quants are available). Craft any system prompt (its not specific, so it will listen to you!) So type out your prompt like this: * \[Action\] \[UI type or page\]...
2025-07-21T15:09:43
https://www.reddit.com/gallery/1m5lgtr
United-Rush4073
reddit.com
1970-01-01T00:00:00
0
{}
1m5lgtr
false
null
t3_1m5lgtr
/r/LocalLLaMA/comments/1m5lgtr/uigenx_8b_supports_react_headless_flutter_react/
false
false
https://b.thumbs.redditm…ALhUrjGFYPnA.jpg
33
null
Best Local Models Per Budget Per Use Case
3
Hey all. I am new to AI and Ollama. I have a 5070 Ti and am running a bunch of 7B and a few 13B models, and am wondering what some of your favorite models are for programming, general use, or PDF/image parsing. I'm interested in models that are below and above my GPU's thresholds. My lower models hallucinate way too much...
2025-07-21T15:08:59
https://www.reddit.com/r/LocalLLaMA/comments/1m5lg47/best_local_models_per_budget_per_use_case/
Expensive-Fail3009
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m5lg47
false
null
t3_1m5lg47
/r/LocalLLaMA/comments/1m5lg47/best_local_models_per_budget_per_use_case/
false
false
self
3
null
Chatterbox tts microphone results
6
TL;DR: when voice cloning, use a high-end microphone, not the one built into your computer/AirPods. I have a child that has reading difficulties. They need to be able to read 15 books this coming year and I was lucky enough to be able to find out what those 15 books are. Many of them are from the 1920s and earlier. They...
2025-07-21T15:07:58
https://www.reddit.com/r/LocalLLaMA/comments/1m5lf6l/chatterbox_tts_microphone_results/
olympics2022wins
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m5lf6l
false
null
t3_1m5lf6l
/r/LocalLLaMA/comments/1m5lf6l/chatterbox_tts_microphone_results/
false
false
self
6
null
Seeking the newest coding models, especially for SQL?
0
Are there any newer models (<50 days old) that are well equipped to handle coding, especially in SQL? Hoping to find something under 24b. Currently running: * unsloth qwen3-14b Q4\_K\_S for general tasks * mistralai/mistral-small-3.2 for some stuff like writing * qwen2.5-coder-14b-instruct-q4\_k\_m - general coding t...
2025-07-21T15:00:54
https://www.reddit.com/r/LocalLLaMA/comments/1m5l8e4/seeking_the_newest_coding_models_especially_for/
datascientist2964
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m5l8e4
false
null
t3_1m5l8e4
/r/LocalLLaMA/comments/1m5l8e4/seeking_the_newest_coding_models_especially_for/
false
false
self
0
null
Office hours for cloud GPU
6
Hi everyone! I recently built an office hours page for anyone who has questions on cloud GPUs or GPUs in general. we are a bunch of engineers who've built at Google, Dropbox, Alchemy, Tesla etc. and would love to help anyone who has questions in this area. [https://computedeck.com/office-hours](https://computedeck.com...
2025-07-21T14:57:29
https://www.reddit.com/r/LocalLLaMA/comments/1m5l52r/office_hours_for_cloud_gpu/
No-Scarcity-8746
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m5l52r
false
null
t3_1m5l52r
/r/LocalLLaMA/comments/1m5l52r/office_hours_for_cloud_gpu/
false
false
self
6
null
Common folder for model storage?
3
Every runtime has its own folder for model storage, but in a lot of cases this means downloading the same model multiple times and using extra disk space. Do we think there could be a standard "common" location for models? e.g., why don't I have a "gguf" folder for everyone to use?
2025-07-21T14:53:01
https://www.reddit.com/r/LocalLLaMA/comments/1m5l0v3/common_folder_for_model_storage/
mherf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m5l0v3
false
null
t3_1m5l0v3
/r/LocalLLaMA/comments/1m5l0v3/common_folder_for_model_storage/
false
false
self
3
null
FULL Windsurf System Prompt and Tools [UPDATED, Wave 11]
7
(Latest update: 21/07/2025) I've just extracted the FULL Windsurf system prompt and internal tools (Wave 11 update). Over 500 lines (Around 9.6k tokens). You can check it out here: [https://github.com/x1xhlol/system-prompts-and-models-of-ai-tools/tree/main/Windsurf](https://github.com/x1xhlol/system-prompts-and-model...
2025-07-21T14:41:49
https://www.reddit.com/r/LocalLLaMA/comments/1m5kqk8/full_windsurf_system_prompt_and_tools_updated/
Independent-Box-898
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m5kqk8
false
null
t3_1m5kqk8
/r/LocalLLaMA/comments/1m5kqk8/full_windsurf_system_prompt_and_tools_updated/
false
false
self
7
null
What free TTS is the best to clone my voice for reading large portions of text?
0
I need it to be as similar as possible to my voice, so people on YouTube won't notice whether I'm using my voice or a TTS. Also, I only have an Nvidia GTX 1660 Super with 6 GB of VRAM, so I don't want to clone it every time I have a text - just clone it once with the best model, leave it for a couple of hours, and then use it eac...
2025-07-21T14:37:46
https://www.reddit.com/r/LocalLLaMA/comments/1m5kmxl/what_free_tts_is_the_best_to_clone_my_voice_for/
shaggy98
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m5kmxl
false
null
t3_1m5kmxl
/r/LocalLLaMA/comments/1m5kmxl/what_free_tts_is_the_best_to_clone_my_voice_for/
false
false
self
0
null
As the creators of react-native-executorch, we built an open-source app for testing ExecuTorch LLMs on mobile.
7
Hey everyone, We’re the team at Software Mansion, the creators and maintainers of the **react-native-executorch** library, which allows developers to run PyTorch ExecuTorch models inside React Native apps. After releasing the library, we realized a major hurdle for the community was the lack of a simple way to test, ...
2025-07-21T14:21:57
https://v.redd.it/wklalir6l8ef1
K4anan
v.redd.it
1970-01-01T00:00:00
0
{}
1m5k88s
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/wklalir6l8ef1/DASHPlaylist.mpd?a=1755699730%2CZTFhZDI2NGQyYTczMjg0NDM5YTM1N2RkNTA3Y2RlNzNlNTBjMTlhZDQ1MDlmYTEyNzRjYmNkZjhjM2M0ZTI3OA%3D%3D&v=1&f=sd', 'duration': 34, 'fallback_url': 'https://v.redd.it/wklalir6l8ef1/DASH_1080.mp4?source=fallback', 'h...
t3_1m5k88s
/r/LocalLLaMA/comments/1m5k88s/as_the_creators_of_reactnativeexecutorch_we_built/
false
false
https://external-preview…122845cd4e3bf129
7
{'enabled': False, 'images': [{'id': 'cHR6ODJocjZsOGVmMZV30yMKuQdI_rwvJdDlghpHCSx7AthMheshPNDWRdWC', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/cHR6ODJocjZsOGVmMZV30yMKuQdI_rwvJdDlghpHCSx7AthMheshPNDWRdWC.png?width=108&crop=smart&format=pjpg&auto=webp&s=5c123e9bd18470d9a529c9f9011d628a6e565...
What is the cheapest way to run unsloth/Kimi-K2-Instruct-GGUF BF16 in the cloud?
0
The above file is ~2TB in size. I went to HyperStack and the A100 80GB GPU was like ~$1.35/hr to run. So, I gave them $5 and signed up. I have zero GPU cloud experience and I didn't realize that the 2TB SSD I would be renting from them would come out to roughly $140/mo... or about the same cost as a brand new 2TB SSD....
2025-07-21T14:18:46
https://www.reddit.com/r/LocalLLaMA/comments/1m5k5di/what_is_the_cheapest_way_to_run/
OsakaSeafoodConcrn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m5k5di
false
null
t3_1m5k5di
/r/LocalLLaMA/comments/1m5k5di/what_is_the_cheapest_way_to_run/
false
false
self
0
null
MacBook Air M3 24 GB Ram best LOCAL LLM for email drafting, Reddit posts, and light coding?
1
Hi folks, sanity check. I have a MacBook Air M3 with 24 GB RAM and 512 GB SSD. I want to run a local LLM for (1) drafting emails, (2) writing posts, and (3) occasional Python/JavaScript coding help (no huge repos, just snippets or debugging). From what I’ve read, Llama 3.1 8B Instruct (4-bit Q4_K_M) is solid for tex...
2025-07-21T14:02:57
https://www.reddit.com/r/LocalLLaMA/comments/1m5jr4s/macbook_air_m3_24_gb_ram_best_local_llm_for_email/
ihllegal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m5jr4s
false
null
t3_1m5jr4s
/r/LocalLLaMA/comments/1m5jr4s/macbook_air_m3_24_gb_ram_best_local_llm_for_email/
false
false
self
1
null
[New Architecture] Hierarchical Reasoning Model
80
Inspired by the brain's hierarchical processing, HRM unlocks unprecedented reasoning capabilities on complex tasks like ARC-AGI and solving master-level Sudoku using just 1k training examples, without any pretraining or CoT. Though not a general language model yet, with significant computational depth, HRM possibly un...
2025-07-21T14:02:52
https://www.reddit.com/r/LocalLLaMA/comments/1m5jr1v/new_architecture_hierarchical_reasoning_model/
imonenext
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m5jr1v
false
null
t3_1m5jr1v
/r/LocalLLaMA/comments/1m5jr1v/new_architecture_hierarchical_reasoning_model/
false
false
https://a.thumbs.redditm…uwhEIvPibXE8.jpg
80
null
[New Architecture] Hierarchical Reasoning Model 🧠
0
Inspired by the brain's hierarchical processing, HRM unlocks unprecedented reasoning capabilities on complex tasks like ARC-AGI and solving master-level Sudoku using just 1k training examples, without any pretraining or CoT. Though not a general language model yet, with significant computational depth, HRM possibly un...
2025-07-21T13:56:39
https://www.reddit.com/r/LocalLLaMA/comments/1m5jl9e/new_architecture_hierarchical_reasoning_model/
imonenext
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m5jl9e
false
null
t3_1m5jl9e
/r/LocalLLaMA/comments/1m5jl9e/new_architecture_hierarchical_reasoning_model/
false
false
https://b.thumbs.redditm…nDGvh1ZcAueI.jpg
0
null
The reason why local models are better/necessary.
283
2025-07-21T13:30:11
https://i.redd.it/vdngpglhb8ef1.png
GPTshop_ai
i.redd.it
1970-01-01T00:00:00
0
{}
1m5iymb
false
null
t3_1m5iymb
/r/LocalLLaMA/comments/1m5iymb/the_reason_why_local_models_are_betternecessary/
false
false
default
283
{'enabled': True, 'images': [{'id': 'vdngpglhb8ef1', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/vdngpglhb8ef1.png?width=108&crop=smart&auto=webp&s=8f4c8c8ea760457e111d37a839bbe4882b86b520', 'width': 108}, {'height': 127, 'url': 'https://preview.redd.it/vdngpglhb8ef1.png?width=216&crop=smart&auto=web...
How does llama 4 perform within 8192 tokens?
5
https://semianalysis.com/2025/07/11/meta-superintelligence-leadership-compute-talent-and-data/ If a large part of Llama 4’s issues come from its attention chunking, then does llama 4 perform better within a single chunk? If we limit it to 8192 tokens (party like it’s 2023 lol) does it do okay? How does Llama 4 perfo...
2025-07-21T13:12:05
https://www.reddit.com/r/LocalLLaMA/comments/1m5ijhw/how_does_llama_4_perform_within_8192_tokens/
DepthHour1669
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m5ijhw
false
null
t3_1m5ijhw
/r/LocalLLaMA/comments/1m5ijhw/how_does_llama_4_perform_within_8192_tokens/
false
false
self
5
{'enabled': False, 'images': [{'id': 'AkDm1vMK5drNSxMCCBXiLfoVmou_ZXYzxzpyyqp3sp4', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/AkDm1vMK5drNSxMCCBXiLfoVmou_ZXYzxzpyyqp3sp4.png?width=108&crop=smart&auto=webp&s=687935d5a0c79dc71a36c53a4a1ded1099fca1b3', 'width': 108}, {'height': 144, 'url': 'h...
What if Meta really has the best AI? Hear me out.
0
I keep wondering how Meta could have screwed up Llama 4 so badly and then released it. At this point, everyone knows how to train a model, and if you have the data and compute you can really release something good - the bigger, the "smarter". They obviously know what to do based on what we saw with Llama 3.3, we even saw...
2025-07-21T12:28:40
https://www.reddit.com/r/LocalLLaMA/comments/1m5hksu/what_if_meta_really_has_the_best_ai_hear_me_out/
segmond
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m5hksu
false
null
t3_1m5hksu
/r/LocalLLaMA/comments/1m5hksu/what_if_meta_really_has_the_best_ai_hear_me_out/
false
false
self
0
null
How far are we from *convenient* local models supremacy?
0
I mean running o3-like models or better on smartphone/laptop NPUs with only a few watts of power, in an "easy way" for typical consumers and non-technical people. I bet we're 2 years away.
2025-07-21T12:04:33
https://www.reddit.com/r/LocalLLaMA/comments/1m5h2td/how_far_are_we_from_convenient_local_models/
Element_H2O
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m5h2td
false
null
t3_1m5h2td
/r/LocalLLaMA/comments/1m5h2td/how_far_are_we_from_convenient_local_models/
false
false
self
0
null
I extracted the system prompts from closed-source tools like Cursor & v0. The repo just hit 70k stars.
377
Hello there, My project to extract and collect the "secret" system prompts from a bunch of proprietary AI tools just passed 70k stars on GitHub, and I wanted to share it with this community specifically because I think it's incredibly useful. **The idea is to see the advanced "prompt architecture" that companies like...
2025-07-21T11:56:42
https://www.reddit.com/r/LocalLLaMA/comments/1m5gwzs/i_extracted_the_system_prompts_from_closedsource/
Independent-Box-898
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m5gwzs
false
null
t3_1m5gwzs
/r/LocalLLaMA/comments/1m5gwzs/i_extracted_the_system_prompts_from_closedsource/
false
false
self
377
null
Which uncensored model that supports MCP can you recommend?
0
Anything above 8B that won't restrict anything due to ethics and can connect to MCP tools?
2025-07-21T11:42:46
https://www.reddit.com/r/LocalLLaMA/comments/1m5gnfm/which_uncensored_model_that_supports_mcp_can_you/
ReasonableGarden9
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m5gnfm
false
null
t3_1m5gnfm
/r/LocalLLaMA/comments/1m5gnfm/which_uncensored_model_that_supports_mcp_can_you/
false
false
self
0
null
What are people fine-tuning their models for?
23
Hey, I'm curious, what are people fine-tuning their models for? I was working in a company where we fine-tuned models to better deal with product images, but the company couldn't keep the lights on. Most agencies, companies, freelancers, seem to use off-the-shelf models, which are getting "good enough" for the job. ...
2025-07-21T11:41:17
https://www.reddit.com/r/LocalLLaMA/comments/1m5gmfr/what_are_people_finetuning_their_models_for/
MKBSP
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m5gmfr
false
null
t3_1m5gmfr
/r/LocalLLaMA/comments/1m5gmfr/what_are_people_finetuning_their_models_for/
false
false
self
23
null
Chat webinterface for small company
5
Hi, I need a web interface for my local model, but I need multi-user support - meaning I need a login, and everyone needs their own chat history. Any ideas? (Google and ChatGPT/... were not helpful)
2025-07-21T11:39:33
https://www.reddit.com/r/LocalLLaMA/comments/1m5gl6e/chat_webinterface_for_small_company/
_ralph_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m5gl6e
false
null
t3_1m5gl6e
/r/LocalLLaMA/comments/1m5gl6e/chat_webinterface_for_small_company/
false
false
self
5
null
$72 for Instinct MI50 16GB
5
I can get my hands on about 100 MI50 16GB cards for $72 each. Is this a good choice over an RTX 3060 12GB ($265 used)? How about dual MI50s?
2025-07-21T11:34:37
https://www.reddit.com/r/LocalLLaMA/comments/1m5ghs0/72_for_instinct_mi50_16gb/
jetaudio
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m5ghs0
false
null
t3_1m5ghs0
/r/LocalLLaMA/comments/1m5ghs0/72_for_instinct_mi50_16gb/
false
false
self
5
null
I collected the system prompts from closed-source tools like Cursor & v0 so we can use their techniques on our local models. The repo just hit 70k stars.
0
Hello there, My project to collect the "secret" system prompts from a bunch of proprietary AI tools just passed 70k stars on GitHub, and I wanted to share it with this community specifically because I think it's incredibly useful for what we all do here. **The whole idea is to see the advanced "prompt architecture" t...
2025-07-21T11:34:21
https://www.reddit.com/r/LocalLLaMA/comments/1m5ghmb/i_collected_the_system_prompts_from_closedsource/
Independent-Box-898
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m5ghmb
false
null
t3_1m5ghmb
/r/LocalLLaMA/comments/1m5ghmb/i_collected_the_system_prompts_from_closedsource/
false
false
self
0
null
What programming language do AI Models have the best data on
0
Tl;Dr: Microsoft API is confusing itself and the models, what should I use instead? And are there tool calls (agents?) that help models produce valid xml? Hello, I'm currently trying to get into learning more about how I can improve my workflow with AI. So far I'm playing around with Qwen3 30b MoE and kimi-dev 72b mo...
2025-07-21T11:27:26
https://www.reddit.com/r/LocalLLaMA/comments/1m5gcvl/what_programming_language_do_ai_models_have_the/
md_youdneverguess
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m5gcvl
false
null
t3_1m5gcvl
/r/LocalLLaMA/comments/1m5gcvl/what_programming_language_do_ai_models_have_the/
false
false
self
0
null
Is it possible to have a specialized local llm perform at the level of cloud based models?
0
I want to eventually build my own PC and host locally, mostly for the sake of reliability and not being reliant on the big guys in the biz. My main issue is that models such as Sonnet and Opus 4, even Sonnet 3.5, perform so much better when it comes to coding than what I've seen any locally run models being capable ...
2025-07-21T11:02:17
https://www.reddit.com/r/LocalLLaMA/comments/1m5fwpz/is_it_possible_to_have_a_specialized_local_llm/
Relative_Mouse7680
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m5fwpz
false
null
t3_1m5fwpz
/r/LocalLLaMA/comments/1m5fwpz/is_it_possible_to_have_a_specialized_local_llm/
false
false
self
0
null
Rockchip unveils RK182X LLM co-processor: Runs Qwen 2.5 7B at 50TPS decode, 800TPS prompt processing
148
I believe this is the first NPU specifically designed for LLM inference. They specifically mention 2.5 or 5GB of "ultra high bandwidth memory", but not the actual speed. 50TPS for a 7B model at Q4 implies around 200GB/s. The high prompt processing speed is the best part IMO, it's going to let an on device assistant use...
2025-07-21T10:46:03
https://www.cnx-software.com/2025/07/18/rockchip-unveils-rk3668-10-core-arm-cortex-a730-cortex-a530-soc-with-16-tops-npu-rk182x-llm-vlm-co-processor/#rockchip-rk182x-llm-vlm-accelerator
PmMeForPCBuilds
cnx-software.com
1970-01-01T00:00:00
0
{}
1m5fmlp
false
null
t3_1m5fmlp
/r/LocalLLaMA/comments/1m5fmlp/rockchip_unveils_rk182x_llm_coprocessor_runs_qwen/
false
false
https://external-preview…93c2419a717f902a
148
{'enabled': False, 'images': [{'id': 'p-XdyFJrlRnofvAjkk2RhNaWbyuM0y_S5JEPvTprq-8', 'resolutions': [{'height': 79, 'url': 'https://external-preview.redd.it/p-XdyFJrlRnofvAjkk2RhNaWbyuM0y_S5JEPvTprq-8.jpeg?width=108&crop=smart&auto=webp&s=19d2a2efbd9333bbc8e7495d96c37dc9a67f94f7', 'width': 108}, {'height': 158, 'url': '...
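The back-of-the-envelope number in the Rockchip post above (50 TPS decode at Q4 implying ~200 GB/s) can be sanity-checked with a short sketch. During decode, each token requires reading roughly the whole quantized weight set; the 4-bits-per-parameter figure below is an assumption, not a published spec:

```python
# Implied memory bandwidth = tokens/sec * bytes read per token (~ quantized weight size).
weights_bytes = 7e9 * 0.5        # assumed: 7B params at ~4 bits/param -> ~3.5 GB
tps = 50                         # decode speed claimed for the RK182X

bandwidth_gbps = tps * weights_bytes / 1e9
print(f"{bandwidth_gbps:.0f} GB/s")  # prints 175 GB/s, in line with the ~200 GB/s cited
```

The estimate ignores KV-cache reads and any overhead, so the real requirement is somewhat higher, which is consistent with the post's "around 200GB/s".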
My (practical) dual 3090 setup for local inference
11
I completed my local LLM rig in May, just after Qwen3's release (thanks to r/LocalLLaMA 's folks for the invaluable guidance!). Now that I've settled into the setup, I'm excited to share my build and how it's performing with local LLMs. This is a consumer-grade rig optimized for running Qwen3-30B-A3B and similar model...
2025-07-21T10:43:20
https://www.reddit.com/r/LocalLLaMA/comments/1m5fkts/my_practical_dual_3090_setup_for_local_inference/
ColdImplement1319
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m5fkts
false
null
t3_1m5fkts
/r/LocalLLaMA/comments/1m5fkts/my_practical_dual_3090_setup_for_local_inference/
false
false
https://a.thumbs.redditm…pKoUwojUDik4.jpg
11
null
My (practical) 3090 x2 setup for local inference
1
[removed]
2025-07-21T10:39:29
https://www.reddit.com/gallery/1m5fih8
nearlytheru
reddit.com
1970-01-01T00:00:00
0
{}
1m5fih8
false
null
t3_1m5fih8
/r/LocalLLaMA/comments/1m5fih8/my_practical_3090_x2_setup_for_local_inference/
false
false
https://b.thumbs.redditm…8DUALI05DPPk.jpg
1
null
HOWTO summarize on 16GB VRAM with 64k cache?
0
Hey there, I have an RX 7800 XT 16GB and a summary prompt, and I'm looking for a model to run it. What are my issues? There are basically 2 main issues I have faced: 1. Long context (32/64k tokens). 2. Multi-language support. I have noticed that all models that give pretty decent quality are about 20b+ in size. A quantized version can fit i...
2025-07-21T10:35:20
https://www.reddit.com/r/LocalLLaMA/comments/1m5fg2y/howto_summarize_on_16gb_vram_with_64k_cache/
COBECT
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m5fg2y
false
null
t3_1m5fg2y
/r/LocalLLaMA/comments/1m5fg2y/howto_summarize_on_16gb_vram_with_64k_cache/
false
false
self
0
null
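For the KV-cache half of the problem in the post above, the memory cost at a given context length can be estimated with a short sketch. The layer/head numbers are illustrative assumptions for a ~20B-class dense model with GQA, not any specific model's specs:

```python
# Rough KV-cache sizing: one K and one V tensor per layer, per KV head,
# per position, at fp16 (2 bytes/element).
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, ctx_len, bytes_per_elem=2):
    # factor of 2 accounts for keys AND values
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem

# Hypothetical ~20B-class config with GQA (8 KV heads) at 64k context:
gb = kv_cache_bytes(n_layers=48, n_kv_heads=8, head_dim=128, ctx_len=65536) / 1e9
print(f"{gb:.1f} GB")  # prints 12.9 GB - nearly the whole 16GB card before weights
```

This is why long-context summarization on 16GB VRAM usually means quantizing the KV cache (e.g. 8-bit halves the figure) or picking a model with fewer KV heads.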
A practical dual 3090 setup for local inference
1
I completed my local LLM rig in May, just after Qwen3's release (thanks to r/LocalLLaMA 's folks for the invaluable guidance!). Now that I've settled into the setup, I'm excited to share my build and how it's performing with local LLMs. This is a consumer-grade rig optimized for running Qwen3-30B-A3B and similar model...
2025-07-21T10:34:08
https://www.reddit.com/r/LocalLLaMA/comments/1m5ffcp/a_practical_dual_3090_setup_for_local_inference/
nearlytheru
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m5ffcp
false
null
t3_1m5ffcp
/r/LocalLLaMA/comments/1m5ffcp/a_practical_dual_3090_setup_for_local_inference/
false
false
self
1
null
Ryzen AI HX 370 or Mx Pro for travellers
3
Hello, I've been watching this thread for a while now and I'm looking for a laptop at around the 1500 EUR mark, and I cannot decide for my use case. I'm trying to build something basic, yet challenging. The plan is to make a local law assistant using RAG, and learn more about the use cases of local LLMs. My problem i...
2025-07-21T10:33:37
https://www.reddit.com/r/LocalLLaMA/comments/1m5ff1k/ryzen_ai_hx_370_or_mx_pro_for_travellers/
0ner0z
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m5ff1k
false
null
t3_1m5ff1k
/r/LocalLLaMA/comments/1m5ff1k/ryzen_ai_hx_370_or_mx_pro_for_travellers/
false
false
self
3
null
NVIDIA Brings Reasoning Models to Consumers Ranging from 1.5B to 32B Parameters
116
2025-07-21T10:29:24
https://www.techpowerup.com/339089/nvidia-brings-reasoning-models-to-consumers-ranging-from-1-5b-to-32b-parameters
OwnWitness2836
techpowerup.com
1970-01-01T00:00:00
0
{}
1m5fcdo
false
null
t3_1m5fcdo
/r/LocalLLaMA/comments/1m5fcdo/nvidia_brings_reasoning_models_to_consumers/
false
false
default
116
{'enabled': False, 'images': [{'id': 'bStjeji8oH-vX7nEL2-gqIEn5srknBBEzSyJDD_6lLE', 'resolutions': [{'height': 85, 'url': 'https://external-preview.redd.it/bStjeji8oH-vX7nEL2-gqIEn5srknBBEzSyJDD_6lLE.jpeg?width=108&crop=smart&auto=webp&s=ac4eb006eed458f45d94d2092f2b1d96e9111985', 'width': 108}, {'height': 171, 'url': '...
Dropped a full LLM customization shell (free/paid hybrid)
1
[removed]
2025-07-21T10:16:37
https://www.reddit.com/r/LocalLLaMA/comments/1m5f4gs/dropped_a_full_llm_customization_shell_freepaid/
luxestudiosai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m5f4gs
false
null
t3_1m5f4gs
/r/LocalLLaMA/comments/1m5f4gs/dropped_a_full_llm_customization_shell_freepaid/
false
false
self
1
null
Dropped a full LLM customization shell (free/paid hybrid)
1
[removed]
2025-07-21T10:08:07
https://www.reddit.com/r/LocalLLaMA/comments/1m5ezb2/dropped_a_full_llm_customization_shell_freepaid/
luxestudiosai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m5ezb2
false
null
t3_1m5ezb2
/r/LocalLLaMA/comments/1m5ezb2/dropped_a_full_llm_customization_shell_freepaid/
false
false
nsfw
1
{'enabled': False, 'images': [{'id': 'uY2UOo_imFbnVi5sGvN6k6UK7HenUIGLMBD6pg_7_SY', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/uY2UOo_imFbnVi5sGvN6k6UK7HenUIGLMBD6pg_7_SY.png?width=108&crop=smart&auto=webp&s=91087acfea1d4891a7b187e71daf99128d54e55f', 'width': 108}, {'height': 216, 'url': '...
Offline Coding Assistant
0
Hi everyone 👋 I am trying to build an offline coding assistant, and for that I have to do a POC. Does anyone have any ideas on how to implement this in a limited environment?
2025-07-21T10:03:08
https://www.reddit.com/r/LocalLLaMA/comments/1m5ew98/offline_coding_assistant/
eternalHarsh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m5ew98
false
null
t3_1m5ew98
/r/LocalLLaMA/comments/1m5ew98/offline_coding_assistant/
false
false
self
0
null
Can any tool dub an entire Movie into another language?
1
Curious :-)
2025-07-21T10:00:47
https://www.reddit.com/r/LocalLLaMA/comments/1m5eulr/can_any_tool_dub_an_entire_movie_into_another/
Trysem
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m5eulr
false
null
t3_1m5eulr
/r/LocalLLaMA/comments/1m5eulr/can_any_tool_dub_an_entire_movie_into_another/
false
false
self
1
null
This AI doesn’t chat. It obeys.
1
[removed]
2025-07-21T09:53:45
https://www.reddit.com/r/LocalLLaMA/comments/1m5eqj2/this_ai_doesnt_chat_it_obeys/
luxestudiosai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m5eqj2
false
null
t3_1m5eqj2
/r/LocalLLaMA/comments/1m5eqj2/this_ai_doesnt_chat_it_obeys/
true
false
spoiler
1
null
which model I can run my Macbook ?
1
[removed]
2025-07-21T09:16:27
https://www.reddit.com/r/LocalLLaMA/comments/1m5e59x/which_model_i_can_run_my_macbook/
im_mike_b
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m5e59x
false
null
t3_1m5e59x
/r/LocalLLaMA/comments/1m5e59x/which_model_i_can_run_my_macbook/
false
false
self
1
null
How System Prompts Are Changing the Game for AI Tools
0
I want to share a project I’ve been working on that’s had some massive traction: a **repository** of **system prompts** and **AI models** that’s already garnered over **70,000 stars** on GitHub. If you haven’t seen it yet, this collection might change the way you approach working with AI tools, whether it’s for local m...
2025-07-21T08:59:12
https://www.reddit.com/r/LocalLLaMA/comments/1m5dve9/how_system_prompts_are_changing_the_game_for_ai/
Independent-Box-898
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m5dve9
false
null
t3_1m5dve9
/r/LocalLLaMA/comments/1m5dve9/how_system_prompts_are_changing_the_game_for_ai/
false
false
self
0
{'enabled': False, 'images': [{'id': '3J3iRUCJuzk-_Ka1tn-hg8r5ofkLMUYaOdxx_goyGNc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3J3iRUCJuzk-_Ka1tn-hg8r5ofkLMUYaOdxx_goyGNc.png?width=108&crop=smart&auto=webp&s=713542cecbbdf7b9088a103dbe84f99ea6238c0b', 'width': 108}, {'height': 108, 'url': 'h...
How does LLMs get more creative?
0
So, Kimi K2 is out, and it's currently topping benchmarks in creative writing. I was wondering, how exactly do LLMs become more creative? From what I know, Kimi K2 uses DeepSeek's architecture but with more experts. So is improving creative writing mostly about scaling the model (more parameters, more experts) and not r...
2025-07-21T08:49:29
https://www.reddit.com/r/LocalLLaMA/comments/1m5dq1e/how_does_llms_get_more_creative/
ba2sYd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m5dq1e
false
null
t3_1m5dq1e
/r/LocalLLaMA/comments/1m5dq1e/how_does_llms_get_more_creative/
false
false
self
0
null