Columns:

title: string, length 1 to 300
score: int64, range 0 to 8.54k
selftext: string, length 0 to 41.5k
created: timestamp[ns], 2023-04-01 04:30:41 to 2026-03-04 02:14:14
url: string, length 0 to 878
author: string, length 3 to 20
domain: string, length 0 to 82
edited: timestamp[ns], 1970-01-01 00:00:00 to 2026-02-19 14:51:53
gilded: int64, range 0 to 2
gildings: string, 7 classes
id: string, length 7
locked: bool, 2 classes
media: string, length 646 to 1.8k
name: string, length 10
permalink: string, length 33 to 82
spoiler: bool, 2 classes
stickied: bool, 2 classes
thumbnail: string, length 4 to 213
ups: int64, range 0 to 8.54k
preview: string, length 301 to 5.01k
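As a quick sanity check of the schema, here is a minimal pandas sketch. The use of pandas and the subset of columns are assumptions for illustration; the two records are transcribed from rows listed below.

```python
import pandas as pd

# Two records transcribed from the dataset rows, using a subset of the
# 20 schema columns (title, score, ups, selftext, author, id, domain).
posts = pd.DataFrame([
    {"title": "Model Compression Results: Kimi-K2 (1.07T→32.5B parameters)",
     "score": 1, "ups": 1, "selftext": "[removed]",
     "author": "Sad-Slide9083", "id": "1ly9g6r", "domain": "self.LocalLLaMA"},
    {"title": "Interesting info about Kimi K2",
     "score": 470, "ups": 470,
     "selftext": "Kimi K2 is basically DeepSeek V3 but with fewer heads and more experts.",
     "author": "No_Conversation9561", "id": "1ly42e5", "domain": "i.redd.it"},
])

# Keep only posts whose body was not deleted by moderators.
visible = posts[posts["selftext"] != "[removed]"]
print(visible["id"].tolist())  # ['1ly42e5']
```

Note that a selftext of "[removed]" is a Reddit moderation placeholder, not an empty body, which is why it is filtered by value rather than by length.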
title: Model Compression Results: Kimi-K2 (1.07T→32.5B parameters)
score: 1 | ups: 1 | author: Sad-Slide9083 | created: 2025-07-12T19:53:17 | domain: self.LocalLLaMA
id: 1ly9g6r | name: t3_1ly9g6r | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false | thumbnail: self
selftext: [removed]
url: https://www.reddit.com/r/LocalLLaMA/comments/1ly9g6r/model_compression_results_kimik2_107t325b/
permalink: /r/LocalLLaMA/comments/1ly9g6r/model_compression_results_kimik2_107t325b/
media: null
preview: null

title: Kimi-K2-Mini: Successfully compressed Kimi-K2 from 1.07T to 32.5B parameters (97% reduction) - runs on single H100
score: 1 | ups: 1 | author: Sad-Slide9083 | created: 2025-07-12T19:51:04 | domain: self.LocalLLaMA
id: 1ly9ef0 | name: t3_1ly9ef0 | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false | thumbnail: self
selftext: [removed]
url: https://www.reddit.com/r/LocalLLaMA/comments/1ly9ef0/kimik2mini_successfully_compressed_kimik2_from/
permalink: /r/LocalLLaMA/comments/1ly9ef0/kimik2mini_successfully_compressed_kimik2_from/
media: null
preview: null
title: Local Llama with Home Assistant Integration and Multilingual-Fuzzy naming
score: 11 | ups: 11 | author: NicolaZanarini533 | created: 2025-07-12T19:43:22 | domain: self.LocalLLaMA
id: 1ly983h | name: t3_1ly983h | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false | thumbnail: self
selftext: Hello everyone! First time poster - thought I'd share a project I've been working on - it's local LLama integration with HA and custom functions outside of HA; my main goal was to have a system that could understand descriptions of items instead of hard-names (like "turn on the light above the desk" instead of "turn on...
url: https://www.reddit.com/r/LocalLLaMA/comments/1ly983h/local_llama_with_home_assistant_integration_and/
permalink: /r/LocalLLaMA/comments/1ly983h/local_llama_with_home_assistant_integration_and/
media: null
preview: {'enabled': False, 'images': [{'id': 'rfWFqkf9v8783kPkMuv2GffIRIrnMh42wtNX-UcWXDU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rfWFqkf9v8783kPkMuv2GffIRIrnMh42wtNX-UcWXDU.png?width=108&crop=smart&auto=webp&s=d129aaa0a1c8de108becec1331e907baf548e00e', 'width': 108}, {'height': 108, 'url': 'h...

title: Local LLama Integrated with Home Assistant for Mutilingual-Fuzzy naming.
score: 2 | ups: 2 | author: ConsequenceDapper454 | created: 2025-07-12T19:30:27 | domain: self.LocalLLaMA
id: 1ly8xmz | name: t3_1ly8xmz | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false | thumbnail: self
selftext: Hello everyone! First time poster - thought I'd share a project I've been working on - it's local LLama integration with HA and custom functions outside of HA; my main goal was to have a system that could understand descriptions of items instead of hard-names (like "turn on the light above the desk" instead of "turn on...
url: https://www.reddit.com/r/LocalLLaMA/comments/1ly8xmz/local_llama_integrated_with_home_assistant_for/
permalink: /r/LocalLLaMA/comments/1ly8xmz/local_llama_integrated_with_home_assistant_for/
media: null
preview: {'enabled': False, 'images': [{'id': 'rfWFqkf9v8783kPkMuv2GffIRIrnMh42wtNX-UcWXDU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rfWFqkf9v8783kPkMuv2GffIRIrnMh42wtNX-UcWXDU.png?width=108&crop=smart&auto=webp&s=d129aaa0a1c8de108becec1331e907baf548e00e', 'width': 108}, {'height': 108, 'url': 'h...
title: This whole thing is giving me WizardLM2 vibes.
score: 232 | ups: 232 | author: Porespellar | created: 2025-07-12T19:09:44 | domain: i.redd.it
id: 1ly8fyj | name: t3_1ly8fyj | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false | thumbnail: https://a.thumbs.redditm…mwSpvxALQEM4.jpg
url: https://i.redd.it/kn56m7cgshcf1.jpeg
permalink: /r/LocalLLaMA/comments/1ly8fyj/this_whole_thing_is_giving_me_wizardlm2_vibes/
media: null
preview: {'enabled': True, 'images': [{'id': 'GepSeYli2R4WjSQ1YHbDEYhIkFkRkNdQpj4AK3DFoS4', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/kn56m7cgshcf1.jpeg?width=108&crop=smart&auto=webp&s=6d47fc551fde0f69d3b2c5b28f04df0cfdf9c410', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/kn56m7cgshcf1.j...

title: mlx-community/Kimi-Dev-72B-4bit-DWQ
score: 49 | ups: 49 | author: Recoil42 | created: 2025-07-12T19:01:40 | domain: huggingface.co
id: 1ly894z | name: t3_1ly894z | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false | thumbnail: default
url: https://huggingface.co/mlx-community/Kimi-Dev-72B-4bit-DWQ
permalink: /r/LocalLLaMA/comments/1ly894z/mlxcommunitykimidev72b4bitdwq/
media: null
preview: {'enabled': False, 'images': [{'id': 'mpAxC0SvYZFldRQAxpLzzBwXNoDqpak4MT8PL4S0bMw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/mpAxC0SvYZFldRQAxpLzzBwXNoDqpak4MT8PL4S0bMw.png?width=108&crop=smart&auto=webp&s=3e7e74ad5490178118f9e4b34721c8061ae3ecfe', 'width': 108}, {'height': 116, 'url': 'h...
title: Introducing GGUF Tool Suite - Create and Optimise Quantisation Mix for DeepSeek-R1-0528 for Your Own Specs
score: 18 | ups: 18 | author: Thireus | created: 2025-07-12T18:56:52 | domain: self.LocalLLaMA
id: 1ly84xd | name: t3_1ly84xd | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false | thumbnail: self
selftext: Hi everyone, I’ve developed a tool that calculates the *optimal quantisation mix* tailored to your VRAM and RAM specifications specifically for the DeepSeek-R1-0528 model. If you’d like to try it out, you can find it here: 🔗 [GGUF Tool Suite on GitHub](https://github.com/Thireus/GGUF-Tool-Suite/) You can also crea...
url: https://www.reddit.com/r/LocalLLaMA/comments/1ly84xd/introducing_gguf_tool_suite_create_and_optimise/
permalink: /r/LocalLLaMA/comments/1ly84xd/introducing_gguf_tool_suite_create_and_optimise/
media: null
preview: {'enabled': False, 'images': [{'id': 'mj_iC-YyHvPh4mHsOjFii7sajPyqVswMWvlO_8Xqgks', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mj_iC-YyHvPh4mHsOjFii7sajPyqVswMWvlO_8Xqgks.png?width=108&crop=smart&auto=webp&s=791d882324dd951016663a2ce43b55901557ce50', 'width': 108}, {'height': 108, 'url': 'h...

title: [Rust] qwen3-rs: Educational Qwen3 Architecture Inference (No Python, Minimal Deps)
score: 31 | ups: 31 | author: eis_kalt | created: 2025-07-12T18:41:55 | domain: self.LocalLLaMA
id: 1ly7sb0 | name: t3_1ly7sb0 | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false | thumbnail: self
selftext: Hey all! I've just released my [qwen3-rs](vscode-file://vscode-app/snap/code/198/usr/share/code/resources/app/out/vs/code/electron-sandbox/workbench/workbench.html), a Rust project for running and exporting Qwen3 models (Qwen3-0.6B, 4B, 8B, DeepSeek-R1-0528-Qwen3-8B, etc) with minimal dependencies and no Python requi...
url: https://www.reddit.com/r/LocalLLaMA/comments/1ly7sb0/rust_qwen3rs_educational_qwen3_architecture/
permalink: /r/LocalLLaMA/comments/1ly7sb0/rust_qwen3rs_educational_qwen3_architecture/
media: null
preview: {'enabled': False, 'images': [{'id': 'LxoqZs4q3Osj78IVaTXSrgUKqNHcOujrOF1Tg6_GYA4', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/LxoqZs4q3Osj78IVaTXSrgUKqNHcOujrOF1Tg6_GYA4.jpeg?width=108&crop=smart&auto=webp&s=b6a4a1ab699ce9984d57b0696bdd1f873de9e614', 'width': 108}, {'height': 216, 'url': ...
title: Kyutai Text-to-Speech is considering opening up custom voice model training, but they are asking for community support!
score: 92 | ups: 92 | author: pilkyton | created: 2025-07-12T17:41:49 | domain: self.LocalLLaMA
id: 1ly6cg6 | name: t3_1ly6cg6 | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false | thumbnail: self
selftext: Kyutai is one of the best text to speech models, with very low latency and great accuracy at following the text prompt. And unlike most other models, it's able to generate very long audio files. It's [one of the chart leaders in benchmarks](https://www.reddit.com/r/LocalLLaMA/comments/1lqycp0/kyutai_tts_is_here_realti...
url: https://www.reddit.com/r/LocalLLaMA/comments/1ly6cg6/kyutai_texttospeech_is_considering_opening_up/
permalink: /r/LocalLLaMA/comments/1ly6cg6/kyutai_texttospeech_is_considering_opening_up/
media: null
preview: {'enabled': False, 'images': [{'id': 'jYQMX0f3Fb0bWnMi64F2HOZ_4nfDILMheNU-lebBluE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/jYQMX0f3Fb0bWnMi64F2HOZ_4nfDILMheNU-lebBluE.png?width=108&crop=smart&auto=webp&s=367ade6726e17c3965008974bc2f09c6e6bc8ba5', 'width': 108}, {'height': 108, 'url': 'h...

title: Demo Video of AutoBE, Backend No-Coding Agent Achieving 100% Compilation Success (Open Source)
score: 0 | ups: 0 | author: jhnam88 | created: 2025-07-12T17:23:05 | domain: /r/LocalLLaMA/comments/1ly5w3r/demo_video_of_autobe_backend_nocoding_agent/
id: 1ly5w3r | name: t3_1ly5w3r | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false | thumbnail: https://external-preview…be7242b86868ce4c
selftext: AutoBE, No-code agent for Backend Application, writing 100% compilable code. - **GitHub Repository**: https://github.com/wrtnlabs/autobe - **Guide Documents**: https://wrtnlabs.io/autobe/docs - **Demo Result (Generated backend applications)** - Bulletin Board System: https://github.dev/wrtnlabs/autobe-example-bbs ...
url: https://v.redd.it/935jak7a9hcf1
permalink: /r/LocalLLaMA/comments/1ly5w3r/demo_video_of_autobe_backend_nocoding_agent/
media: {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/935jak7a9hcf1/DASHPlaylist.mpd?a=1755062593%2CYzc3YzllNDRiMjA1N2EzN2M2ZWE0OTUxNDA5NjBmMzVhYmQzNDFkNjQwZDU0ZDMzODdlNzFkYTZhMzVhM2UxMg%3D%3D&v=1&f=sd', 'duration': 877, 'fallback_url': 'https://v.redd.it/935jak7a9hcf1/DASH_720.mp4?source=fallback', 'h...
preview: {'enabled': False, 'images': [{'id': 'b2JkcWttNGE5aGNmMZU4z1ns0YguhnSerBikWRx-xW0PkW7LeDZSsrbnGGvR', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/b2JkcWttNGE5aGNmMZU4z1ns0YguhnSerBikWRx-xW0PkW7LeDZSsrbnGGvR.png?width=108&crop=smart&format=pjpg&auto=webp&s=257a4dcb803c5d39064d3c229ad7d53b5ec86...
title: What's the most natural sounding TTS model for local right now?
score: 51 | ups: 51 | author: Siigari | created: 2025-07-12T17:04:18 | domain: self.LocalLLaMA
id: 1ly5g2t | name: t3_1ly5g2t | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false | thumbnail: self
selftext: Hey guys, I'm working on a project for multiple speakers, and was wondering what is the most natural sounding TTS model right now? I saw XTTS and ChatTTS, but those have been around for a while. Is there anything new that's local that sounds pretty good? Thanks!
url: https://www.reddit.com/r/LocalLLaMA/comments/1ly5g2t/whats_the_most_natural_sounding_tts_model_for/
permalink: /r/LocalLLaMA/comments/1ly5g2t/whats_the_most_natural_sounding_tts_model_for/
media: null
preview: null

title: Open-Source LLM-Based Solution for Online Content Filtering - Is There One?
score: 2 | ups: 2 | author: Southern_Sun_2106 | created: 2025-07-12T16:57:09 | domain: self.LocalLLaMA
id: 1ly59tz | name: t3_1ly59tz | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false | thumbnail: self
selftext: Hello. I am wondering if there's a solution that checks a url using a local llm before deciding whether to allow or disallow a connection? Use case: \- user types in a url \- url is scraped and sent to the llm \- llm decides to allow/disallow the visit as per instructions I am wondering if there's an open-source p...
url: https://www.reddit.com/r/LocalLLaMA/comments/1ly59tz/opensource_llmbased_solution_for_online_content/
permalink: /r/LocalLLaMA/comments/1ly59tz/opensource_llmbased_solution_for_online_content/
media: null
preview: null
title: New LLM DOS rig
score: 16 | ups: 16 | author: Alienanthony | created: 2025-07-12T16:46:42 | domain: reddit.com
id: 1ly513g | name: t3_1ly513g | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false | thumbnail: https://b.thumbs.redditm…RBvGz7cgcnJE.jpg
selftext: Check it. 500mb ram, 500hetz cpu. Dial up. 200 watts. And it's internet ready. Sound blaster too ;] Gonna run me that new "llama" model I've been hearing so much about.
url: https://www.reddit.com/gallery/1ly513g
permalink: /r/LocalLLaMA/comments/1ly513g/new_llm_dos_rig/
media: null
preview: null

title: Okay kimi-k2 is an INSANE model WTF those one-shot animations
score: 242 | ups: 242 | author: sirjoaco | created: 2025-07-12T16:44:50 | domain: v.redd.it
id: 1ly4zh8 | name: t3_1ly4zh8 | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false | thumbnail: https://external-preview…768ea8f376a8ffc8
url: https://v.redd.it/74d8efoh2hcf1
permalink: /r/LocalLLaMA/comments/1ly4zh8/okay_kimik2_is_an_insane_model_wtf_those_oneshot/
media: {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/74d8efoh2hcf1/DASHPlaylist.mpd?a=1754930706%2CZGI2MWMxMTQ1ZDJiMjYzYzg2MGViNDAzMDg1NGQzMDdiNGE5ZWE3N2E4MTJmZWY5MTc3MmIxZjI5Mjk3NWU3Zg%3D%3D&v=1&f=sd', 'duration': 15, 'fallback_url': 'https://v.redd.it/74d8efoh2hcf1/DASH_1080.mp4?source=fallback', 'h...
preview: {'enabled': False, 'images': [{'id': 'amJmNzJnb2gyaGNmMR47PnkZil-qwhK39njev3B-56bQsfXI6t0qLjuoAfo4', 'resolutions': [{'height': 64, 'url': 'https://external-preview.redd.it/amJmNzJnb2gyaGNmMR47PnkZil-qwhK39njev3B-56bQsfXI6t0qLjuoAfo4.png?width=108&crop=smart&format=pjpg&auto=webp&s=1b3a502b5cf8dd662aa70b308038b5980910d...
title: GPU UPGRADE!!!!NEED Suggestion!!!!.Upgrading current workstation either with 4x RTX 6000 ada or 4x L40s. Can i use NVlink bridge the pair them up.??
score: 0 | ups: 0 | author: logii33 | created: 2025-07-12T16:42:50 | domain: self.LocalLLaMA
id: 1ly4xvb | name: t3_1ly4xvb | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false | thumbnail: self
selftext: Currently i have workstation. Which is powered by AMD EPYC 7452 32 core cpu with 256GB RAM . The worksration has 5 x 4Gen pcie slots and has A100 40Gb currently running with it. So i planned to upgrade it .I wanna load all the other 4 slots with either RTX 6000 ADA for with L40S . which can i go for????, i know ther...
url: https://www.reddit.com/r/LocalLLaMA/comments/1ly4xvb/gpu_upgradeneed_suggestionupgrading_current/
permalink: /r/LocalLLaMA/comments/1ly4xvb/gpu_upgradeneed_suggestionupgrading_current/
media: null
preview: null

title: RL local llm for coding
score: 3 | ups: 3 | author: rts324 | created: 2025-07-12T16:38:00 | domain: self.LocalLLaMA
id: 1ly4tus | name: t3_1ly4tus | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false | thumbnail: self
selftext: For folks coding daily, what models are you getting the best results with? I know there are a lot of variables, and I’d like to avoid getting bogged down in the details like performance, prompt size, parameter counts, or quantization. What models is turning in the best results for coding for you personally. For refere...
url: https://www.reddit.com/r/LocalLLaMA/comments/1ly4tus/rl_local_llm_for_coding/
permalink: /r/LocalLLaMA/comments/1ly4tus/rl_local_llm_for_coding/
media: null
preview: null
title: Grok will now seaches for Elon view, before answering you , if you ask about politics
score: 6 | ups: 6 | author: NeedleworkerDull7886 | created: 2025-07-12T16:32:39 | domain: self.LocalLLaMA
id: 1ly4pci | name: t3_1ly4pci | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false | thumbnail: https://a.thumbs.redditm…nMYJHJYCB920.jpg
selftext: https://preview.redd.it/… getting worse.
url: https://www.reddit.com/r/LocalLLaMA/comments/1ly4pci/grok_will_now_seaches_for_elon_view_before/
permalink: /r/LocalLLaMA/comments/1ly4pci/grok_will_now_seaches_for_elon_view_before/
media: null
preview: null

title: Best performing local model for json generation on raspberry pi
score: 1 | ups: 1 | author: YK-95 | created: 2025-07-12T16:29:16 | domain: self.LocalLLaMA
id: 1ly4mea | name: t3_1ly4mea | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false | thumbnail: self
selftext: [removed]
url: https://www.reddit.com/r/LocalLLaMA/comments/1ly4mea/best_performing_local_model_for_json_generation/
permalink: /r/LocalLLaMA/comments/1ly4mea/best_performing_local_model_for_json_generation/
media: null
preview: null
title: Music Analysis - another attempt
score: 10 | ups: 10 | author: Not_your_guy_buddy42 | created: 2025-07-12T16:11:11 | domain: self.LocalLLaMA
id: 1ly476r | name: t3_1ly476r | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false | thumbnail: https://external-preview…3aa4154cddb6e515
selftext: In a quest to make a tamagotchi which requires piano practice to feed (and maybe organise live piano recordings) I am trying out various research projects. So far I have implemented the excellent [piano transcription](https://github.com/bytedance/piano_transcription) repo and I am getting really good MIDI back. [sc...
url: https://www.reddit.com/r/LocalLLaMA/comments/1ly476r/music_analysis_another_attempt/
permalink: /r/LocalLLaMA/comments/1ly476r/music_analysis_another_attempt/
media: null
preview: {'enabled': False, 'images': [{'id': 'z_1ffi2EBnCAbMFT2MnihKQQZrdw8qlio3KudIpri8g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/z_1ffi2EBnCAbMFT2MnihKQQZrdw8qlio3KudIpri8g.png?width=108&crop=smart&auto=webp&s=fcfd3fcf8d3e8cbb8d41b131df87efcfc87a2d81', 'width': 108}, {'height': 108, 'url': 'h...

title: Interesting info about Kimi K2
score: 470 | ups: 470 | author: No_Conversation9561 | created: 2025-07-12T16:05:34 | domain: i.redd.it
id: 1ly42e5 | name: t3_1ly42e5 | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false | thumbnail: https://b.thumbs.redditm…mao4xX-wZHPw.jpg
selftext: Kimi K2 is basically DeepSeek V3 but with fewer heads and more experts. Source: @rasbt on X
url: https://i.redd.it/klm2b78lvgcf1.jpeg
permalink: /r/LocalLLaMA/comments/1ly42e5/interesting_info_about_kimi_k2/
media: null
preview: {'enabled': True, 'images': [{'id': 'Mc3JulkX7jC-xZrG5vXyMeTKbsu1euUGH0q8C22y1zs', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/klm2b78lvgcf1.jpeg?width=108&crop=smart&auto=webp&s=de7eb96ece8068540bfea48d2417469c7f222dea', 'width': 108}, {'height': 112, 'url': 'https://preview.redd.it/klm2b78lvgcf1.jp...
title: Cactus - Edge AI Inference Framework
score: 1 | ups: 1 | author: dayanruben | created: 2025-07-12T15:47:17 | domain: cactuscompute.com
id: 1ly3mrl | name: t3_1ly3mrl | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false | thumbnail: default
url: https://cactuscompute.com/
permalink: /r/LocalLLaMA/comments/1ly3mrl/cactus_edge_ai_inference_framework/
media: null
preview: null

title: I built an Al tool that replaces 5 Al tools, saved me hours.
score: 1 | ups: 1 | author: anonymously_geek | created: 2025-07-12T15:43:43 | domain: self.LocalLLaMA
id: 1ly3jp8 | name: t3_1ly3jp8 | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false | thumbnail: self
selftext: [removed]
url: https://www.reddit.com/r/LocalLLaMA/comments/1ly3jp8/i_built_an_al_tool_that_replaces_5_al_tools_saved/
permalink: /r/LocalLLaMA/comments/1ly3jp8/i_built_an_al_tool_that_replaces_5_al_tools_saved/
media: null
preview: null
title: Cactus - Edge AI Inference Framework
score: 1 | ups: 1 | author: dayanruben | created: 2025-07-12T15:41:05 | domain: cactuscompute.com
id: 1ly3hfp | name: t3_1ly3hfp | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false | thumbnail: default
selftext: Cactus Compute is a startup enabling local AI on phones and wearables – privacy-focused, no cloud needed. Supports React/JS/TS, Flutter/Dart, C/C++ with unified APIs for models, tools, and more. Open-source GitHub: https://github.com/cactuscompute/cactus
url: https://cactuscompute.com/
permalink: /r/LocalLLaMA/comments/1ly3hfp/cactus_edge_ai_inference_framework/
media: null
preview: null

title: What do you think of Huawei's Pangu model counterfeiting behaviour?
score: 3 | ups: 3 | author: Disastrous-Prize-946 | created: 2025-07-12T15:38:14 | domain: self.LocalLLaMA
id: 1ly3exz | name: t3_1ly3exz | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false | thumbnail: self
selftext: I recently read an anonymous PDF entitled "Pangu's Sorry". It is a late-night confession written by an employee of Huawei Noah's Ark Laboratory, and the content is shocking. This article details the inside story of the whole process of Huawei's Pangu large model from research and development to "suspected shell", invol...
url: https://www.reddit.com/r/LocalLLaMA/comments/1ly3exz/what_do_you_think_of_huaweis_pangu_model/
permalink: /r/LocalLLaMA/comments/1ly3exz/what_do_you_think_of_huaweis_pangu_model/
media: null
preview: {'enabled': False, 'images': [{'id': 'aDTM33NC_RwQjSbCsYQdPHQSZnwO27TXeh0S3vx-2s4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/aDTM33NC_RwQjSbCsYQdPHQSZnwO27TXeh0S3vx-2s4.png?width=108&crop=smart&auto=webp&s=d39f80b3ef397dbaa7f7208dbca68057d7f75121', 'width': 108}, {'height': 108, 'url': 'h...
title: Building a Claude/ChatGPT Projects-like system: How to implement persistent context with uploaded documents?
score: 0 | ups: 0 | author: Funny-Enthusiasm-610 | created: 2025-07-12T15:36:39 | domain: self.LocalLLaMA
id: 1ly3dk9 | name: t3_1ly3dk9 | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false | thumbnail: self
selftext: I want to build my own agent system similar to Claude Projects or ChatGPT Projects, where users can: * Upload documents that persist across conversations * Set custom instructions for the agent * Have the AI seamlessly reference uploaded materials **What I'm trying to replicate:** * Upload PDFs, docs, code files as ...
url: https://www.reddit.com/r/LocalLLaMA/comments/1ly3dk9/building_a_claudechatgpt_projectslike_system_how/
permalink: /r/LocalLLaMA/comments/1ly3dk9/building_a_claudechatgpt_projectslike_system_how/
media: null
preview: null

title: Anyone got lobe-chat-database working?
score: 1 | ups: 1 | author: reallionkiller | created: 2025-07-12T15:28:34 | domain: self.LocalLLaMA
id: 1ly36ht | name: t3_1ly36ht | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false | thumbnail: self
selftext: I was testing LobeChat on unraid docker and noticed that settings and chats don’t persist — once the browser is closed, everything’s lost. I wanted to try the `lobehub/lobe-chat-database` version to enable persistence with Postgres + MinIO, but I keep getting a 500 error. I believe the database and env variables are s...
url: https://www.reddit.com/r/LocalLLaMA/comments/1ly36ht/anyone_got_lobechatdatabase_working/
permalink: /r/LocalLLaMA/comments/1ly36ht/anyone_got_lobechatdatabase_working/
media: null
preview: null
title: Support for the LiquidAI LFM2 hybrid model family is now available in llama.cpp
score: 24 | ups: 24 | author: jacek2023 | created: 2025-07-12T15:27:49 | domain: github.com
id: 1ly35wd | name: t3_1ly35wd | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false | thumbnail: default
selftext: LFM2 is a new generation of hybrid models developed by [Liquid AI](https://www.liquid.ai/), specifically designed for edge AI and on-device deployment. It sets a new standard in terms of quality, speed, and memory efficiency. We're releasing the weights of three post-trained checkpoints with 350M, 700M, and 1.2B param...
url: https://github.com/ggml-org/llama.cpp/pull/14620
permalink: /r/LocalLLaMA/comments/1ly35wd/support_for_the_liquidai_lfm2_hybrid_model_family/
media: null
preview: {'enabled': False, 'images': [{'id': 'RAKCbtyHmD5AHPyuXInV3EboSSux75gI9ii4H-HnhzM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/RAKCbtyHmD5AHPyuXInV3EboSSux75gI9ii4H-HnhzM.png?width=108&crop=smart&auto=webp&s=922d22ba99cb62791a87c7c03e0c75e9f2c263bf', 'width': 108}, {'height': 108, 'url': 'h...

title: Using llama3.2-vision:11b for UI element identification
score: 2 | ups: 2 | author: mjTheThird | created: 2025-07-12T15:27:05 | domain: self.LocalLLaMA
id: 1ly358h | name: t3_1ly358h | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false | thumbnail: self
selftext: Hello /r/LocalLLaMA Anyone had any success with using llama3.2-vision:11b to identity UI element from a screenshot? something like the following: - input screenshot - query: where is the back button? - output: (x,y, width, height)
url: https://www.reddit.com/r/LocalLLaMA/comments/1ly358h/using_llama32vision11b_for_ui_element/
permalink: /r/LocalLLaMA/comments/1ly358h/using_llama32vision11b_for_ui_element/
media: null
preview: null
title: support LiquidAI LFM2 hybrid model family available in llama.cpp
score: 2 | ups: 2 | author: jacek2023 | created: 2025-07-12T15:25:12 | domain: github.com
id: 1ly33ng | name: t3_1ly33ng | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false | thumbnail: https://external-preview…1e53c6e9c8cf916b
selftext: LFM2 is a new generation of hybrid models developed by [Liquid AI](https://www.liquid.ai/), specifically designed for edge AI and on-device deployment. It sets a new standard in terms of quality, speed, and memory efficiency. We're releasing the weights of three post-trained checkpoints with 350M, 700M, and 1.2B param...
url: https://github.com/ggml-org/llama.cpp/pull/14620
permalink: /r/LocalLLaMA/comments/1ly33ng/support_liquidai_lfm2_hybrid_model_family/
media: null
preview: {'enabled': False, 'images': [{'id': 'RAKCbtyHmD5AHPyuXInV3EboSSux75gI9ii4H-HnhzM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/RAKCbtyHmD5AHPyuXInV3EboSSux75gI9ii4H-HnhzM.png?width=108&crop=smart&auto=webp&s=922d22ba99cb62791a87c7c03e0c75e9f2c263bf', 'width': 108}, {'height': 108, 'url': 'h...

title: How Are YOU Using LLMs? (A Quick Survey)
score: 0 | ups: 0 | author: kidupstart | created: 2025-07-12T14:44:13 | domain: self.LocalLLaMA
id: 1ly256a | name: t3_1ly256a | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false | thumbnail: self
selftext: I'm usually around here enjoying the discussions, and I've put together a short, 5-7 minute survey to better understand how all of you are using Large Language Models locally. I'm really curious about your setups, the tools and agents you're using, and what your day-to-day experience is like on the ground. Before I ...
url: https://www.reddit.com/r/LocalLLaMA/comments/1ly256a/how_are_you_using_llms_a_quick_survey/
permalink: /r/LocalLLaMA/comments/1ly256a/how_are_you_using_llms_a_quick_survey/
media: null
preview: null
title: AutoBE, No-code agent for Backend Application, writing 100% compilable code (Open Source)
score: 0 | ups: 0 | author: jhnam88 | created: 2025-07-12T14:13:16 | domain: /r/LocalLLaMA/comments/1ly1g02/autobe_nocode_agent_for_backend_application/
id: 1ly1g02 | name: t3_1ly1g02 | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false | thumbnail: https://external-preview…5fb8f97b004a52ba
selftext: AutoBE, No-code agent for Backend Application, writing 100% compilable code. - **GitHub Repository**: https://github.com/wrtnlabs/autobe - **Guide Documents**: https://wrtnlabs.io/autobe/docs - **Demo Result (Generated backend applications)** - Bulletin Board System: https://github.com/wrtnlabs/autobe-example-bbs ...
url: https://v.redd.it/2mwgcxkdbgcf1
permalink: /r/LocalLLaMA/comments/1ly1g02/autobe_nocode_agent_for_backend_application/
media: {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/2mwgcxkdbgcf1/DASHPlaylist.mpd?a=1755051203%2COWZiMzA5NDg1NzVlYWFiZDg0ZTYzNDhjMzJlMzAwYjg5MGQwMDVmZGY3NWE0MmY1OTdiYjRkYzZjZmRjNDdkMw%3D%3D&v=1&f=sd', 'duration': 877, 'fallback_url': 'https://v.redd.it/2mwgcxkdbgcf1/DASH_720.mp4?source=fallback', 'h...
preview: {'enabled': False, 'images': [{'id': 'cTJxYW54a2RiZ2NmMZU4z1ns0YguhnSerBikWRx-xW0PkW7LeDZSsrbnGGvR', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/cTJxYW54a2RiZ2NmMZU4z1ns0YguhnSerBikWRx-xW0PkW7LeDZSsrbnGGvR.png?width=108&crop=smart&format=pjpg&auto=webp&s=56adc6889b23d7de994637ca151c8dddafb0b...

title: Beginner's tip: How to fix the Jinja template error in LM Studio (in my case: for Mistral-qwq-12b-merge)
score: 5 | ups: 5 | author: hugo-the-second | created: 2025-07-12T14:09:46 | domain: self.LocalLLaMA
id: 1ly1d7v | name: t3_1ly1d7v | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false | thumbnail: self
selftext: Yesterday, I downloaded this model: [https://huggingface.co/Disya/Mistral-qwq-12b-merge-gguf](https://huggingface.co/Disya/Mistral-qwq-12b-merge-gguf) after someone recommended it for erp in a comment. "A mix between mistral and qwq? Sounds intriguing, I want to give it a try." It loaded fine, but when I tried...
url: https://www.reddit.com/r/LocalLLaMA/comments/1ly1d7v/beginners_tip_how_to_fix_the_jinja_template_error/
permalink: /r/LocalLLaMA/comments/1ly1d7v/beginners_tip_how_to_fix_the_jinja_template_error/
media: null
preview: {'enabled': False, 'images': [{'id': 'Aqe7MuYuCs98rlQ_uLEiIYTmQfrGi3PwUr_uAeCI1c4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Aqe7MuYuCs98rlQ_uLEiIYTmQfrGi3PwUr_uAeCI1c4.png?width=108&crop=smart&auto=webp&s=6d5ce391ab8ff43b14cef3df7d11941afbacd910', 'width': 108}, {'height': 116, 'url': 'h...
title: Simplest way to run single batch jobs for experiments on determinism
score: 6 | ups: 6 | author: Skiata | created: 2025-07-12T14:04:53 | domain: self.LocalLLaMA
id: 1ly19br | name: t3_1ly19br | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false | thumbnail: self
selftext: I am doing research on determinism of LLM responses and want to run as the only job on the server but don't quite have the LLM ops skills to be confident in the backend setup. I currently use the standard hosted solutions (OpenAI and together.ai) and I assume that I am sharing input buffers/caches with other jobs whi...
url: https://www.reddit.com/r/LocalLLaMA/comments/1ly19br/simplest_way_to_run_single_batch_jobs_for/
permalink: /r/LocalLLaMA/comments/1ly19br/simplest_way_to_run_single_batch_jobs_for/
media: null
preview: {'enabled': False, 'images': [{'id': 'jqi0odKJy6fJscofKk0VuwXRmAfkJMHhX47K7LhbRFo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/jqi0odKJy6fJscofKk0VuwXRmAfkJMHhX47K7LhbRFo.jpeg?width=108&crop=smart&auto=webp&s=e83ed5374520087c4f2e15d5222be950ed938a6d', 'width': 108}, {'height': 108, 'url': '...

title: Grok4 consults with daddy on answers
score: 0 | ups: 0 | author: throwawayacc201711 | created: 2025-07-12T14:03:20 | domain: apnews.com
id: 1ly182t | name: t3_1ly182t | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false | thumbnail: default
url: https://apnews.com/article/grok-4-elon-musk-xai-colossus-14d575fb490c2b679ed3111a1c83f857
permalink: /r/LocalLLaMA/comments/1ly182t/grok4_consults_with_daddy_on_answers/
media: null
preview: {'enabled': False, 'images': [{'id': 'XIKzXUflHPuqjOItssqrAXFOcag8oXYxdjPMUzrlgkI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/XIKzXUflHPuqjOItssqrAXFOcag8oXYxdjPMUzrlgkI.jpeg?width=108&crop=smart&auto=webp&s=e484e8eb03faedc41073355bee78085c8f7b60a2', 'width': 108}, {'height': 121, 'url': '...
title: Are there any builder companies that sell pre-assembled Blackwell 6000 machines?
score: 2 | ups: 2 | author: richardanaya | created: 2025-07-12T13:38:18 | domain: self.LocalLLaMA
id: 1ly0oln | name: t3_1ly0oln | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false | thumbnail: self
selftext: Everytime I peek at a builders GPU options I feel I never see it go that high. Anyone ever hear of a reputable builder with that power?
url: https://www.reddit.com/r/LocalLLaMA/comments/1ly0oln/are_there_any_builder_companies_that_sell/
permalink: /r/LocalLLaMA/comments/1ly0oln/are_there_any_builder_companies_that_sell/
media: null
preview: null

title: It's been a while, I'm out of date, suggest me a model
score: 2 | ups: 2 | author: mmmm_frietjes | created: 2025-07-12T13:31:36 | domain: self.LocalLLaMA
id: 1ly0jnx | name: t3_1ly0jnx | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false | thumbnail: self
selftext: I have 32 GB of ram and a 4060 TI 16 GB. What's the best model I can run right now? Is there a website where you can just enter your specs and it spits out compatible models? What's the best local UI right now? LM Studio?
url: https://www.reddit.com/r/LocalLLaMA/comments/1ly0jnx/its_been_a_while_im_out_of_date_suggest_me_a_model/
permalink: /r/LocalLLaMA/comments/1ly0jnx/its_been_a_while_im_out_of_date_suggest_me_a_model/
media: null
preview: null
title: Demo Video of AutoBE, No-code agent for Backend Application, writing 100% compilable code (Open Source)
score: 2 | ups: 2 | author: jhnam88 | created: 2025-07-12T13:08:41 | domain: /r/LocalLLaMA/comments/1ly02iv/demo_video_of_autobe_nocode_agent_for_backend/
id: 1ly02iv | name: t3_1ly02iv | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false | thumbnail: https://external-preview…edae0329df76cc5c
selftext: AutoBE, No-code agent for Backend Application, writing 100% compilable code - **GitHub Repository**: https://github.com/wrtnlabs/autobe - **Guide Documents**: https://wrtnlabs.io/autobe/docs - **Demo Result (Generated backend applications)** - Bulletin Board System: https://github.com/wrtnlabs/autobe-example-bbs -...
url: https://v.redd.it/9gosk8exzfcf1
permalink: /r/LocalLLaMA/comments/1ly02iv/demo_video_of_autobe_nocode_agent_for_backend/
media: {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/9gosk8exzfcf1/DASHPlaylist.mpd?a=1755047329%2CMGFlMjRiMmRjMGVkOTE1MGQwODYwZjkyZjVjZjZmNDZmMGU3NjlmZjNlMjFhZjcwNWY5ZjI5ZmYyMDdhMjg2Zg%3D%3D&v=1&f=sd', 'duration': 877, 'fallback_url': 'https://v.redd.it/9gosk8exzfcf1/DASH_720.mp4?source=fallback', 'h...
preview: {'enabled': False, 'images': [{'id': 'M2hubTNhZXh6ZmNmMZU4z1ns0YguhnSerBikWRx-xW0PkW7LeDZSsrbnGGvR', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/M2hubTNhZXh6ZmNmMZU4z1ns0YguhnSerBikWRx-xW0PkW7LeDZSsrbnGGvR.png?width=108&crop=smart&format=pjpg&auto=webp&s=d82cc46bf358c864562777e3dceeb44bb1f17...

title: Suggest a Suitable Ai Model to run locally ( beginner)
score: 5 | ups: 5 | author: Spectre-i4 | created: 2025-07-12T12:47:59 | domain: self.LocalLLaMA
id: 1lxzn8c | name: t3_1lxzn8c | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false | thumbnail: self
selftext: i want to run ai model locally , i have 8gb ram , 2gb vram , i5 8th gen , i want to test bit smaller to start , if i use 3b parameter model , what are perks of using and can i integrate it into my system , automating tasks and full personal assistant?
url: https://www.reddit.com/r/LocalLLaMA/comments/1lxzn8c/suggest_a_suitable_ai_model_to_run_locally/
permalink: /r/LocalLLaMA/comments/1lxzn8c/suggest_a_suitable_ai_model_to_run_locally/
media: null
preview: null
title: What LLM Workflow UI Are You Using?
score: 4 | ups: 4 | author: AaronFeng47 | created: 2025-07-12T12:17:57 | domain: self.LocalLLaMA
id: 1lxz268 | name: t3_1lxz268 | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false | thumbnail: self
selftext: I just started experimenting with LLM workflow using n8n, and I built a workflow to improve the translation quality of my local LLM, sure it works but I found it lacking some basic functions, like I need to write JavaScript for some very basic things I'm not an professional AI workflow developer, I just want to impro...
url: https://www.reddit.com/r/LocalLLaMA/comments/1lxz268/what_llm_workflow_ui_are_you_using/
permalink: /r/LocalLLaMA/comments/1lxz268/what_llm_workflow_ui_are_you_using/
media: null
preview: null

title: we have to delay it
score: 2,722 | ups: 2,722 | author: ILoveMy2Balls | created: 2025-07-12T12:08:26 | domain: i.redd.it
id: 1lxyvto | name: t3_1lxyvto | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false | thumbnail: https://b.thumbs.redditm…eScEcpz6UlEE.jpg
url: https://i.redd.it/oma34zdapfcf1.jpeg
permalink: /r/LocalLLaMA/comments/1lxyvto/we_have_to_delay_it/
media: null
preview: {'enabled': True, 'images': [{'id': 'kv1bPWQK0QfjBY4lkNaJeVxIc-aQd3DEWwNTmfHkEhw', 'resolutions': [{'height': 79, 'url': 'https://preview.redd.it/oma34zdapfcf1.jpeg?width=108&crop=smart&auto=webp&s=cc062ecd361f54588924802e9a8d113aeaaaa827', 'width': 108}, {'height': 158, 'url': 'https://preview.redd.it/oma34zdapfcf1.jp...
title: we have to delay it
score: 1 | ups: 1 | author: ILoveMy2Balls | created: 2025-07-12T12:07:15 | domain: self.LocalLLaMA
id: 1lxyv25 | name: t3_1lxyv25 | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false | thumbnail: https://a.thumbs.redditm…-6tLl7PvJoA0.jpg
selftext: https://preview.redd.it/…layed by a week
url: https://www.reddit.com/r/LocalLLaMA/comments/1lxyv25/we_have_to_delay_it/
permalink: /r/LocalLLaMA/comments/1lxyv25/we_have_to_delay_it/
media: null
preview: null

title: "We will release o3 wieghts next week"
score: 1,413 | ups: 1,413 | author: Qparadisee | created: 2025-07-12T11:48:49 | domain: v.redd.it
id: 1lxyj92 | name: t3_1lxyj92 | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false | thumbnail: https://external-preview…d964af1a0c494b31
url: https://v.redd.it/8iqku5brlfcf1
permalink: /r/LocalLLaMA/comments/1lxyj92/we_will_release_o3_wieghts_next_week/
media: {'reddit_video': {'bitrate_kbps': 800, 'dash_url': 'https://v.redd.it/8iqku5brlfcf1/DASHPlaylist.mpd?a=1754912945%2CN2E3OWY0NDQ1ODg3MWM3ZTcxNGFmZmE5ZjM0YzU1YmI2MGY2MzhlY2E0M2E3YzAxYmNkMWMyNTk4ZGEzZTY5NA%3D%3D&v=1&f=sd', 'duration': 37, 'fallback_url': 'https://v.redd.it/8iqku5brlfcf1/DASH_360.mp4?source=fallback', 'has...
preview: {'enabled': False, 'images': [{'id': 'MHB1MWw1YnJsZmNmMdL5zV43Lc9tDShB7DOvm21L1LTW10tUK0LGB85rL2PQ', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/MHB1MWw1YnJsZmNmMdL5zV43Lc9tDShB7DOvm21L1LTW10tUK0LGB85rL2PQ.png?width=108&crop=smart&format=pjpg&auto=webp&s=6338a91401d613f89c37a6107a739c43ba939...
Have you tried that new devstral?! Myyy! The next 8x7b?
52
Been here since the llama1 era.. what a crazy ride! Now we have that little devstral 2507. To me it feels as good as the first deepseek R1 but runs on dual 3090s! (Ofc q8 with 45k ctx). Do you feel the same thing? Ho my.. open weights models won't be as fun without Mistral 🇨🇵 (To me it feels like 8x7b again but b...
2025-07-12T11:43:56
https://www.reddit.com/r/LocalLLaMA/comments/1lxyg6z/have_you_tried_that_new_devstral_myyy_the_next/
No_Afternoon_4260
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lxyg6z
false
null
t3_1lxyg6z
/r/LocalLLaMA/comments/1lxyg6z/have_you_tried_that_new_devstral_myyy_the_next/
false
false
self
52
null
Safety first, or whatever🙄
175
2025-07-12T11:37:36
https://i.redd.it/idk5uvesjfcf1.jpeg
Porespellar
i.redd.it
1970-01-01T00:00:00
0
{}
1lxycdh
false
null
t3_1lxycdh
/r/LocalLLaMA/comments/1lxycdh/safety_first_or_whatever/
false
false
default
175
{'enabled': True, 'images': [{'id': 'idk5uvesjfcf1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/idk5uvesjfcf1.jpeg?width=108&crop=smart&auto=webp&s=2cc3224e9e3248f9795321cf57fc214ae879cd51', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/idk5uvesjfcf1.jpeg?width=216&crop=smart&auto=...
What is your "perfect" £10,000 for Local LLM, Gaming, plex with the following conditional and context.
6
Hi all, I wanted to rewrite my question and put it as a discussion, in December I will be building/buying a computer to be a Home companion/nas/plex/gaming system, it will be running 24/7 and be part of a disabled person's (me) safe space and will be both a companion and entertainment. It will run PC games, Silly tave...
2025-07-12T11:36:42
https://www.reddit.com/r/LocalLLaMA/comments/1lxybu4/what_is_your_perfect_10000_for_local_llm_gaming/
Quebber
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lxybu4
false
null
t3_1lxybu4
/r/LocalLLaMA/comments/1lxybu4/what_is_your_perfect_10000_for_local_llm_gaming/
false
false
self
6
null
newbie here. Is this normal? Am I doing everything wrong? Am I asking too much? Gemma3 4b was transcribing ok with some mistakes
0
https://preview.redd.it/…2078aa99f0 hehe
2025-07-12T11:31:55
https://www.reddit.com/r/LocalLLaMA/comments/1lxy8xz/newbie_here_is_this_normal_am_i_doing_everything/
Super_Snowbro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lxy8xz
false
null
t3_1lxy8xz
/r/LocalLLaMA/comments/1lxy8xz/newbie_here_is_this_normal_am_i_doing_everything/
false
false
https://b.thumbs.redditm…DCwbSPbko6_w.jpg
0
null
i’m building a platform where you can use your local gpus, rent remote gpus, or use co-op shared gpus. what is more important to you?
0
It is a difficult bit of UX to figure out and I didn’t want to go with what felt right to me. [View Poll](https://www.reddit.com/poll/1lxxgm2)
2025-07-12T10:43:39
https://www.reddit.com/r/LocalLLaMA/comments/1lxxgm2/im_building_a_platform_where_you_can_use_your/
okaris
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lxxgm2
false
null
t3_1lxxgm2
/r/LocalLLaMA/comments/1lxxgm2/im_building_a_platform_where_you_can_use_your/
false
false
self
0
null
Local LLM on laptop?
2
How bad are laptops for running LLMs? I am going to get a laptop this August and would love to run a 5B-7B local LLM. How feasible is this?
2025-07-12T10:22:09
https://www.reddit.com/r/LocalLLaMA/comments/1lxx4sb/local_llm_on_laptop/
ontologicalmemes
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lxx4sb
false
null
t3_1lxx4sb
/r/LocalLLaMA/comments/1lxx4sb/local_llm_on_laptop/
false
false
self
2
null
Rtx 5060ti 16gb vs Rtx 3090
4
Hey, I am an llm privacy researcher, I need a SFF build as my personal machine, that I plan to travel with and use to show live demonstrations to potential enterprise clients, will host an 8B llm plus some basic overheads like BERT The 5060ti is new, reliable ( i can buy for 450$ in my country) cheap and comes with w...
2025-07-12T09:54:03
https://www.reddit.com/r/LocalLLaMA/comments/1lxwpqp/rtx_5060ti_16gb_vs_rtx_3090/
Alpine_Privacy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lxwpqp
false
null
t3_1lxwpqp
/r/LocalLLaMA/comments/1lxwpqp/rtx_5060ti_16gb_vs_rtx_3090/
false
false
self
4
null
Best setup for ~20 tokens/sec DeepSeek R1 671B Q8 w/ 128K context window
24
What am I looking at for something that can run DeepSeek R1 Q8 w/ full 128K context window? I know an Epyc setup can do this, I am not sure about if it can hit 20 tokens/second. I suspect it will need 1024G ram, potentially more? Anyone have a CPU system running full DeepSeek R1 (ideally Q8) at 20+ tokens/second? ...
2025-07-12T09:51:28
https://www.reddit.com/r/LocalLLaMA/comments/1lxwodv/best_setup_for_20_tokenssec_deepseek_r1_671b_q8_w/
MidnightProgrammer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lxwodv
false
null
t3_1lxwodv
/r/LocalLLaMA/comments/1lxwodv/best_setup_for_20_tokenssec_deepseek_r1_671b_q8_w/
false
false
self
24
null
Performant open weights foundation text-specific models are where now?
3
I’m after a decently sized - by which I mean 50B+ parameters - text-focused foundation model I can fine-tune for a specific use case. I have the dataset, I have the hardware. What I don’t have is a suitable LLM to use as a base. Something like Llama 3.3-70b would be perfect, but that’s only being distributed as an inst...
2025-07-12T09:26:07
https://www.reddit.com/r/LocalLLaMA/comments/1lxwb4m/performant_open_weights_foundation_textspecific/
psychonomy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lxwb4m
false
null
t3_1lxwb4m
/r/LocalLLaMA/comments/1lxwb4m/performant_open_weights_foundation_textspecific/
false
false
self
3
null
can i post about my project here
1
[removed]
2025-07-12T09:22:01
https://www.reddit.com/r/LocalLLaMA/comments/1lxw902/can_i_post_about_my_project_here/
Dapper-Deal-6689
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lxw902
false
null
t3_1lxw902
/r/LocalLLaMA/comments/1lxw902/can_i_post_about_my_project_here/
false
false
self
1
null
New GPU 7900 XT vs 9070 XT where price difference is ~40 USD
3
Hi everyone, I'm currently building a new rig to get my feet wet with LLMs. There is a sale where I live and these 2 GPUs are pretty much the same price, with the 9070 XT being \~40 USD more expensive. The trade-off would be the 4GB extra VRAM on the 7900 XT vs PCIE 5 on the newer 9070 XT. 7900 XTX is out of the question s...
2025-07-12T09:19:03
https://www.reddit.com/r/LocalLLaMA/comments/1lxw7es/new_gpu_7900_xt_vs_9070_xt_where_price_difference/
restless_forever
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lxw7es
false
null
t3_1lxw7es
/r/LocalLLaMA/comments/1lxw7es/new_gpu_7900_xt_vs_9070_xt_where_price_difference/
false
false
self
3
null
We built an open-source medical triage benchmark
112
Medical triage means determining whether symptoms require emergency care, urgent care, or can be managed with self-care. This matters because LLMs are increasingly becoming the "digital front door" for health concerns—replacing the instinct to just Google it. Getting triage wrong can be dangerous (missed emergencies) ...
2025-07-12T09:12:26
https://www.reddit.com/r/LocalLLaMA/comments/1lxw3zz/we_built_an_opensource_medical_triage_benchmark/
Significant-Pair-275
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lxw3zz
false
null
t3_1lxw3zz
/r/LocalLLaMA/comments/1lxw3zz/we_built_an_opensource_medical_triage_benchmark/
false
false
self
112
{'enabled': False, 'images': [{'id': 'YGRuXIPLJmfx-HMMUWxIo3PT1Eu1Kllj_TeA0JBYYtI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YGRuXIPLJmfx-HMMUWxIo3PT1Eu1Kllj_TeA0JBYYtI.png?width=108&crop=smart&auto=webp&s=b9c7e10a1a4f6aeffdd4ad9ec00fba71b13e9850', 'width': 108}, {'height': 108, 'url': 'h...
Traditional Data Science work is going to be back
41
I just checked the monthly LLM API costs at my firm, and it's insanely high. I don’t see this being sustainable for much longer. Eventually, senior management will realize it and start cutting down on these expenses. Companies will likely shift towards hosting smaller LLMs internally for agentic use cases instead of re...
2025-07-12T08:48:24
https://www.reddit.com/r/LocalLLaMA/comments/1lxvrjm/traditional_data_science_work_is_going_to_be_back/
Competitive_Push5407
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lxvrjm
false
null
t3_1lxvrjm
/r/LocalLLaMA/comments/1lxvrjm/traditional_data_science_work_is_going_to_be_back/
false
false
self
41
null
5090 minimum power limit = 400W ?
3
Please tell me if you can limit your 5090 down to 300W or below, and the driver version. I think I've seen reports that it could be limited to 300W but now the lower limit is 400W; it seems that the Jacket is jacking with us.
2025-07-12T08:29:09
https://www.reddit.com/r/LocalLLaMA/comments/1lxvh5t/5090_minimum_power_limit_400w/
MelodicRecognition7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lxvh5t
false
null
t3_1lxvh5t
/r/LocalLLaMA/comments/1lxvh5t/5090_minimum_power_limit_400w/
false
false
self
3
null
Qwen 3 Embeddings 0.6B faring really poorly inspite of high score on benchmarks
38
### Background & Brief Setup We need a robust intent/sentiment classification and RAG pipeline, for which we plan on using embeddings, for a latency sensitive consumer facing product. We are planning to deploy a small embedding model on a inference optimized GCE VM for the same. I am currently running TEI (by Hugging...
2025-07-12T08:25:00
https://www.reddit.com/r/LocalLLaMA/comments/1lxvf0j/qwen_3_embeddings_06b_faring_really_poorly/
i4858i
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lxvf0j
false
null
t3_1lxvf0j
/r/LocalLLaMA/comments/1lxvf0j/qwen_3_embeddings_06b_faring_really_poorly/
false
false
self
38
null
What drives progress in newer LLMs?
22
I am assuming most LLMs today use more or less a similar architecture. I am also assuming the initial training data is mostly the same (i.e. books, wikipedia etc), and probably close to being exhausted already? So what would make a future major version of an LLM much better than the previous one? I get post training ...
2025-07-12T08:08:38
https://www.reddit.com/r/LocalLLaMA/comments/1lxv6a5/what_drives_progress_in_newer_llms/
cangaroo_hamam
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lxv6a5
false
null
t3_1lxv6a5
/r/LocalLLaMA/comments/1lxv6a5/what_drives_progress_in_newer_llms/
false
false
self
22
null
How does having a very long context window impact performance?
9
As per the title. I want to run a model for dnd, the plan is to use Gemma 3 27b and max out the context length so that the model can remember things. Once the context fills up, I plan to ask the model to summarise the session and paste it into a new instance to continue. I have tried it with Gemini 2.5 Pro and the meth...
2025-07-12T07:46:52
https://www.reddit.com/r/LocalLLaMA/comments/1lxuu5m/how_does_having_a_very_long_context_window_impact/
opoot_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lxuu5m
false
null
t3_1lxuu5m
/r/LocalLLaMA/comments/1lxuu5m/how_does_having_a_very_long_context_window_impact/
false
false
self
9
null
Best model for M3 Max 96GB?
5
Hey there, I got an M3 Max 96GB, which model do you guys think is the best for my hardware? For context, I mostly do light coding and agentic workflows that use MCP for data analytics. Thanks!
2025-07-12T07:21:25
https://www.reddit.com/r/LocalLLaMA/comments/1lxufzz/best_model_for_m3_max_96gb/
gaztrab
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lxufzz
false
null
t3_1lxufzz
/r/LocalLLaMA/comments/1lxufzz/best_model_for_m3_max_96gb/
false
false
self
5
null
Trying to use AI agent to play N-puzzle but the agent could only solve 8-puzzle but completely failed on 15-puzzle.
2
Hi everyone, I'm trying to write a simple demo which uses an AI agent to play the N-puzzle. I envision that the AI would use move_up, move_down, move_right, move_left to change the game state, and also a print_state tool to print the current state. Here is my code: from pdb import set_trace import os ...
2025-07-12T06:24:06
https://www.reddit.com/r/LocalLLaMA/comments/1lxtivp/trying_to_use_ai_agent_to_play_npuzzle_but_the/
CommunityOpposite645
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lxtivp
false
null
t3_1lxtivp
/r/LocalLLaMA/comments/1lxtivp/trying_to_use_ai_agent_to_play_npuzzle_but_the/
false
false
self
2
null
7/11 Update on Design Arena: Added Devstral, Qwen, and kimi-k2, Grok 4 struggling but coding model coming out later?
47
Read this post for [context](https://www.reddit.com/r/LocalLLaMA/comments/1lu7lsi/uiux_benchmark_update_and_response_more_models/). Here are some updates: 1. We've added a [changelog](https://www.designarena.ai/changelog) of when each model was added or deactivated from the arena. System prompts can be found in [metho...
2025-07-12T06:21:21
https://i.redd.it/y1r7gm6xydcf1.png
adviceguru25
i.redd.it
1970-01-01T00:00:00
0
{}
1lxth6s
false
null
t3_1lxth6s
/r/LocalLLaMA/comments/1lxth6s/711_update_on_design_arena_added_devstral_qwen/
false
false
default
47
{'enabled': True, 'images': [{'id': 'y1r7gm6xydcf1', 'resolutions': [{'height': 78, 'url': 'https://preview.redd.it/y1r7gm6xydcf1.png?width=108&crop=smart&auto=webp&s=6337a58a55ba1b51ed8a34aff7d5ea0f2f0b508e', 'width': 108}, {'height': 157, 'url': 'https://preview.redd.it/y1r7gm6xydcf1.png?width=216&crop=smart&auto=web...
MBP M3 Max 36 GB Memory - what can I run?
0
Hey everyone! I didn’t specifically buy my MacBook Pro (M3 Max, 36GB unified memory) to run LLMs, but now that I’m working in tech, I’m curious what kinds of models I can realistically run locally. I know 36GB might be a bit limiting for some larger models, but I’d love to hear your experience or suggestions on what ...
2025-07-12T05:17:37
https://www.reddit.com/r/LocalLLaMA/comments/1lxseu8/mbp_m3_max_36_gb_memory_what_can_i_run/
DeepTarget8436
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lxseu8
false
null
t3_1lxseu8
/r/LocalLLaMA/comments/1lxseu8/mbp_m3_max_36_gb_memory_what_can_i_run/
false
false
self
0
null
Is there a way to sort models by download size in LM Studio?
0
Is there a way to sort models by download size in LM Studio? That's my first criterion for selecting a model.
2025-07-12T05:05:30
https://www.reddit.com/r/LocalLLaMA/comments/1lxs7c9/is_there_a_way_to_sort_models_by_download_size_in/
THenrich
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lxs7c9
false
null
t3_1lxs7c9
/r/LocalLLaMA/comments/1lxs7c9/is_there_a_way_to_sort_models_by_download_size_in/
false
false
self
0
null
New local AI system planning stage need advice.
1
Hi all, In December I will be buying or putting together a new home for my AI assistant, up to now I've run home AI assistants on everything from a minisforum mini pc, full PC with a 7900xtx/3090/4090/4060ti/5060ti. This is a primary part of my treatment/companion/helper for Autism and other issues, I use it in gamin...
2025-07-12T04:54:51
https://www.reddit.com/r/LocalLLaMA/comments/1lxs0s0/new_local_ai_system_planning_stage_need_advice/
Quebber
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lxs0s0
false
null
t3_1lxs0s0
/r/LocalLLaMA/comments/1lxs0s0/new_local_ai_system_planning_stage_need_advice/
false
false
self
1
null
Semantic code search for local directory
10
Hi folks—just wanted to share something we’ve been working on. If you’ve tried using Claude Code or Gemini CLI for local projects, you’ve probably noticed it can only search with basic *grep*. That makes it hard to find things like a \`Crawler\` class when you’re searching for “scrape”. We built an open-source tool th...
2025-07-12T04:51:24
https://www.reddit.com/r/LocalLLaMA/comments/1lxryp4/semantic_code_search_for_local_directory/
codingjaguar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lxryp4
false
null
t3_1lxryp4
/r/LocalLLaMA/comments/1lxryp4/semantic_code_search_for_local_directory/
false
false
self
10
{'enabled': False, 'images': [{'id': 'FVczwBNkvJlkikO1Xpgiwpl-jHdNZ7ONOTzzFENcvJs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/FVczwBNkvJlkikO1Xpgiwpl-jHdNZ7ONOTzzFENcvJs.png?width=108&crop=smart&auto=webp&s=7620ebbc02f57fa65d16be80949dfb1b7d4ba235', 'width': 108}, {'height': 108, 'url': 'h...
Kimi K2 q4km is here and also the instructions to run it locally with KTransformers 10-14tps
241
As a partner with Moonshot AI, we present you the q4km version of Kimi K2 and the instructions to run it with KTransformers. [KVCache-ai/Kimi-K2-Instruct-GGUF · Hugging Face](https://huggingface.co/KVCache-ai/Kimi-K2-Instruct-GGUF) [ktransformers/doc/en/Kimi-K2.md at main · kvcache-ai/ktransformers](https://github.c...
2025-07-12T04:06:23
https://huggingface.co/KVCache-ai/Kimi-K2-Instruct-GGUF
CombinationNo780
huggingface.co
1970-01-01T00:00:00
0
{}
1lxr5s3
false
null
t3_1lxr5s3
/r/LocalLLaMA/comments/1lxr5s3/kimi_k2_q4km_is_here_and_also_the_instructions_to/
false
false
https://external-preview…6f33bdcbd743562e
241
{'enabled': False, 'images': [{'id': '7um7XAkvHQRx2MF4TRo122daGoOYixRc6uShLTRN9Tw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/7um7XAkvHQRx2MF4TRo122daGoOYixRc6uShLTRN9Tw.png?width=108&crop=smart&auto=webp&s=7e95d53ecb2c94b00d53ef66bf67cdceb012ec71', 'width': 108}, {'height': 116, 'url': 'h...
Kimi K2 q4km is here and also the instructions to run it with KTransformers
1
As a partner with Moonshot AI, we present you the q4km version of Kimi K2 and the instructions to run it with KTransformers. [https://huggingface.co/KVCache-ai/Kimi-K2-Instruct-GGUF](https://huggingface.co/KVCache-ai/Kimi-K2-Instruct-GGUF) [https://github.com/kvcache-ai/ktransformers/blob/main/doc/en/Kimi-K2.md](http...
2025-07-12T04:04:14
https://huggingface.co/KVCache-ai/Kimi-K2-Instruct-GGUF
CombinationNo780
huggingface.co
1970-01-01T00:00:00
0
{}
1lxr4g4
false
null
t3_1lxr4g4
/r/LocalLLaMA/comments/1lxr4g4/kimie_k2_q4km_is_here_and_also_the_instructions/
false
false
https://external-preview…6f33bdcbd743562e
1
{'enabled': False, 'images': [{'id': '7um7XAkvHQRx2MF4TRo122daGoOYixRc6uShLTRN9Tw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/7um7XAkvHQRx2MF4TRo122daGoOYixRc6uShLTRN9Tw.png?width=108&crop=smart&auto=webp&s=7e95d53ecb2c94b00d53ef66bf67cdceb012ec71', 'width': 108}, {'height': 116, 'url': 'h...
Will an 8gbvram laptop gpu add any value?
1
Im trying to sus out if getting a mid tier cpu and a 5050, or 4060 on a laptop with sodimm memory would be more advantageous than getting a ryzen 9 hx370 with lpddr5x 7500mhz. Would having 8gb vram from the gpu actually yield noticeable results over the igpu of the hx370 being able to leverage the ram? Both options w...
2025-07-12T03:33:31
https://www.reddit.com/r/LocalLLaMA/comments/1lxqk44/will_an_8gbvram_laptop_gpu_add_any_value/
plzdonforgetthisname
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lxqk44
false
null
t3_1lxqk44
/r/LocalLLaMA/comments/1lxqk44/will_an_8gbvram_laptop_gpu_add_any_value/
false
false
self
1
null
Crushing the limits of AI.
1
🔥 AI jailbreak → Exploit the flaws, create revolutionary content. "Innovation without borders" https://payhip.com/b/1uU8B
2025-07-12T03:19:09
https://v.redd.it/livu63nb2dcf1
AdParty3160
/r/LocalLLaMA/comments/1lxqal1/écraser_les_limites_de_lia/
1970-01-01T00:00:00
0
{}
1lxqal1
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/livu63nb2dcf1/DASHPlaylist.mpd?a=1755011953%2CMDlkMjRlYzQyNGY2N2E0ZWUzZDgxZjg2NTAxNGM1NjZiYzk4YWY4Yzc4MTg0OTM2NjkyN2NhOWJhMTI4Yjk2Yw%3D%3D&v=1&f=sd', 'duration': 256, 'fallback_url': 'https://v.redd.it/livu63nb2dcf1/DASH_720.mp4?source=fallback', 'h...
t3_1lxqal1
/r/LocalLLaMA/comments/1lxqal1/écraser_les_limites_de_lia/
false
false
https://external-preview…48633adf4d1ebb25
1
{'enabled': False, 'images': [{'id': 'Ym94ZGowcWIyZGNmMa9h-_OskizjW36xw5ck4M1oJRfF6YAEF2w3jp5uZoIt', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/Ym94ZGowcWIyZGNmMa9h-_OskizjW36xw5ck4M1oJRfF6YAEF2w3jp5uZoIt.png?width=108&crop=smart&format=pjpg&auto=webp&s=2e82f815c3b5b7711e7e4fc9c8741492679f...
Offline AI — Calling All Experts and Noobs
0
Im not sure what percentage of you all use a small size of ollama vs bigger versions and wanted some discourse/thoughts/advice In my mind the goal having a offline ai system is more about thriving and less about surviving. As this tech develops it’s going to start to become easier and easier to monetize from. The reas...
2025-07-12T02:57:22
https://www.reddit.com/r/LocalLLaMA/comments/1lxpw2g/offline_ai_calling_all_experts_and_noobs/
ManagerAdditional374
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lxpw2g
false
null
t3_1lxpw2g
/r/LocalLLaMA/comments/1lxpw2g/offline_ai_calling_all_experts_and_noobs/
false
false
self
0
null
Need help with my interview ASAP
0
I've been assigned a task by a company I've applied to, to finish in 2 days. It's an Agentic AI POC that I'm expected to build to fulfill their requirements. Can someone please guide me through defining the architecture?
2025-07-12T02:51:19
https://www.reddit.com/r/LocalLLaMA/comments/1lxps1s/need_help_with_my_interview_asap/
Sick__sock
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lxps1s
false
null
t3_1lxps1s
/r/LocalLLaMA/comments/1lxps1s/need_help_with_my_interview_asap/
false
false
self
0
null
Where that Unsloth Q0.01_K_M GGUF at?
610
2025-07-12T02:37:05
https://i.redd.it/e2em6rucvccf1.jpeg
Porespellar
i.redd.it
1970-01-01T00:00:00
0
{}
1lxpidc
false
null
t3_1lxpidc
/r/LocalLLaMA/comments/1lxpidc/where_that_unsloth_q001_k_m_gguf_at/
false
false
default
610
{'enabled': True, 'images': [{'id': 'e2em6rucvccf1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/e2em6rucvccf1.jpeg?width=108&crop=smart&auto=webp&s=d086bd0581bd67e0ab1809820331054699c24205', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/e2em6rucvccf1.jpeg?width=216&crop=smart&auto=...
Simple Comparison: Kimi K2 vs. Gemini 1.5 Pro - HTML Output for Model Eval Insights
1
[removed]
2025-07-12T02:30:19
https://www.reddit.com/gallery/1lxpdtc
DataLearnerAI
reddit.com
1970-01-01T00:00:00
0
{}
1lxpdtc
false
null
t3_1lxpdtc
/r/LocalLLaMA/comments/1lxpdtc/simple_comparison_kimi_k2_vs_gemini_15_pro_html/
false
false
https://b.thumbs.redditm…k-vzC3DkJi3s.jpg
1
null
How to SFT diffusion large language model ?
8
I’m wondering if there’s any way to perform SFT (Supervised Fine-Tuning) on a diffusion-based large language model. If anyone has experience with this, could you please share your insights?
2025-07-12T02:11:48
https://www.reddit.com/r/LocalLLaMA/comments/1lxp144/how_to_sft_diffusion_large_language_model/
ProfessionalGuess884
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lxp144
false
null
t3_1lxp144
/r/LocalLLaMA/comments/1lxp144/how_to_sft_diffusion_large_language_model/
false
false
self
8
null
who would’ve thought: OpenAI’s open weight model delayed again.
1
2025-07-12T02:05:25
https://i.redd.it/mapmau6jpccf1.png
jacek2023
i.redd.it
1970-01-01T00:00:00
0
{}
1lxowoc
false
null
t3_1lxowoc
/r/LocalLLaMA/comments/1lxowoc/who_wouldve_thought_openais_open_weight_model/
false
false
default
1
{'enabled': True, 'images': [{'id': 'mapmau6jpccf1', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/mapmau6jpccf1.png?width=108&crop=smart&auto=webp&s=19f4dada7f1b29cd5893f0579ceb3f146b225a55', 'width': 108}, {'height': 117, 'url': 'https://preview.redd.it/mapmau6jpccf1.png?width=216&crop=smart&auto=web...
Why don’t we have a big torrent repo for open-source LLMs?
174
Why hasn’t anyone created a centralized repo or tracker that hosts torrents for popular open-source LLMs?
2025-07-12T01:32:11
https://www.reddit.com/r/LocalLLaMA/comments/1lxo8za/why_dont_we_have_a_big_torrent_repo_for/
somthing_tn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lxo8za
false
null
t3_1lxo8za
/r/LocalLLaMA/comments/1lxo8za/why_dont_we_have_a_big_torrent_repo_for/
false
false
self
174
null
Kimi K2 is funny and great
161
I LOVE the way this model produces responses. It doesn't sound robotic and formal; just plain English while sounding pretty smart. Also has strong creativity in my tests. Here is a prompt I asked to K2 with search enabled from the site [kimi.com](http://kimi.com) >Bash Grok 4 and prove you're better than it. Answer:...
2025-07-12T01:21:10
https://www.reddit.com/r/LocalLLaMA/comments/1lxo0xc/kimi_k2_is_funny_and_great/
theskilled42
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lxo0xc
false
null
t3_1lxo0xc
/r/LocalLLaMA/comments/1lxo0xc/kimi_k2_is_funny_and_great/
false
false
self
161
null
OpenAI open-weight model delayed indefinitely
3
2025-07-12T01:19:59
https://i.redd.it/1f8g9z7jhccf1.png
aitookmyj0b
i.redd.it
1970-01-01T00:00:00
0
{}
1lxo005
false
null
t3_1lxo005
/r/LocalLLaMA/comments/1lxo005/openai_openweight_model_delayed_indefinitely/
false
false
https://a.thumbs.redditm…t2KRC5ZgjWs8.jpg
3
{'enabled': True, 'images': [{'id': 'YVnFq40mZ6sQXJhzfPWBGESZxNDWR04ZXUYM-R59_nQ', 'resolutions': [{'height': 67, 'url': 'https://preview.redd.it/1f8g9z7jhccf1.png?width=108&crop=smart&auto=webp&s=3ffd022014da0d16c095591b6b6e00c3a37a0eb0', 'width': 108}, {'height': 134, 'url': 'https://preview.redd.it/1f8g9z7jhccf1.png...
Does this mean it’s likely not gonna be open source?
281
What do you all think?
2025-07-12T01:15:34
https://i.redd.it/awwe19btgccf1.jpeg
I_will_delete_myself
i.redd.it
1970-01-01T00:00:00
0
{}
1lxnwtg
false
null
t3_1lxnwtg
/r/LocalLLaMA/comments/1lxnwtg/does_this_mean_its_likely_not_gonna_be_open_source/
false
false
default
281
{'enabled': True, 'images': [{'id': 'awwe19btgccf1', 'resolutions': [{'height': 142, 'url': 'https://preview.redd.it/awwe19btgccf1.jpeg?width=108&crop=smart&auto=webp&s=68ab8e1bb0a94a9b5069614d9922822610960d87', 'width': 108}, {'height': 284, 'url': 'https://preview.redd.it/awwe19btgccf1.jpeg?width=216&crop=smart&auto=...
Tinyllama on old Mediatek G80 android device
3
2025-07-12T01:09:52
https://i.redd.it/8r9ywamsfccf1.jpeg
abdouhlili
i.redd.it
1970-01-01T00:00:00
0
{}
1lxnsmm
false
null
t3_1lxnsmm
/r/LocalLLaMA/comments/1lxnsmm/tinyllama_on_old_mediatek_g80_android_device/
false
false
default
3
{'enabled': True, 'images': [{'id': '8r9ywamsfccf1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/8r9ywamsfccf1.jpeg?width=108&crop=smart&auto=webp&s=cbc7dea412453f04fc27ef75ea3713ed4b9323e4', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/8r9ywamsfccf1.jpeg?width=216&crop=smart&auto=...
OpenAI delays its open weight model again for "safety tests"
904
2025-07-12T01:09:38
https://i.redd.it/z5xvjxzefccf1.png
lyceras
i.redd.it
1970-01-01T00:00:00
0
{}
1lxnsh1
false
null
t3_1lxnsh1
/r/LocalLLaMA/comments/1lxnsh1/openai_delays_its_open_weight_model_again_for/
false
false
default
904
{'enabled': True, 'images': [{'id': 'z5xvjxzefccf1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/z5xvjxzefccf1.png?width=108&crop=smart&auto=webp&s=de0e9d503cf6aeab73ce34032122c88cd39b8b53', 'width': 108}, {'height': 120, 'url': 'https://preview.redd.it/z5xvjxzefccf1.png?width=216&crop=smart&auto=web...
Where local is lagging behind... Wish lists for the rest of 2025
13
It's a been a great 6 months to be using local AI as the performance delta has, on average, been very low for classic LLMs, with R1 typically being at or near SOTA, and smaller models consistently getting better and better benchmarks. However, the below are all things where there has been a surprising lag between ...
2025-07-12T00:42:37
https://www.reddit.com/r/LocalLLaMA/comments/1lxn8ry/where_local_is_lagging_behind_wish_lists_for_the/
nomorebuttsplz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lxn8ry
false
null
t3_1lxn8ry
/r/LocalLLaMA/comments/1lxn8ry/where_local_is_lagging_behind_wish_lists_for_the/
false
false
self
13
null
Thank you r/LocalLLaMA! Observer AI launches tonight! 🚀 I built the local open-source screen-watching tool you guys asked for.
404
**TL;DR:** The open-source tool that lets local LLMs watch your screen launches tonight! Thanks to your feedback, it now has a **1-command install (completely offline no certs to accept)**, supports **any OpenAI-compatible API**, and has **mobile support**. I'd love your feedback! Hey r/LocalLLaMA, You guys are so am...
2025-07-12T00:18:17
https://v.redd.it/ah6imcae6ccf1
Roy3838
v.redd.it
1970-01-01T00:00:00
0
{}
1lxmr2h
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/ah6imcae6ccf1/DASHPlaylist.mpd?a=1754871511%2CODczMTcyNTNiZDVjOGUxMjAyZjU0ZDlhMDRjOWQ0ODgyOTVhM2VlODlkNThjZjNiZDFmZjYzOTFkOWE4MTdkYw%3D%3D&v=1&f=sd', 'duration': 48, 'fallback_url': 'https://v.redd.it/ah6imcae6ccf1/DASH_1080.mp4?source=fallback', 'h...
t3_1lxmr2h
/r/LocalLLaMA/comments/1lxmr2h/thank_you_rlocalllama_observer_ai_launches/
false
false
https://external-preview…53da21784682ba56
404
{'enabled': False, 'images': [{'id': 'ZmM4bzB4ZGU2Y2NmMZhMJY7xahRuiOjw2oq-BMraDIRMdnw08UBcv5QQ2J3P', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ZmM4bzB4ZGU2Y2NmMZhMJY7xahRuiOjw2oq-BMraDIRMdnw08UBcv5QQ2J3P.png?width=108&crop=smart&format=pjpg&auto=webp&s=1af6f910051ce44e44ccd9f339b15b4a7240c...
LiquidAI LFM2 Model Released
31
LiquidAI released their [LFM2 model family](https://huggingface.co/collections/LiquidAI/lfm2-686d721927015b2ad73eaa38), and support for it was just [merged into llama.cpp](https://github.com/ggml-org/llama.cpp/pull/14620) a few hours ago. I haven't yet tried it locally, but I was quite impressed by their online demo of...
2025-07-12T00:10:33
https://www.reddit.com/r/LocalLLaMA/comments/1lxmldq/liquidai_lfm2_model_released/
Federal-Effective879
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lxmldq
false
null
t3_1lxmldq
/r/LocalLLaMA/comments/1lxmldq/liquidai_lfm2_model_released/
false
false
self
31
{'enabled': False, 'images': [{'id': 'MWIqEyyR9FIzN2ecVcaRHrLuSUwcveuAc9n59LiL-QM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/MWIqEyyR9FIzN2ecVcaRHrLuSUwcveuAc9n59LiL-QM.png?width=108&crop=smart&auto=webp&s=6e38a5b7b5025caff4f8bea478c3931e2a70363a', 'width': 108}, {'height': 116, 'url': 'h...
Gemma-3n prompts to uncensor?
6
Any good prompts to uncensor this model? It keeps reiterating its a harmless AI
2025-07-12T00:06:00
https://www.reddit.com/r/LocalLLaMA/comments/1lxmhx2/gemma3n_prompts_to_uncensor/
InsideYork
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lxmhx2
false
null
t3_1lxmhx2
/r/LocalLLaMA/comments/1lxmhx2/gemma3n_prompts_to_uncensor/
false
false
self
6
null
[D] Any limitations if you try to split your dataset and run full epochs
4
Hi, so I am a student and I can't afford a cloud GPU to train my model, so I thought to use Kaggle. Since Kaggle has limited storage for input and output (20gb in output) to save checkpoints, I thought to split my whole dataset, which is 400gb, into subsets. I did it into 16gb subsets each. I just want to ask: will it affec...
2025-07-11T23:21:51
https://www.reddit.com/r/LocalLLaMA/comments/1lxljco/d_any_limitations_if_you_try_to_split_your/
Empty-Investment-827
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lxljco
false
null
t3_1lxljco
/r/LocalLLaMA/comments/1lxljco/d_any_limitations_if_you_try_to_split_your/
false
false
self
4
null
Best Local Model for Snappy Conversations?
4
I'm a fan of LLaMA 3 70B and its Deepseek variants, but i find that local inference makes conversations way too laggy. What is the best model for fast inference, as of July 2025? I'm happy to use up to 48 gig of VRAM, but I'm mainly interested in a model that gives snappy replies. What model, and what size and qua...
2025-07-11T23:18:24
https://www.reddit.com/r/LocalLLaMA/comments/1lxlgjk/best_local_model_for_snappy_conversations/
Harvard_Med_USMLE267
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lxlgjk
false
null
t3_1lxlgjk
/r/LocalLLaMA/comments/1lxlgjk/best_local_model_for_snappy_conversations/
false
false
self
4
null
Forbidden Manual for Taming AI
1
[removed]
2025-07-11T23:10:02
https://v.redd.it/k5ortvxeubcf1
AdParty3160
v.redd.it
1970-01-01T00:00:00
0
{}
1lxl9wo
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/k5ortvxeubcf1/DASHPlaylist.mpd?a=1754867417%2CMTJmNTUzMmFiNmM4Yzc4NWNlNmMwMzBmZDM0ZTI1ODk4NzEzYTIxMGM1ZjA2MTEwMzVhMmFlNDkzNDg2ZWQyZA%3D%3D&v=1&f=sd', 'duration': 11, 'fallback_url': 'https://v.redd.it/k5ortvxeubcf1/DASH_720.mp4?source=fallback', 'ha...
t3_1lxl9wo
/r/LocalLLaMA/comments/1lxl9wo/manuel_interdit_pour_dompter_lia/
false
false
https://external-preview…ed28caf9956807a6
1
{'enabled': False, 'images': [{'id': 'eWpnbmwyemV1YmNmMdRT05qVDhfF0pSU-Kbh0ixbzXT2lvVyDdJgzUECk7jp', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/eWpnbmwyemV1YmNmMdRT05qVDhfF0pSU-Kbh0ixbzXT2lvVyDdJgzUECk7jp.png?width=108&crop=smart&format=pjpg&auto=webp&s=cce1edd80c4109404c9e3f31224682780442...
OpenAI releasing a new open model will probably spike consumer GPU demand
0
The rumors are that OpenAI is releasing an open model next week- the first one since GPT-2. Now, OpenAI is the 1000lb gorilla in the room in terms of mindshare, so that means a lot of consumers are newly interested in running it locally. Even if it’s just 1% or less of the chatgpt userbase… that’s going to cause a bi...
2025-07-11T22:04:34
https://www.reddit.com/r/LocalLLaMA/comments/1lxjsp4/openai_releasing_a_new_open_model_will_probably/
DepthHour1669
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lxjsp4
false
null
t3_1lxjsp4
/r/LocalLLaMA/comments/1lxjsp4/openai_releasing_a_new_open_model_will_probably/
false
false
self
0
null
How are people doing the whole video captioning and understanding thing?
1
I’ve not found a single model that’s trained on video as input. Is this just some smart cv2 algorithm design coupled with using a multimodal model? Or do there exist true video->text models that are close to SoTA and, more importantly, open source? That sounds pretty difficult, all things considered. I mean you...
2025-07-11T21:32:42
https://www.reddit.com/r/LocalLLaMA/comments/1lxj1o0/how_are_people_doing_the_whole_video_captioning/
Lazy-Pattern-5171
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lxj1o0
false
null
t3_1lxj1o0
/r/LocalLLaMA/comments/1lxj1o0/how_are_people_doing_the_whole_video_captioning/
false
false
self
1
null
Bypassing Meta's Llama Firewall: A Case Study in Prompt Injection Vulnerabilities
1
2025-07-11T20:35:38
https://medium.com/trendyol-tech/bypassing-metas-llama-firewall-a-case-study-in-prompt-injection-vulnerabilities-fb552b93412b
vitalikmuskk
medium.com
1970-01-01T00:00:00
0
{}
1lxhnjo
false
null
t3_1lxhnjo
/r/LocalLLaMA/comments/1lxhnjo/bypassing_metas_llama_firewall_a_case_study_in/
false
false
default
1
{'enabled': False, 'images': [{'id': 'xGvMrM-s03pMu7b3RZX_BCc5yxNzmYTx8cXEKr3773M', 'resolutions': [{'height': 75, 'url': 'https://external-preview.redd.it/xGvMrM-s03pMu7b3RZX_BCc5yxNzmYTx8cXEKr3773M.png?width=108&crop=smart&auto=webp&s=08673139e4771af7a385e8e69567adc9296e7b82', 'width': 108}, {'height': 151, 'url': 'h...
Most energy efficient way to run Gemma 3 27b?
21
Hey all, what would be the most energy-efficient way (tokens per second does not matter, only tokens per watt-hour) to run Gemma 3 27b? A 3090 capped at 210 watts gives 25 t/s - this is what I'm using now. I'm wondering if there is a more efficient alternative. Ryzen 395+ AI desktop version seems to be ~120 watts, and...
2025-07-11T20:31:02
https://www.reddit.com/r/LocalLLaMA/comments/1lxhjjn/most_energy_efficient_way_to_run_gemma_3_27b/
Extremely_Engaged
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lxhjjn
false
null
t3_1lxhjjn
/r/LocalLLaMA/comments/1lxhjjn/most_energy_efficient_way_to_run_gemma_3_27b/
false
false
self
21
null
How to set the Context Overflow Policy in LM Studio? Apparently they removed the option...
2
I'm using LM Studio to tinker with simple D&D-style games. My system prompt is probably lengthier than it should be; I set it up so that you begin as a simple peasant and have a vague progression of events leading to slaying a dragon. It takes up about 30% of context to begin with, and I can chat with it for a little while befor...
2025-07-11T20:30:39
https://www.reddit.com/r/LocalLLaMA/comments/1lxhj7h/how_to_set_the_context_overflow_policy_in_lm/
sporkyuncle
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lxhj7h
false
null
t3_1lxhj7h
/r/LocalLLaMA/comments/1lxhj7h/how_to_set_the_context_overflow_policy_in_lm/
false
false
self
2
{'enabled': False, 'images': [{'id': 'qmrmFWCHKYgnIHr96403JtEv6AsOJmhZlmVpYwJgS88', 'resolutions': [{'height': 69, 'url': 'https://external-preview.redd.it/qmrmFWCHKYgnIHr96403JtEv6AsOJmhZlmVpYwJgS88.png?width=108&crop=smart&auto=webp&s=93c7ebc0cf57d0191ffa88a6c2f7aaed8f0cd14e', 'width': 108}, {'height': 138, 'url': 'h...
What's a setup for local voice translation?
5
Looking to translate my video using my own voice into different languages. A lot of services exist that work pretty well, but is there a set of local models that I can piece together to get it working on my own 4090?
2025-07-11T20:23:16
https://www.reddit.com/r/LocalLLaMA/comments/1lxhcom/whats_a_setup_for_local_voice_translation/
Charuru
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lxhcom
false
null
t3_1lxhcom
/r/LocalLLaMA/comments/1lxhcom/whats_a_setup_for_local_voice_translation/
false
false
self
5
null
I built a GPT bot that my colleagues love and has a valuable real-world use case. Now I want to make it standalone & more broadly available. What’s the best way to do it?
0
TL;DR: I need advice on how to build a standalone chatbot for a niche industry, with a specialized knowledge base. Are there any solid platforms or services out there that aren’t crazy expensive and *actually* work? ===== So I am sure you all are sick of reading about a new AI chatbot entrepreneurship venture (as ...
2025-07-11T20:04:52
https://www.reddit.com/r/LocalLLaMA/comments/1lxgwgo/i_built_a_gpt_bot_that_my_colleagues_love_and_has/
Educational_Call_579
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lxgwgo
false
null
t3_1lxgwgo
/r/LocalLLaMA/comments/1lxgwgo/i_built_a_gpt_bot_that_my_colleagues_love_and_has/
false
false
self
0
null
Unrestrained AI Chat Companion?
1
I'm looking to create my first AI chatbot. It needs to be like the other person in a roleplay setting. Which one should I go for? Which model? I'm currently using a laptop with an RTX 5090 24GB and KoboldCpp. I've tried Qwen 3.1 8b, Mythomax L2 13b, and Nous Hermes 2 Mistral 7b. It's important that the model is un...
2025-07-11T19:56:55
https://www.reddit.com/r/LocalLLaMA/comments/1lxgp5c/unrestrained_ai_chat_companion/
ChicoTallahassee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lxgp5c
false
null
t3_1lxgp5c
/r/LocalLLaMA/comments/1lxgp5c/unrestrained_ai_chat_companion/
false
false
self
1
null
An alternative to semantic or benchmark-based routing: A preference-aligned router model
17
Hello everyone, I am one of the core maintainers of Arch (https://github.com/katanemo/archgw), an open-source proxy for LLMs written in Rust. A few days ago we launched Arch-Router (https://huggingface.co/katanemo/Arch-Router-1.5B) on HuggingFace, a 1.5B router model designed for preference-aligned routing (and of cour...
2025-07-11T19:53:20
https://i.redd.it/dji5sexqsacf1.png
AdditionalWeb107
i.redd.it
1970-01-01T00:00:00
0
{}
1lxgm02
false
null
t3_1lxgm02
/r/LocalLLaMA/comments/1lxgm02/an_alternative_to_semantic_or_benchmarkbased/
false
false
default
17
{'enabled': True, 'images': [{'id': 'dji5sexqsacf1', 'resolutions': [{'height': 41, 'url': 'https://preview.redd.it/dji5sexqsacf1.png?width=108&crop=smart&auto=webp&s=3002ab1e75571897ed7b3f677ff3e63c843da3b5', 'width': 108}, {'height': 82, 'url': 'https://preview.redd.it/dji5sexqsacf1.png?width=216&crop=smart&auto=webp...
People with a Mac Studio 512G: what are you doing with it?
21
Sure, the full Deepseek R1 model loads, but the tokens per second are still way too slow to be useful. So I’m just curious: for those of you who spent $10K+ on that nice little box, what are you actually doing with it?
2025-07-11T19:48:54
https://www.reddit.com/r/LocalLLaMA/comments/1lxgi3j/people_with_a_mac_studio_512g_what_are_you_doing/
Dangerous-Yak3976
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lxgi3j
false
null
t3_1lxgi3j
/r/LocalLLaMA/comments/1lxgi3j/people_with_a_mac_studio_512g_what_are_you_doing/
false
false
self
21
null
Stanford's CS336 2025 (Language Modeling from Scratch) is now available on YouTube
208
[Here's the YouTube Playlist](https://www.youtube.com/playlist?list=PLoROMvodv4rOY23Y0BoGoBGgQ1zmU_MT_) [Here's the CS336 website with assignments, slides etc](https://stanford-cs336.github.io/spring2025/) I've been studying it for a week and it's one of the best courses on LLMs I've seen online. The assignments are ...
2025-07-11T19:41:07
https://www.reddit.com/r/LocalLLaMA/comments/1lxgb9q/stanfords_cs336_2025_language_modeling_from/
realmvp77
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lxgb9q
false
null
t3_1lxgb9q
/r/LocalLLaMA/comments/1lxgb9q/stanfords_cs336_2025_language_modeling_from/
false
false
self
208
{'enabled': False, 'images': [{'id': '9XmReZR8sZe4EwJETS_bT_kZhCOn3jpR_yHrOPaaruc', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/9XmReZR8sZe4EwJETS_bT_kZhCOn3jpR_yHrOPaaruc.jpeg?width=108&crop=smart&auto=webp&s=1a73b19b73b083dd96c1d55121a321e063838715', 'width': 108}, {'height': 121, 'url': '...