Column schema (name, dtype, observed range or class count):

title       stringlengths    1 – 300
score       int64            0 – 8.54k
selftext    stringlengths    0 – 41.5k
created     timestamp[ns]    2023-04-01 04:30:41 – 2026-03-04 02:14:14
url         stringlengths    0 – 878
author      stringlengths    3 – 20
domain      stringlengths    0 – 82
edited      timestamp[ns]    1970-01-01 00:00:00 – 2026-02-19 14:51:53
gilded      int64            0 – 2
gildings    stringclasses    7 values
id          stringlengths    7 – 7
locked      bool             2 classes
media       stringlengths    646 – 1.8k
name        stringlengths    10 – 10
permalink   stringlengths    33 – 82
spoiler     bool             2 classes
stickied    bool             2 classes
thumbnail   stringlengths    4 – 213
ups         int64            0 – 8.54k
preview     stringlengths    301 – 5.01k
China no. 1!
73
2025-07-31T14:12:08
https://i.redd.it/s1g7byiow7gf1.jpeg
entsnack
i.redd.it
1970-01-01T00:00:00
0
{}
1me2o4z
false
null
t3_1me2o4z
/r/LocalLLaMA/comments/1me2o4z/china_no_1/
false
false
default
73
{'enabled': True, 'images': [{'id': 's1g7byiow7gf1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/s1g7byiow7gf1.jpeg?width=108&crop=smart&auto=webp&s=b46171142a4279de1ff354a7559a1dc9bf5a884d', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/s1g7byiow7gf1.jpeg?width=216&crop=smart&auto=...
CohereLabs/command-a-vision-07-2025 · Hugging Face
87
Cohere Labs Command A Vision is an open weights research release of a 112 billion parameter model optimized for enterprise image understanding tasks, while keeping a low compute footprint. Developed by: [Cohere](https://cohere.com/) and [Cohere Labs](https://cohere.com/research) * Point of Contact: [**Cohere Labs**](...
2025-07-31T14:12:03
https://huggingface.co/CohereLabs/command-a-vision-07-2025
jacek2023
huggingface.co
1970-01-01T00:00:00
0
{}
1me2o28
false
null
t3_1me2o28
/r/LocalLLaMA/comments/1me2o28/coherelabscommandavision072025_hugging_face/
false
false
default
87
{'enabled': False, 'images': [{'id': 'KSnKoHRzOtDVdgv4tkOqKzIXPL8-S-fhBqaAliU-gUw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/KSnKoHRzOtDVdgv4tkOqKzIXPL8-S-fhBqaAliU-gUw.png?width=108&crop=smart&auto=webp&s=bf6e50905656297c173c03abdd389f18f39e7de0', 'width': 108}, {'height': 116, 'url': 'h...
Introducing Command A Vision: Multimodal AI Built for Business
52
HF Link: [https://huggingface.co/CohereLabs/command-a-vision-07-2025](https://huggingface.co/CohereLabs/command-a-vision-07-2025) Blogpost: [https://cohere.com/blog/command-a-vision](https://cohere.com/blog/command-a-vision)
2025-07-31T14:06:13
https://www.reddit.com/gallery/1me2iza
Dark_Fire_12
reddit.com
1970-01-01T00:00:00
0
{}
1me2iza
false
null
t3_1me2iza
/r/LocalLLaMA/comments/1me2iza/introducing_command_a_vision_multimodal_ai_built/
false
false
https://b.thumbs.redditm…lOch59IpboKE.jpg
52
null
CohereLabs/command-a-vision-07-2025 · Hugging Face
2
2025-07-31T14:02:54
https://huggingface.co/CohereLabs/command-a-vision-07-2025
Dark_Fire_12
huggingface.co
1970-01-01T00:00:00
0
{}
1me2fxw
false
null
t3_1me2fxw
/r/LocalLLaMA/comments/1me2fxw/coherelabscommandavision072025_hugging_face/
false
false
default
2
null
Looking for best cloud GPU provider
0
Hi, I am intrested in making videos from Wan 2.2 and other open source projects that can be connected with ComfyUI (I guess ComfyUI will be easiest to start, I am new to using open source projects), so looking for easy to deploy and learn platform that I can use to every hugging face / github projects. I found runpod...
2025-07-31T13:46:29
https://www.reddit.com/r/LocalLLaMA/comments/1me20vl/looking_for_best_cloud_gpu_provider/
Natural-Analyst-2533
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1me20vl
false
null
t3_1me20vl
/r/LocalLLaMA/comments/1me20vl/looking_for_best_cloud_gpu_provider/
false
false
self
0
null
🚀 We just open-sourced our AI-powered video translation tool — would love your feedback!
1
[removed]
2025-07-31T13:43:53
https://www.reddit.com/r/LocalLLaMA/comments/1me1ykg/we_just_opensourced_our_aipowered_video/
New_Blueberry9858
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1me1ykg
false
null
t3_1me1ykg
/r/LocalLLaMA/comments/1me1ykg/we_just_opensourced_our_aipowered_video/
false
false
https://b.thumbs.redditm…6hz7Qqfj-o2c.jpg
1
null
Llamacpp finished pr for ggml4.5
1
[https://github.com/ggml-org/llama.cpp/pull/14939](https://github.com/ggml-org/llama.cpp/pull/14939)
2025-07-31T13:29:57
https://www.reddit.com/r/LocalLLaMA/comments/1me1mb7/llamacpp_finished_pr_for_ggml45/
Easy_Kitchen7819
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1me1mb7
false
null
t3_1me1mb7
/r/LocalLLaMA/comments/1me1mb7/llamacpp_finished_pr_for_ggml45/
false
false
self
1
null
stepfun-ai/step3 · Hugging Face
125
2025-07-31T13:25:04
https://huggingface.co/stepfun-ai/step3
Dark_Fire_12
huggingface.co
1970-01-01T00:00:00
0
{}
1me1i0c
false
null
t3_1me1i0c
/r/LocalLLaMA/comments/1me1i0c/stepfunaistep3_hugging_face/
false
false
default
125
{'enabled': False, 'images': [{'id': 'PByxBes8ZhS0GaNzaLsdD1cFy0LWtkBpScIt3kOY-nk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/PByxBes8ZhS0GaNzaLsdD1cFy0LWtkBpScIt3kOY-nk.png?width=108&crop=smart&auto=webp&s=2b5eb1f88337c545253e9f56a79b85014a7240f9', 'width': 108}, {'height': 116, 'url': 'h...
qwen-30B success story
205
At work I spent better part of a day trying to debug a mysterious problem with an external RFID reader. I gave up after running in circles with ChatGPT for many hours and got a little further with Gemini but in the end I had to give up. Unfortunately I left for vacation immediately afterwards, leaving me frustrated and...
2025-07-31T13:24:25
https://www.reddit.com/r/LocalLLaMA/comments/1me1hh8/qwen30b_success_story/
ExplorerWhole5697
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1me1hh8
false
null
t3_1me1hh8
/r/LocalLLaMA/comments/1me1hh8/qwen30b_success_story/
false
false
self
205
null
Junyang Lin is drinking tea
253
2025-07-31T12:30:05
https://i.redd.it/s3pv80fee7gf1.png
jacek2023
i.redd.it
1970-01-01T00:00:00
0
{}
1me095p
false
null
t3_1me095p
/r/LocalLLaMA/comments/1me095p/junyang_lin_is_drinking_tea/
false
false
default
253
{'enabled': True, 'images': [{'id': 's3pv80fee7gf1', 'resolutions': [{'height': 164, 'url': 'https://preview.redd.it/s3pv80fee7gf1.png?width=108&crop=smart&auto=webp&s=74d650e5db6942d5ec1e453e9e58d0b10b8ffe40', 'width': 108}, {'height': 329, 'url': 'https://preview.redd.it/s3pv80fee7gf1.png?width=216&crop=smart&auto=we...
Assistance needed
1
[removed]
2025-07-31T12:16:55
https://www.reddit.com/r/LocalLLaMA/comments/1mdzz2w/assistance_needed/
jakeoptions
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdzz2w
false
null
t3_1mdzz2w
/r/LocalLLaMA/comments/1mdzz2w/assistance_needed/
false
false
self
1
null
Qwen3-30B-A3B-2507-Q4_K_L Is the First Local Model to Solve the North Pole Walk Puzzle
88
For the longest time, I've been giving my models a traditional puzzle that all failed to pass without fail :D Not even the SOTA models provide the right answer. >The puzzle is as follows: "What's the right answer: Imagine standing at the North Pole of the Earth. Walk in any direction, in a straight line, for 1 km...
2025-07-31T12:15:05
https://www.reddit.com/r/LocalLLaMA/comments/1mdzxmv/qwen330ba3b2507q4_k_l_is_the_first_local_model_to/
Iory1998
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdzxmv
false
null
t3_1mdzxmv
/r/LocalLLaMA/comments/1mdzxmv/qwen330ba3b2507q4_k_l_is_the_first_local_model_to/
false
false
https://b.thumbs.redditm…Lc5zwFJKZCxk.jpg
88
null
Hunyuan releases X-Omni, a unified discrete autoregressive model for both image and language modalities
87
🚀 We're excited to share our latest research on X-Omni: reinforcement learning makes discrete autoregressive image generative models great again, empowering a practical unified model for both image and language modality generation. Highlights: ✅ Unified Modeling Approach: A discrete autoregressive model handling ima...
2025-07-31T12:10:25
https://www.reddit.com/gallery/1mdzu08
ResearchCrafty1804
reddit.com
1970-01-01T00:00:00
0
{}
1mdzu08
false
null
t3_1mdzu08
/r/LocalLLaMA/comments/1mdzu08/hunyuan_releases_xomni_a_unified_discrete/
false
false
https://a.thumbs.redditm…pn8L_07vMWc4.jpg
87
null
https://x.com/autopoiesislab/status/1950755654471131450?t=JZ8AtogcUFhwgzoKTM67Jw&s=19
0
2025-07-31T11:18:31
https://i.redd.it/5c2txiap17gf1.png
Soggy-Ad-8708
i.redd.it
1970-01-01T00:00:00
0
{}
1mdytsk
false
null
t3_1mdytsk
/r/LocalLLaMA/comments/1mdytsk/httpsxcomautopoiesislabstatus1950755654471131450tj/
false
false
https://b.thumbs.redditm…28JepjM_J2lE.jpg
0
{'enabled': True, 'images': [{'id': 'RBrQzSq-XWPWFfd17eheYJYIhLLOXHNMQkNnrXvFiNE', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/5c2txiap17gf1.png?width=108&crop=smart&auto=webp&s=a06f03b603b438c3b633740446a0b4bf9b53b644', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/5c2txiap17gf1.png...
Everyone from r/LocalLLama refreshing Hugging Face every 5 minutes today looking for GLM-4.5 GGUFs
434
2025-07-31T11:04:33
https://i.redd.it/f5iqhqp7z6gf1.jpeg
Porespellar
i.redd.it
1970-01-01T00:00:00
0
{}
1mdykfn
false
null
t3_1mdykfn
/r/LocalLLaMA/comments/1mdykfn/everyone_from_rlocalllama_refreshing_hugging_face/
false
false
default
434
{'enabled': True, 'images': [{'id': 'f5iqhqp7z6gf1', 'resolutions': [{'height': 70, 'url': 'https://preview.redd.it/f5iqhqp7z6gf1.jpeg?width=108&crop=smart&auto=webp&s=7e3bc13cc7709787b1633c87ce4deec12ada0949', 'width': 108}, {'height': 141, 'url': 'https://preview.redd.it/f5iqhqp7z6gf1.jpeg?width=216&crop=smart&auto=w...
What model would you recommend for my specs ?
3
RTX4090 i9 14900k 64GB DDR5 6000Mhz 2TB SSD PCIe5 I played a bit with the Qwen2.5 Coder 32B, but it felt very slow. Now i see lots of new models coming out. I would want to use it in VS Code + Cline for coding, something that offsets some of the easier tasks so i don't get to pay a lot for the cloud models API...
2025-07-31T10:46:13
https://www.reddit.com/r/LocalLLaMA/comments/1mdy8f8/what_model_would_you_recommend_for_my_specs/
Alywan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdy8f8
false
null
t3_1mdy8f8
/r/LocalLLaMA/comments/1mdy8f8/what_model_would_you_recommend_for_my_specs/
false
false
self
3
null
Jan now runs fully on llama.cpp & auto-updates the backend
200
Hi, it's Emre from the Jan team. Jan v0.6.6 is out. Over the past few weeks we've ripped out Cortex, the backend layer on top of llama.cpp. It's finally gone, every local model now runs directly on llama.cpp. Plus, you can switch to any llama.cpp build under Settings, Model Providers, llama.cpp (see the video above)....
2025-07-31T10:34:34
https://v.redd.it/6tdds5rcr6gf1
eck72
v.redd.it
1970-01-01T00:00:00
0
{}
1mdy1at
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/6tdds5rcr6gf1/DASHPlaylist.mpd?a=1756550089%2CNjE3NzZlMWQ5ODk2MTdlMzk2Njk5OWJjOTBjZTg3NzBlOTU1OGNkNTdhMzFkNTZhMDY5ZGIyZWY2N2E2ZDM0NQ%3D%3D&v=1&f=sd', 'duration': 9, 'fallback_url': 'https://v.redd.it/6tdds5rcr6gf1/DASH_1080.mp4?source=fallback', 'ha...
t3_1mdy1at
/r/LocalLLaMA/comments/1mdy1at/jan_now_runs_fully_on_llamacpp_autoupdates_the/
false
false
https://external-preview…d5f1d929e7abeba3
200
{'enabled': False, 'images': [{'id': 'OThqM3A3cmNyNmdmMarVaHVhDy4CK4NoO0kgn6HbxLEdRYxLZuUtk8wS5NEb', 'resolutions': [{'height': 86, 'url': 'https://external-preview.redd.it/OThqM3A3cmNyNmdmMarVaHVhDy4CK4NoO0kgn6HbxLEdRYxLZuUtk8wS5NEb.png?width=108&crop=smart&format=pjpg&auto=webp&s=f859c3bb7426a18c330ce87e3736a28dafc09...
Jan now runs fully on llama.cpp, auto-updates the backend, and lets you switch between llama.cpp builds
3
Hi, it’s Emre from the Jan team. Jan v0.6.6 is out. Over the past few weeks we've ripped out Cortex, the backend layer on top of llama.cpp. It's finally gone, every local model now runs directly on llama.cpp. We removed Cortex because it was adding an extra hop and maintenance overhead. Folding its logic into Jan cu...
2025-07-31T10:12:36
https://v.redd.it/ov03mfprc6gf1
eck72
v.redd.it
1970-01-01T00:00:00
0
{}
1mdxo1x
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/ov03mfprc6gf1/DASHPlaylist.mpd?a=1756548769%2CMDY5YTVlOTdhMmZlMGU5MzRlMTEwN2VmNmRmMDkxYzRmYmVjMjY0OTc4OTEwNTVhMWVhMDM4N2MwZDUxZjBmOA%3D%3D&v=1&f=sd', 'duration': 9, 'fallback_url': 'https://v.redd.it/ov03mfprc6gf1/DASH_1080.mp4?source=fallback', 'ha...
t3_1mdxo1x
/r/LocalLLaMA/comments/1mdxo1x/jan_now_runs_fully_on_llamacpp_autoupdates_the/
false
false
https://external-preview…4e65f6d75bc13409
3
{'enabled': False, 'images': [{'id': 'bml2OXN1cXJjNmdmMarVaHVhDy4CK4NoO0kgn6HbxLEdRYxLZuUtk8wS5NEb', 'resolutions': [{'height': 86, 'url': 'https://external-preview.redd.it/bml2OXN1cXJjNmdmMarVaHVhDy4CK4NoO0kgn6HbxLEdRYxLZuUtk8wS5NEb.png?width=108&crop=smart&format=pjpg&auto=webp&s=73753f5c6dbbfd9414127dee6f472f3285eac...
DeepDrone, an open source CLI agent like Claude Code to fly your drone
7
I made a major update to deep drone, so it now is a CLI agent that controls your drone. It can use models with an api key and also use Ollama. Here is the demo below. And the source code : [https://github.com/evangelosmeklis/deepdrone](https://github.com/evangelosmeklis/deepdrone) https://reddit.com/link/1mdxihp/video...
2025-07-31T10:03:06
https://www.reddit.com/r/LocalLLaMA/comments/1mdxihp/deepdrone_an_open_source_cli_agent_like_claude/
_twelvechess
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdxihp
false
null
t3_1mdxihp
/r/LocalLLaMA/comments/1mdxihp/deepdrone_an_open_source_cli_agent_like_claude/
false
false
https://external-preview…16cadedf3aa29aa5
7
{'enabled': False, 'images': [{'id': 'D05ayFweRYxXp8MVhd-qyiGHFhlwyPzJP4dBla8AObI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/D05ayFweRYxXp8MVhd-qyiGHFhlwyPzJP4dBla8AObI.png?width=108&crop=smart&auto=webp&s=e68c38f87cdcc748537b94eed5d21fe9bc1588d8', 'width': 108}, {'height': 108, 'url': 'h...
AMD Is Reportedly Looking to Introduce a Dedicated Discrete NPU, Similar to Gaming GPUs But Targeted Towards AI Performance On PCs; Taking Edge AI to New Levels
312
2025-07-31T09:41:46
https://wccftech.com/amd-is-looking-toward-introducing-a-dedicated-discrete-npu-similar-to-gaming-gpus/
_SYSTEM_ADMIN_MOD_
wccftech.com
1970-01-01T00:00:00
0
{}
1mdx65u
false
null
t3_1mdx65u
/r/LocalLLaMA/comments/1mdx65u/amd_is_reportedly_looking_to_introduce_a/
false
false
default
312
null
We’re building a devboard that runs Whisper, YOLO, and TinyLlama — locally, no cloud. Want to try it before we launch?
4
Hey folks, I’m building an affordable, plug-and-play AI devboard kind of like a “Raspberry Pi for AI”designed to run models like TinyLlama, Whisper, and YOLO locally, without cloud dependencies. It’s meant for developers, makers, educators, and startups who want to: • Run local LLMs and vision models on the edge • ...
2025-07-31T09:37:58
https://www.reddit.com/r/LocalLLaMA/comments/1mdx40b/were_building_a_devboard_that_runs_whisper_yolo/
aero917
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdx40b
false
null
t3_1mdx40b
/r/LocalLLaMA/comments/1mdx40b/were_building_a_devboard_that_runs_whisper_yolo/
false
false
self
4
null
Why has there been no opensource community for AI audio like with the LL and SD sub?
1
[removed]
2025-07-31T09:27:55
https://www.reddit.com/r/LocalLLaMA/comments/1mdwyl7/why_has_there_been_no_opensource_community_for_ai/
FpRhGf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdwyl7
false
null
t3_1mdwyl7
/r/LocalLLaMA/comments/1mdwyl7/why_has_there_been_no_opensource_community_for_ai/
false
false
self
1
null
Why has there been no opensource AI subreddit for audio gen like LL and SD?
1
[removed]
2025-07-31T09:24:07
https://www.reddit.com/r/LocalLLaMA/comments/1mdwwle/why_has_there_been_no_opensource_ai_subreddit_for/
FpRhGf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdwwle
false
null
t3_1mdwwle
/r/LocalLLaMA/comments/1mdwwle/why_has_there_been_no_opensource_ai_subreddit_for/
false
false
self
1
null
Best local model for Claude-like agentic behavior on 3×3090 rig?
5
Hi all, I’m setting up my system to run large language models locally and would really appreciate recommendations. I haven’t tried any models yet — my goal is to move away from cloud LLMs like Claude (mainly for coding , reasoning, and tool use), and run everything locally. My setup: • Ubuntu • AMD Threadripper 79...
2025-07-31T09:21:23
https://www.reddit.com/r/LocalLLaMA/comments/1mdwv4f/best_local_model_for_claudelike_agentic_behavior/
CryptographerLow7817
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdwv4f
false
null
t3_1mdwv4f
/r/LocalLLaMA/comments/1mdwv4f/best_local_model_for_claudelike_agentic_behavior/
false
false
self
5
null
Time to update this subreddit description?
1
[removed]
2025-07-31T09:21:16
[deleted]
1970-01-01T00:00:00
0
{}
1mdwv2d
false
null
t3_1mdwv2d
/r/LocalLLaMA/comments/1mdwv2d/time_to_update_this_subreddit_description/
false
false
default
1
null
Time to update this subreddit description?
1
[removed]
2025-07-31T09:20:44
[deleted]
1970-01-01T00:00:00
0
{}
1mdwurr
false
null
t3_1mdwurr
/r/LocalLLaMA/comments/1mdwurr/time_to_update_this_subreddit_description/
false
false
default
1
null
How do people engage with open source AI?
0
I’m doing preliminary research on open source (and open weight) AI for my uni and I was wondering, how do most people actually engage with released models? Is it mainly to run inference? Do most people run models locally? Are people fine-tuning models themselves, or is that rarely ever the case? Additionally, when com...
2025-07-31T09:20:29
https://www.reddit.com/r/LocalLLaMA/comments/1mdwums/how_do_people_engage_with_open_source_ai/
EstusFlaskCrochet
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdwums
false
null
t3_1mdwums
/r/LocalLLaMA/comments/1mdwums/how_do_people_engage_with_open_source_ai/
false
false
self
0
null
Time to update this subreddit description?
1
2025-07-31T09:19:37
https://i.redd.it/yjekdjigg6gf1.png
Beautiful-Essay1945
i.redd.it
1970-01-01T00:00:00
0
{}
1mdwu4q
false
null
t3_1mdwu4q
/r/LocalLLaMA/comments/1mdwu4q/time_to_update_this_subreddit_description/
false
false
default
1
{'enabled': True, 'images': [{'id': 'yjekdjigg6gf1', 'resolutions': [{'height': 62, 'url': 'https://preview.redd.it/yjekdjigg6gf1.png?width=108&crop=smart&auto=webp&s=e68d2028725e4f65bfc1cb0aa5f6e830b1780778', 'width': 108}, {'height': 124, 'url': 'https://preview.redd.it/yjekdjigg6gf1.png?width=216&crop=smart&auto=web...
Ollama with Qwen2.5VL:3B – The Doom II of VLMs
0
A model that can extract text with surprisingly good quality and decent speed — even on an 8GB RAM, CPU-only machine. I've been looking for a way to extract text on a low-spec computer for a while now. After trying many solutions, I'm honestly impressed by what this \~3GB model can do. It's like the Doom II of vision-...
2025-07-31T09:19:25
https://www.reddit.com/r/LocalLLaMA/comments/1mdwu18/ollama_with_qwen25vl3b_the_doom_ii_of_vlms/
ML-Future
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdwu18
false
null
t3_1mdwu18
/r/LocalLLaMA/comments/1mdwu18/ollama_with_qwen25vl3b_the_doom_ii_of_vlms/
false
false
self
0
null
Time to update this subreddit description? >.<
1
2025-07-31T09:19:03
https://i.redd.it/0pd6shn0g6gf1.png
Beautiful-Essay1945
i.redd.it
1970-01-01T00:00:00
0
{}
1mdwtun
false
null
t3_1mdwtun
/r/LocalLLaMA/comments/1mdwtun/time_to_update_this_subreddit_description/
false
false
default
1
{'enabled': True, 'images': [{'id': '0pd6shn0g6gf1', 'resolutions': [{'height': 62, 'url': 'https://preview.redd.it/0pd6shn0g6gf1.png?width=108&crop=smart&auto=webp&s=e5be25069aa08ec768b451e2a914993a84e8e821', 'width': 108}, {'height': 124, 'url': 'https://preview.redd.it/0pd6shn0g6gf1.png?width=216&crop=smart&auto=web...
rednote-hilab/dots.ocr - Multilingual document layout parsing in a single vision-language model achieving SOTA performance despite compact 1.7B LLM foundation
52
2025-07-31T09:07:08
https://huggingface.co/rednote-hilab/dots.ocr
nullmove
huggingface.co
1970-01-01T00:00:00
0
{}
1mdwngf
false
null
t3_1mdwngf
/r/LocalLLaMA/comments/1mdwngf/rednotehilabdotsocr_multilingual_document_layout/
false
false
https://external-preview…c8663ec4b92f5a3c
52
{'enabled': False, 'images': [{'id': '8NDRsizKorORFhKFDygayRrW6cfTqRcK_E46LDgaFmo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8NDRsizKorORFhKFDygayRrW6cfTqRcK_E46LDgaFmo.png?width=108&crop=smart&auto=webp&s=014ce09ab614e86be0bda115d3ee826dd4c7e72b', 'width': 108}, {'height': 116, 'url': 'h...
Falcon-H1 technical report release
48
[https://huggingface.co/papers/2507.22448](https://huggingface.co/papers/2507.22448) Current framework support includes Hugging Face, vLLM, llama.cpp, Llama-Factory, Axolotl, OUMI, SkyPilot, etc. — with more on the way! https://preview.redd.it/vog1eu4gd6gf1.png?width=1708&format=png&auto=webp&s=80753458ee6e8869540d1...
2025-07-31T09:05:28
https://www.reddit.com/r/LocalLLaMA/comments/1mdwmju/falconh1_technical_report_release/
JingweiZUO
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdwmju
false
null
t3_1mdwmju
/r/LocalLLaMA/comments/1mdwmju/falconh1_technical_report_release/
false
false
https://external-preview…eaf01c514d1388ad
48
{'enabled': False, 'images': [{'id': 'ifoGNEtsOnQI7mOVCHlAOV6hOXRc2zUDtsZ8X9LgS5A', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ifoGNEtsOnQI7mOVCHlAOV6hOXRc2zUDtsZ8X9LgS5A.png?width=108&crop=smart&auto=webp&s=5160807d254f5c616e61b4d003b92b90330ec05c', 'width': 108}, {'height': 116, 'url': 'h...
how much ram [cpu] do you have
0
[View Poll](https://www.reddit.com/poll/1mdwm49)
2025-07-31T09:04:40
https://www.reddit.com/r/LocalLLaMA/comments/1mdwm49/how_much_ram_cpu_do_you_have/
okaris
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdwm49
false
null
t3_1mdwm49
/r/LocalLLaMA/comments/1mdwm49/how_much_ram_cpu_do_you_have/
false
false
self
0
null
rednote-hilab/dots.ocr (1.7B) - Multilingual document layout parsing in a single vision-language model achieving SOTA performance
1
2025-07-31T09:01:59
https://huggingface.co/rednote-hilab/dots.ocr
nullmove
huggingface.co
1970-01-01T00:00:00
0
{}
1mdwkky
false
null
t3_1mdwkky
/r/LocalLLaMA/comments/1mdwkky/rednotehilabdotsocr_17b_multilingual_document/
false
false
default
1
null
How can you turn off reasoning for certain tasks in GLM 4.5?
6
With Qwen, you could add something to the prompt to turn off reasoning. Can you do the same with GLM 4.5?
2025-07-31T08:55:58
https://www.reddit.com/r/LocalLLaMA/comments/1mdwh31/how_can_you_turn_off_reasoning_for_certain_tasks/
Sky_Linx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdwh31
false
null
t3_1mdwh31
/r/LocalLLaMA/comments/1mdwh31/how_can_you_turn_off_reasoning_for_certain_tasks/
false
false
self
6
null
How to future proof fine tuning and/or training
2
This questions has been bothering me for a while and has prevented me from ”investing” on training and fine tuning a model since the next big thing is just around the corner. Maybe there’s a simple solution to this that I’m missing but: First problem: How do you choose which open source model to fine-tune or further ...
2025-07-31T08:39:05
https://www.reddit.com/r/LocalLLaMA/comments/1mdw7v7/how_to_future_proof_fine_tuning_andor_training/
AI-On-A-Dime
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdw7v7
false
null
t3_1mdw7v7
/r/LocalLLaMA/comments/1mdw7v7/how_to_future_proof_fine_tuning_andor_training/
false
false
self
2
null
An Awesome-local-LLM repository
0
Hi! About two months ago, I decided to delve deeper into the topic of running LLMs locally. I was looking for an awesome-style repository. There are a few of them, but unfortunately, they are not actively maintained. So, I decided to create my own cheat sheet where I would take notes. After these few weeks, I can say t...
2025-07-31T08:27:50
https://github.com/rafska/Awesome-local-LLM
What_to_type_here
github.com
1970-01-01T00:00:00
0
{}
1mdw1l4
false
null
t3_1mdw1l4
/r/LocalLLaMA/comments/1mdw1l4/an_awesomelocalllm_repository/
false
false
https://external-preview…b1b2f47c5106150d
0
{'enabled': False, 'images': [{'id': 'Nz4v7a5FyPh-rW-HmnF3rzX5meoV5UtYQXzh1t8n0UM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Nz4v7a5FyPh-rW-HmnF3rzX5meoV5UtYQXzh1t8n0UM.png?width=108&crop=smart&auto=webp&s=e496b6732d4545c96f97bcaf40bcef8cefdf7bfe', 'width': 108}, {'height': 108, 'url': 'h...
ik_llama.cpp and Qwen 3 30B-A3B architecture.
21
Big shout out to ikawrakow and his [https://github.com/ikawrakow/ik\_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp) for making my hardware relevant (and obviously Qwen team!) :) Looking forward to trying Thinker and Coder versions of this architecture! https://preview.redd.it/x2m1wj4i16gf1.png?width=2196&forma...
2025-07-31T07:57:22
https://www.reddit.com/r/LocalLLaMA/comments/1mdvkhz/ik_llamacpp_and_qwen_3_30ba3b_architecture/
Bycbka
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdvkhz
false
null
t3_1mdvkhz
/r/LocalLLaMA/comments/1mdvkhz/ik_llamacpp_and_qwen_3_30ba3b_architecture/
false
false
self
21
{'enabled': False, 'images': [{'id': '2UbIzGryv92r-OTNNbwj3X7DPvZqNJtHJ_N32Ju1bQs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2UbIzGryv92r-OTNNbwj3X7DPvZqNJtHJ_N32Ju1bQs.png?width=108&crop=smart&auto=webp&s=e0ca996c64f35d96d82c792f292d1574156f28a8', 'width': 108}, {'height': 108, 'url': 'h...
DevOps position for AI / LLMs
1
Hey everyone! The [German Aerospace Center](https://www.dlr.de/en) (DLR — the German NASA) is looking for someone for a [DevOps position](https://jobs.dlr.de/default/job/Informatikerin-%28mwd%29-als-DevOps-Engineer-f%C3%BCr-den-Betrieb-und-die-Entwicklung-von-KI-Anwendung/2484-de_DE) in the LLM field. You’ll need to be...
2025-07-31T07:54:57
https://www.reddit.com/r/LocalLLaMA/comments/1mdvj52/devops_position_for_ai_llms/
SommerEngineering
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdvj52
false
null
t3_1mdvj52
/r/LocalLLaMA/comments/1mdvj52/devops_position_for_ai_llms/
false
false
self
1
null
Ollama’s new app — Ollama 0.10 is here for macOS and Windows!
37
Download on ollama.com/download or GitHub releases https://github.com/ollama/ollama/releases/tag/v0.10.0 Blog post: [Ollama's new app](https://ollama.com/blog/new-app)
2025-07-31T07:52:40
https://i.redd.it/9wfl7u6z06gf1.jpeg
bllshrfv
i.redd.it
1970-01-01T00:00:00
0
{}
1mdvhxg
false
null
t3_1mdvhxg
/r/LocalLLaMA/comments/1mdvhxg/ollamas_new_app_ollama_010_is_here_for_macos_and/
false
false
default
37
{'enabled': True, 'images': [{'id': '9wfl7u6z06gf1', 'resolutions': [{'height': 78, 'url': 'https://preview.redd.it/9wfl7u6z06gf1.jpeg?width=108&crop=smart&auto=webp&s=dd49534e752553e996786ccf873670e3e86ffda7', 'width': 108}, {'height': 157, 'url': 'https://preview.redd.it/9wfl7u6z06gf1.jpeg?width=216&crop=smart&auto=w...
CPU-only inference of Qwen 3 30B-A3B-2507-Instruct with ik_llama.cpp
1
[removed]
2025-07-31T07:48:55
https://www.reddit.com/r/LocalLLaMA/comments/1mdvfvh/cpuonly_inference_of_qwen_3_30ba3b2507instruct/
Bycbka
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdvfvh
false
null
t3_1mdvfvh
/r/LocalLLaMA/comments/1mdvfvh/cpuonly_inference_of_qwen_3_30ba3b2507instruct/
false
false
self
1
{'enabled': False, 'images': [{'id': '2UbIzGryv92r-OTNNbwj3X7DPvZqNJtHJ_N32Ju1bQs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2UbIzGryv92r-OTNNbwj3X7DPvZqNJtHJ_N32Ju1bQs.png?width=108&crop=smart&auto=webp&s=e0ca996c64f35d96d82c792f292d1574156f28a8', 'width': 108}, {'height': 108, 'url': 'h...
CPU-only inference of Qwen 3 30B-A3B-2507-Instruct with ik_llama.cpp
1
[removed]
2025-07-31T07:45:49
https://www.reddit.com/r/LocalLLaMA/comments/1mdve4a/cpuonly_inference_of_qwen_3_30ba3b2507instruct/
Bycbka
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdve4a
false
null
t3_1mdve4a
/r/LocalLLaMA/comments/1mdve4a/cpuonly_inference_of_qwen_3_30ba3b2507instruct/
false
false
https://b.thumbs.redditm…6Y66E9mm4b5Q.jpg
1
null
cogito v2 preview models released 70B/109B/405B/671B
140
The Cogito v2 LLMs are instruction tuned generative models. All models are released under an open license for commercial use. * Cogito v2 models are hybrid reasoning models. Each model can answer directly (standard LLM), or self-reflect before answering (like reasoning models). * The LLMs are trained using **Iterated ...
2025-07-31T07:30:57
https://www.reddit.com/r/LocalLLaMA/comments/1mdv67j/cogito_v2_preview_models_released_70b109b405b671b/
jacek2023
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdv67j
false
null
t3_1mdv67j
/r/LocalLLaMA/comments/1mdv67j/cogito_v2_preview_models_released_70b109b405b671b/
false
false
self
140
{'enabled': False, 'images': [{'id': '7dnFXllcXlnatOfqO_F3iSqS3FlJPQP-Q1pGksTJzbw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/7dnFXllcXlnatOfqO_F3iSqS3FlJPQP-Q1pGksTJzbw.png?width=108&crop=smart&auto=webp&s=2055e09c12c8dcc4a48b580d498877c964511989', 'width': 108}, {'height': 116, 'url': 'h...
Lightweight ChatGPT Client Using Your Own API Key (Pure HTML)
3
This is a simple interface built with pure HTML, JavaScript, and CSS to interact with ChatGPT using your own API key. It can be run directly in your web browser and it supports the classical GPT Models that the API let you interact with. [Example of a prompt](https://preview.redd.it/czusk8r0o5gf1.png?width=1917&format...
2025-07-31T07:10:57
https://www.reddit.com/r/LocalLLaMA/comments/1mduvcv/lightweight_chatgpt_client_using_your_own_api_key/
_Nix_User
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mduvcv
false
null
t3_1mduvcv
/r/LocalLLaMA/comments/1mduvcv/lightweight_chatgpt_client_using_your_own_api_key/
false
false
https://b.thumbs.redditm…d4UjkTBNOiZM.jpg
3
{'enabled': False, 'images': [{'id': 'RQPjjxfFn-CT5m0VSeqTmjIHMcTtxm4IvmImXdu-_SU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/RQPjjxfFn-CT5m0VSeqTmjIHMcTtxm4IvmImXdu-_SU.png?width=108&crop=smart&auto=webp&s=0b1c5b177de3bdd77d25f129fd287e4e014dea2b', 'width': 108}, {'height': 108, 'url': 'h...
Page Assist
0
Can Page Assist from n4ze3m be the best thing ever happened to Ollama after trying their new own GUI (and all the others btw) ? A superlight browser extension with everything I need and more.People who tried it,what do you think?
2025-07-31T07:02:17
https://www.reddit.com/r/LocalLLaMA/comments/1mduqj2/page_assist/
Illustrious-Dot-6888
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mduqj2
false
null
t3_1mduqj2
/r/LocalLLaMA/comments/1mduqj2/page_assist/
false
false
self
0
null
Local. Open Source App with MCP Server compatability
0
Currently use OpenWebUI but MCP support on it is at best a struggle. Is there any open source app that can run MCP servers with local LLMs?
2025-07-31T06:51:20
https://www.reddit.com/r/LocalLLaMA/comments/1mduk5t/local_open_source_app_with_mcp_server/
rm-rf-rm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mduk5t
false
null
t3_1mduk5t
/r/LocalLLaMA/comments/1mduk5t/local_open_source_app_with_mcp_server/
false
false
self
0
null
Help for new LLM Rig
0
Hello fellow Redditors, i need to ask for your wisdom. I got the request from my Boss to implement a localy running AI System to analyze our company data and let our employes check for information, create knowleagebases and create nice sounding emails. So now to my "Problem", i got a budget of about 5000€ to build a ...
2025-07-31T06:47:35
https://www.reddit.com/r/LocalLLaMA/comments/1mdui1j/help_for_new_llm_rig/
juli199696
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdui1j
false
null
t3_1mdui1j
/r/LocalLLaMA/comments/1mdui1j/help_for_new_llm_rig/
false
false
self
0
null
Models that are good in understanding and producing German text?
0
Hi. Has anyone tried multiple (below 100B) models against text understanding tasks on German or other "mainstream" European languages? Are any of the more famous ones (Gemma3 27b, Qwens, DeepSeek etc) particularly superior or inferior in this? Thanks
2025-07-31T06:44:00
https://www.reddit.com/r/LocalLLaMA/comments/1mdug0j/models_that_are_good_in_understanding_and/
ihatebeinganonymous
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdug0j
false
null
t3_1mdug0j
/r/LocalLLaMA/comments/1mdug0j/models_that_are_good_in_understanding_and/
false
false
self
0
null
Can llama-swap work without specifying the "model" field in API requests?
0
I'm running llama-swap and trying to simplify my client integration. Ideally, I want the client to always hit the same endpoint (/v1/chat/completions) without specifying a model name in the payload. I want to run only one model at a time, and be able to switch that model from the llama-swap Web UI or by reloading the ...
2025-07-31T06:43:50
https://www.reddit.com/r/LocalLLaMA/comments/1mdufwb/can_llamaswap_work_without_specifying_the_model/
discoveringnature12
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdufwb
false
null
t3_1mdufwb
/r/LocalLLaMA/comments/1mdufwb/can_llamaswap_work_without_specifying_the_model/
false
false
self
0
null
Training loss is higher than validation loss for a few steps
5
Hi, I'm currently fine-tuning a Llama 3.1 Instruct model with my own dataset (train/test split done before data augmentation on the train set), and I have noticed that no matter what parameters I change, or even if I split my data without data augmentation, the training loss is always higher than the validation loss. For exa...
2025-07-31T06:39:37
https://www.reddit.com/r/LocalLLaMA/comments/1mdudj3/training_loss_is_higher_than_validation_loss_for/
Head_Mushroom_3748
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdudj3
false
null
t3_1mdudj3
/r/LocalLLaMA/comments/1mdudj3/training_loss_is_higher_than_validation_loss_for/
false
false
self
5
null
Is there a way to download more Kokoro tts voices?
4
I was so impressed the first time I used Kokoro TTS, and I was just wondering: there must be more of these extra voices I can download on GitHub, right, or are the default voice packs the only ones there are?
2025-07-31T06:32:26
https://www.reddit.com/r/LocalLLaMA/comments/1mdu9gr/is_there_a_way_to_download_more_kokoro_tts_voices/
Jacob12have
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdu9gr
false
null
t3_1mdu9gr
/r/LocalLLaMA/comments/1mdu9gr/is_there_a_way_to_download_more_kokoro_tts_voices/
false
false
self
4
null
Breakout clone by Devstral and Qwen3 30B A3B Thinking with particle effects and Web Audio reverb.
3
[Qwen3 30B A3B Thinking GGUF](https://huggingface.co/unsloth/Qwen3-30B-A3B-Thinking-2507-GGUF) [Devstral Small 1.1 GGUF](https://huggingface.co/unsloth/Devstral-Small-2507-GGUF) Qwen essentially set up the code and Devstral debugged it. Devstral added the nice Web Audio sound effects while Qwen implemented the halway ...
2025-07-31T06:31:03
https://codepen.io/mars-and-bars/full/OPyWjMy
EuphoricPenguin22
codepen.io
1970-01-01T00:00:00
0
{}
1mdu8p0
false
null
t3_1mdu8p0
/r/LocalLLaMA/comments/1mdu8p0/breakout_clone_by_devstral_and_qwen3_30b_a3b/
false
false
default
3
null
Where are the UK and India?
0
We only see companies from the US and China, plus France's lone seed, mistralai. Where is the UK, France's counterpart, and India, the nation with the largest population?
2025-07-31T06:29:38
https://www.reddit.com/r/LocalLLaMA/comments/1mdu7se/where_is_uk_and_india/
Remarkable-Pea645
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdu7se
false
null
t3_1mdu7se
/r/LocalLLaMA/comments/1mdu7se/where_is_uk_and_india/
false
false
self
0
null
I Built a Full Stack App Using a Local LLM (GLM 4.5 Air) and RooCode. Here's How It Went
8
Like the title says, I ran **GLM 4.5 Air Q4** on my local machine using **RooCode** inside **VS Code**, and I was able to build a functional CRUD-style web application. Users can register with a password, log in, and log out from the client side. All authentication is handled using JWTs. The experience honestly excee...
2025-07-31T06:23:49
https://www.reddit.com/r/LocalLLaMA/comments/1mdu4io/i_built_a_full_stack_app_using_a_local_llm_glm_45/
gamblingapocalypse
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdu4io
false
null
t3_1mdu4io
/r/LocalLLaMA/comments/1mdu4io/i_built_a_full_stack_app_using_a_local_llm_glm_45/
false
false
self
8
null
Resources for developing tools around LLMs?
1
[removed]
2025-07-31T05:25:28
https://www.reddit.com/r/LocalLLaMA/comments/1mdt5el/resources_for_developing_tools_around_llms/
mjomdal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdt5el
false
null
t3_1mdt5el
/r/LocalLLaMA/comments/1mdt5el/resources_for_developing_tools_around_llms/
false
false
self
1
null
Tested Claude Code with GLM-4.5-Air and it works pretty darn well, but also explains things in Chinese :D
1
2025-07-31T04:51:39
https://i.redd.it/cxtqa73l45gf1.png
Alarming-Presence-93
i.redd.it
1970-01-01T00:00:00
0
{}
1mdskda
false
null
t3_1mdskda
/r/LocalLLaMA/comments/1mdskda/tested_claude_code_with_glm45air_and_it_works/
false
false
default
1
{'enabled': True, 'images': [{'id': 'cxtqa73l45gf1', 'resolutions': [{'height': 102, 'url': 'https://preview.redd.it/cxtqa73l45gf1.png?width=108&crop=smart&auto=webp&s=b3ef0cd14a1ed96d34922d033299e3828a54d00a', 'width': 108}, {'height': 204, 'url': 'https://preview.redd.it/cxtqa73l45gf1.png?width=216&crop=smart&auto=we...
Unbelievable: China Dominates Top 10 Open-Source Models on HuggingFace
835
https://preview.redd.it/…gface.co/models)
2025-07-31T04:50:27
https://www.reddit.com/r/LocalLLaMA/comments/1mdsjn2/unbelievable_china_dominates_top_10_opensource/
jiawei243
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdsjn2
false
null
t3_1mdsjn2
/r/LocalLLaMA/comments/1mdsjn2/unbelievable_china_dominates_top_10_opensource/
false
false
https://a.thumbs.redditm…_UBbzkN3H6v4.jpg
835
null
Best thing you've automated?
2
I love efficiency. I’m always hoping to find a solution that allows me to automate basic coding tasks like “create some css that makes a menu that looks like this” to leave running while I go to work. Main problem with this currently is that AI will often stop and declare it’s done, and then you have to make fixes to w...
2025-07-31T04:47:13
https://www.reddit.com/r/LocalLLaMA/comments/1mdshnt/best_thing_youve_automated/
Shadow-Amulet-Ambush
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdshnt
false
null
t3_1mdshnt
/r/LocalLLaMA/comments/1mdshnt/best_thing_youve_automated/
false
false
self
2
null
I started running Claude Code with glm-4.5-air and now it's explaining things to me in Chinese -- works pretty well though!
1
https://preview.redd.it/…4fba23f8508a5c6)
2025-07-31T04:46:26
https://www.reddit.com/r/LocalLLaMA/comments/1mdsh60/i_started_running_claude_code_with_glm45air_and/
Alarming-Presence-93
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdsh60
false
null
t3_1mdsh60
/r/LocalLLaMA/comments/1mdsh60/i_started_running_claude_code_with_glm45air_and/
false
false
https://b.thumbs.redditm…XSW7PCZq_YZQ.jpg
1
null
Is vast.ai fucking me over?
5
When I run nvidia-smi, it shows that I only have access to 1/8 of the GPU. https://preview.redd.it/6zgjw8o035gf1.png?width=1418&format=png&auto=webp&s=8ead6b954f5e7370b532380b2597ad96fca7a924 Also, when I ran vastai show instance 24505676 I saw the gpu\_frac was only 1/8. I thought I was going to get my own exclu...
2025-07-31T04:45:06
https://www.reddit.com/r/LocalLLaMA/comments/1mdsgax/is_vastai_fucking_me_over/
jfang00007
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdsgax
false
null
t3_1mdsgax
/r/LocalLLaMA/comments/1mdsgax/is_vastai_fucking_me_over/
false
false
https://b.thumbs.redditm…bjzTt2_6y0Hg.jpg
5
null
Can we make a reward system for LLMs that operates like drug addiction? When the model gets things right, it gets a hit. Faster and better the solution, the larger the hit. Fail? Withdrawals.
0
Is this a viable solution to alignment?
2025-07-31T04:21:50
https://www.reddit.com/r/LocalLLaMA/comments/1mds1gx/can_we_make_a_reward_system_for_llms_that/
Fit-Produce420
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mds1gx
false
null
t3_1mds1gx
/r/LocalLLaMA/comments/1mds1gx/can_we_make_a_reward_system_for_llms_that/
false
false
self
0
null
Best local LLM that can read text in images? (8 GB graphic card)
6
What's the best local model to run these days on 8 GB RAM card that can read images with text in them?
2025-07-31T04:15:44
https://www.reddit.com/r/LocalLLaMA/comments/1mdrxio/best_local_llm_that_can_read_text_in_images_8_gb/
fariazz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdrxio
false
null
t3_1mdrxio
/r/LocalLLaMA/comments/1mdrxio/best_local_llm_that_can_read_text_in_images_8_gb/
false
false
self
6
null
Why is open source so far behind on multi-modality?
80
We're in the era now where open source releases are nipping at the heels of closed-source models in benchmarks. But it's all in text modality. As far as I can tell, there hasn't been a really solid contender when it comes to both being a SOTA model, and also having native audio/image/video input and image/audio outp...
2025-07-31T04:11:01
https://www.reddit.com/r/LocalLLaMA/comments/1mdruc9/why_is_open_source_so_behind_on_multimodalitty/
AnticitizenPrime
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdruc9
false
null
t3_1mdruc9
/r/LocalLLaMA/comments/1mdruc9/why_is_open_source_so_behind_on_multimodalitty/
false
false
self
80
null
I have been learning more about reinforcement learning with verifiable rewards and want to hear people's opinions on it
6
I have seen a lot of new research on RLVR and I want to understand how people are defining rewards for more subjective tasks. Even in coding itself, the fact that code runs doesn't necessarily mean it has achieved what we asked for in the initial prompt. If there are any other blogs or research papers that you can...
2025-07-31T04:02:02
https://i.redd.it/y66synvtv4gf1.jpeg
Able_Transition_1692
i.redd.it
1970-01-01T00:00:00
0
{}
1mdro7c
false
null
t3_1mdro7c
/r/LocalLLaMA/comments/1mdro7c/i_have_been_learning_more_about_reinforcement/
false
false
default
6
{'enabled': True, 'images': [{'id': 'y66synvtv4gf1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/y66synvtv4gf1.jpeg?width=108&crop=smart&auto=webp&s=5cbe9d7cd5c35fff63200b6eed7dcac03cf6a088', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/y66synvtv4gf1.jpeg?width=216&crop=smart&auto=...
Is there a TTS model that works with IPA?
5
I'm looking for a model to sound out some dead languages (such as Old English, Proto-Germanic, Proto-Indo-European) Is there a TTS model which receives IPA characters or any other form of phonetic notation directly (maybe also with a language tag for the accent)?
2025-07-31T03:46:36
https://www.reddit.com/r/LocalLLaMA/comments/1mdrdal/if_there_a_tts_model_that_works_with_ipa/
schattig_eenhoorntje
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdrdal
false
null
t3_1mdrdal
/r/LocalLLaMA/comments/1mdrdal/if_there_a_tts_model_that_works_with_ipa/
false
false
self
5
null
Setup for MOE
4
Maybe I missed it or something, but how are people running MOE models and getting decent speeds? I rented a 3090 on runpod since it also had like 124 gb of RAM. I compiled llama.cpp on it. Got my usual speeds for Qwen3 32B q4km completely offloaded to the 3090. Then tried qwen3 235b MOE at q3km and got prompt eval tim...
2025-07-31T03:06:43
https://www.reddit.com/r/LocalLLaMA/comments/1mdqlc6/setup_for_moe/
fgoricha
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdqlc6
false
null
t3_1mdqlc6
/r/LocalLLaMA/comments/1mdqlc6/setup_for_moe/
false
false
self
4
null
works well!: GLM 4.5 air (MLX) - LM studio (Mac) - Claude code
49
# How I Got claude-code to Work with a Local LLM (via LM Studio) Using a Custom Proxy Hey everyone, I wanted to share a little setup I put together. I was trying to run `claude-code` with a locally hosted model, `glm-4.5-air`, through **LM Studio on my Mac**. I ran into some issues, so I quickly whipped up a proxy...
2025-07-31T03:03:51
https://www.reddit.com/r/LocalLLaMA/comments/1mdqj9g/works_well_glm_45_air_mlx_lm_studio_mac_claude/
ziozzang0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdqj9g
false
null
t3_1mdqj9g
/r/LocalLLaMA/comments/1mdqj9g/works_well_glm_45_air_mlx_lm_studio_mac_claude/
false
false
self
49
{'enabled': False, 'images': [{'id': 'TxoyzwOuovAjomlGsEp03rYjsFxL0tZoKO1Cb4HESDw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/TxoyzwOuovAjomlGsEp03rYjsFxL0tZoKO1Cb4HESDw.png?width=108&crop=smart&auto=webp&s=cef3d87e86e676c2dd97e4c186c290a3a30d01ec', 'width': 108}, {'height': 108, 'url': 'h...
Ollama 0.10 - New app is available for macOS and Windows plus multi-GPU performance improvements, and more
22
2025-07-31T02:43:03
https://github.com/ollama/ollama/releases/tag/v0.10.0
mj3815
github.com
1970-01-01T00:00:00
0
{}
1mdq3sv
false
null
t3_1mdq3sv
/r/LocalLLaMA/comments/1mdq3sv/ollama_010_new_app_is_available_for_macos_and/
false
false
default
22
{'enabled': False, 'images': [{'id': '8CQoAySJDDT43ePa2z6wKZ6f67awzR1xeHnq-ctSP9Q', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8CQoAySJDDT43ePa2z6wKZ6f67awzR1xeHnq-ctSP9Q.png?width=108&crop=smart&auto=webp&s=4b8de8b6845c7bac03a11d66156a8e9be1f7345d', 'width': 108}, {'height': 108, 'url': 'h...
Accessing LM Studio server from iOS
0
https://www.3sparks.net/ works for this. $5. I like it. No affiliation with the dev or company.
2025-07-31T02:41:48
https://www.reddit.com/r/LocalLLaMA/comments/1mdq2vw/accessing_lm_studio_server_from_ios/
jarec707
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdq2vw
false
null
t3_1mdq2vw
/r/LocalLLaMA/comments/1mdq2vw/accessing_lm_studio_server_from_ios/
false
false
self
0
{'enabled': False, 'images': [{'id': '8m3xkR53oipXMpX8xjKw3MX_Z-Fd-VyJGobDepE9G1Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8m3xkR53oipXMpX8xjKw3MX_Z-Fd-VyJGobDepE9G1Y.jpeg?width=108&crop=smart&auto=webp&s=d5d71b7b6fd273d426a70b1b3a967b6fb53a4532', 'width': 108}, {'height': 108, 'url': '...
Horizon Alpha is the new vice king on my own benchmark (legalbench.br)
1
2025-07-31T02:30:00
https://i.redd.it/kp1fofd9f4gf1.png
celsowm
i.redd.it
1970-01-01T00:00:00
0
{}
1mdpu0s
false
null
t3_1mdpu0s
/r/LocalLLaMA/comments/1mdpu0s/horizon_alpha_is_the_new_vice_king_on_my_own/
false
false
default
1
{'enabled': True, 'images': [{'id': 'kp1fofd9f4gf1', 'resolutions': [{'height': 51, 'url': 'https://preview.redd.it/kp1fofd9f4gf1.png?width=108&crop=smart&auto=webp&s=d57739aba068d58fec42e7b2167fbdd1a1c83e8b', 'width': 108}, {'height': 102, 'url': 'https://preview.redd.it/kp1fofd9f4gf1.png?width=216&crop=smart&auto=web...
Host Minimax on cloud?
0
Hello guys. I want to host Minimax 40k on a Huawei cloud server. The issue is that when I git clone it, it takes too much time and the size is in TBs. Can you share a method to host it efficiently on the cloud? P.S. This is a requirement from the client; I need to host it on a cloud server.
2025-07-31T02:24:19
https://www.reddit.com/r/LocalLLaMA/comments/1mdppt1/host_minimax_on_cloud/
aloy_aerith
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdppt1
false
null
t3_1mdppt1
/r/LocalLLaMA/comments/1mdppt1/host_minimax_on_cloud/
false
false
nsfw
0
null
Is Layla good to use, and does it actually run only locally?
1
I just saw an ad for the Layla app and it looks good. I just wonder how good it can be if it only runs locally; it must have very limited data. Also, if my phone is going to act as its server, it has to consume a lot of power. Does anybody here have any experience with this app? Does it really work the way they advertise?
2025-07-31T02:13:33
https://www.reddit.com/r/LocalLLaMA/comments/1mdphif/is_layla_good_to_use_and_is_it_actually_run_only/
Capasak
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdphif
false
null
t3_1mdphif
/r/LocalLLaMA/comments/1mdphif/is_layla_good_to_use_and_is_it_actually_run_only/
false
false
self
1
null
Made a unified table of benchmarks using AI
74
They keep putting different reference models in their graphs and we have to look at many graphs to see where we're at so I used AI to put them all in a single table. If any of you find errors, I'll delete this post.
2025-07-31T02:11:04
https://i.redd.it/gxir7usrb4gf1.png
DrVonSinistro
i.redd.it
1970-01-01T00:00:00
0
{}
1mdpfm8
false
null
t3_1mdpfm8
/r/LocalLLaMA/comments/1mdpfm8/made_a_unified_table_of_benchmarks_using_ai/
false
false
default
74
{'enabled': True, 'images': [{'id': 'gxir7usrb4gf1', 'resolutions': [{'height': 91, 'url': 'https://preview.redd.it/gxir7usrb4gf1.png?width=108&crop=smart&auto=webp&s=f70288958d623f870f4e10912a44bb9b39cc5409', 'width': 108}, {'height': 183, 'url': 'https://preview.redd.it/gxir7usrb4gf1.png?width=216&crop=smart&auto=web...
Horizon-alpha: A new stealthed model on openrouter sweeps EQ-Bench leaderboards
107
[https://eqbench.com/](https://eqbench.com/) Creative Writing Samples: [https://eqbench.com/results/creative-writing-v3/openrouter\_\_horizon-alpha.html](https://eqbench.com/results/creative-writing-v3/openrouter__horizon-alpha.html) Longform Writing Samples: [https://eqbench.com/results/creative-writing-longform/ope...
2025-07-31T02:09:14
https://www.reddit.com/gallery/1mdpe8v
_sqrkl
reddit.com
1970-01-01T00:00:00
0
{}
1mdpe8v
false
null
t3_1mdpe8v
/r/LocalLLaMA/comments/1mdpe8v/horizonalpha_a_new_stealthed_model_on_openrouter/
false
false
https://b.thumbs.redditm…_fMeERdxNnXQ.jpg
107
null
Valuation of companies like Anthropic
3
Anyone else get the impression that open source LLMs will wipe out the valuation of companies like Anthropic? New, competitive models are getting released nearly every day lately. Many can handle 80-90% of standard tasks. It is starting to look like a race to the bottom.
2025-07-31T02:07:52
https://www.reddit.com/r/LocalLLaMA/comments/1mdpd70/valuation_of_companies_like_anthropic/
seoulsrvr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdpd70
false
null
t3_1mdpd70
/r/LocalLLaMA/comments/1mdpd70/valuation_of_companies_like_anthropic/
false
false
self
3
null
domoai’s 360 view lets you animate full spins like leiapix but it’s actually 3d
0
[leiapix](https://leiapix-ai.com/#google_vignette) gives parallax depth but no real rotation. [domoai](https://www.domoai.app/home?via=081621AUG)'s 360 view rotates the entire character like a turntable. tried it on a cartoon cat and got a full clean spin. same file worked for a hug scene after too. anyone tried this o...
2025-07-31T01:38:05
https://www.reddit.com/r/LocalLLaMA/comments/1mdoqnv/domoais_360_view_lets_you_animate_full_spins_like/
Neat_Chapter_9055
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdoqnv
false
null
t3_1mdoqnv
/r/LocalLLaMA/comments/1mdoqnv/domoais_360_view_lets_you_animate_full_spins_like/
false
false
self
0
null
Is there any way to train when a model sends messages?
0
I'm looking to finetune a model not only on the style and content of messages but also on the times they are sent/delay time in a conversation given a message history on something like discord. Does anyone know how this could be done? Thank you.
2025-07-31T01:31:12
https://www.reddit.com/r/LocalLLaMA/comments/1mdolik/is_there_any_way_to_train_when_a_model_sends/
SignificanceSad562
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdolik
false
null
t3_1mdolik
/r/LocalLLaMA/comments/1mdolik/is_there_any_way_to_train_when_a_model_sends/
false
false
self
0
null
Anyone have experience with NVIDIA Nemotron?
5
I've been playing around with it - seems promising but I haven't investigated it thoroughly. Does anyone have any experience with this model?
2025-07-31T01:22:21
https://www.reddit.com/r/LocalLLaMA/comments/1mdoevz/anyone_have_experience_with_nvidia_nemotron/
seoulsrvr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdoevz
false
null
t3_1mdoevz
/r/LocalLLaMA/comments/1mdoevz/anyone_have_experience_with_nvidia_nemotron/
false
false
self
5
null
Why was this post removed?
1
[removed]
2025-07-31T01:19:20
https://i.redd.it/1e1senul24gf1.png
entsnack
i.redd.it
1970-01-01T00:00:00
0
{}
1mdocma
false
null
t3_1mdocma
/r/LocalLLaMA/comments/1mdocma/why_was_this_post_removed/
false
false
default
1
{'enabled': True, 'images': [{'id': '1e1senul24gf1', 'resolutions': [{'height': 36, 'url': 'https://preview.redd.it/1e1senul24gf1.png?width=108&crop=smart&auto=webp&s=a6aa76884e744041f8ca8fe12ed79d46d42ef579', 'width': 108}, {'height': 73, 'url': 'https://preview.redd.it/1e1senul24gf1.png?width=216&crop=smart&auto=webp...
So what benchmark websites do you refer to? (July 2025 edition)
7
Standard disclaimers: nobody should fully trust a benchmark website to judge a model, models should be tested separately, etc etc. So, now that we mentioned that, what websites are most useful (*as a reference point*) for how good a model is? Historically, I've used https://livebench.ai/ but they've kind of gone do...
2025-07-31T01:02:23
https://www.reddit.com/r/LocalLLaMA/comments/1mdnzym/so_what_benchmark_websites_do_you_refer_to_july/
DepthHour1669
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdnzym
false
null
t3_1mdnzym
/r/LocalLLaMA/comments/1mdnzym/so_what_benchmark_websites_do_you_refer_to_july/
false
false
self
7
null
It's been a while and there is no LLaMA 4 fixes
1
[removed]
2025-07-31T00:56:10
https://www.reddit.com/r/LocalLLaMA/comments/1mdnv55/its_been_a_while_and_there_is_no_llama_4_fixes/
Electrical_Gas_77
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdnv55
false
null
t3_1mdnv55
/r/LocalLLaMA/comments/1mdnv55/its_been_a_while_and_there_is_no_llama_4_fixes/
false
false
self
1
null
The DGX Spark JPN price will be $6k at one retailer
4
2025-07-31T00:48:28
https://x.com/ottoserver/status/1950366390151762172
Django_McFly
x.com
1970-01-01T00:00:00
0
{}
1mdnp8j
false
null
t3_1mdnp8j
/r/LocalLLaMA/comments/1mdnp8j/the_dgx_spark_jpn_price_will_be_6k_at_one_retailer/
false
false
default
4
null
Ideological alignment at its finest
8
https://preview.redd.it/…8d34908b8 Yeesh
2025-07-31T00:38:06
https://www.reddit.com/r/LocalLLaMA/comments/1mdnhb1/ideological_alignment_at_its_finest/
MerePotato
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdnhb1
false
null
t3_1mdnhb1
/r/LocalLLaMA/comments/1mdnhb1/ideological_alignment_at_its_finest/
false
false
https://b.thumbs.redditm…QP4Q53jFnbZU.jpg
8
null
Deepseek just won the best paper award at ACL 2025 with a breakthrough innovation in long context, a model using this might come soon
528
2025-07-31T00:23:44
https://arxiv.org/abs/2502.11089
Charuru
arxiv.org
1970-01-01T00:00:00
0
{}
1mdn6dp
false
null
t3_1mdn6dp
/r/LocalLLaMA/comments/1mdn6dp/deepseek_just_won_the_best_paper_award_at_acl/
false
false
default
528
null
Chinese models pulling away
1218
2025-07-31T00:06:15
https://i.redd.it/727keqreo3gf1.png
Kniffliger_Kiffer
i.redd.it
1970-01-01T00:00:00
0
{}
1mdmsu9
false
null
t3_1mdmsu9
/r/LocalLLaMA/comments/1mdmsu9/chinese_models_pulling_away/
false
false
default
1218
{'enabled': True, 'images': [{'id': '727keqreo3gf1', 'resolutions': [{'height': 142, 'url': 'https://preview.redd.it/727keqreo3gf1.png?width=108&crop=smart&auto=webp&s=e6a70ba5db010ef5c37f2d20d7547480395fec85', 'width': 108}, {'height': 284, 'url': 'https://preview.redd.it/727keqreo3gf1.png?width=216&crop=smart&auto=we...
DIY LLM inference engine learning
5
Hey, I’m a shit-tier software engineer with 25 years of experience writing shitty web apps. I’d like to build my own LLM inference engine, preferably not in a QR code or TypeScript types, just to help me understand how LLMs work under the hood. Point me to books and learning resources so I can make this happen.
2025-07-31T00:04:13
https://www.reddit.com/r/LocalLLaMA/comments/1mdmr8m/diy_llm_inference_engine_learning/
createthiscom
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdmr8m
false
null
t3_1mdmr8m
/r/LocalLLaMA/comments/1mdmr8m/diy_llm_inference_engine_learning/
false
false
self
5
null
Help choosing between Ollama, llama.cpp, or something else for background LLM server (used with dictation)
1
I'm setting up a local LLM to run in the background on my MacBook Pro (M3 Pro). The main use case is this: I use a dictation app (like SuperWhisper or Spokenly) to convert my voice to text, and then send that text to a local LLM server for processing. Think: summarizing, answering, rephrasing, correction, or responding...
2025-07-30T23:43:16
https://www.reddit.com/r/LocalLLaMA/comments/1mdma9a/help_choosing_between_ollama_llamacpp_or/
discoveringnature12
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdma9a
false
null
t3_1mdma9a
/r/LocalLLaMA/comments/1mdma9a/help_choosing_between_ollama_llamacpp_or/
false
false
self
1
null
How to optimize TPS using IK_llama.cpp?
2
I've been running artificial intelligence locally for the past few months, but I've gotten annoyed with the performance. I've just switched to **IK\_llama.cpp**, and I'm looking to optimize my command, although I haven't found any documentation. I've managed to get it to around **15 t/s** (quite good, but I'...
2025-07-30T23:21:27
https://www.reddit.com/r/LocalLLaMA/comments/1mdlss2/how_to_optimize_tps_using_ik_llamacpp/
Final-Message2150
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdlss2
false
null
t3_1mdlss2
/r/LocalLLaMA/comments/1mdlss2/how_to_optimize_tps_using_ik_llamacpp/
false
false
self
2
null
Hey everyone, I'm pretty new at this. I'm a designer, please help me. Stupid question
0
**Goal:** I'm building a local AI assistant — like a voice-based Alfred — that runs entirely on my machine. I've already downloaded and installed **LLaMA 2 13B Q5 Chat** for this purpose. However, I've noticed that the chat model includes certain **filters** or restrictions that limit the assistant’s responses. In m...
2025-07-30T23:14:51
https://www.reddit.com/r/LocalLLaMA/comments/1mdln75/hey_everyone_im_pretty_new_at_this_im_a_designer/
Iamtheguyyy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdln75
false
null
t3_1mdln75
/r/LocalLLaMA/comments/1mdln75/hey_everyone_im_pretty_new_at_this_im_a_designer/
false
false
self
0
null
Kimi K2 vs Claude 4 Sonnet - Unexpected Review Result (400k token Codebase)
50
I tested Kimi K2 again, against Claude 4 Sonnet (Sonnet 4) this time, here are my findings (vid in comments): \- K2 isn't only less reliable in VSCode tool calling, it's considerably less in Cline as well, vs Claude 4 Sonnet \- I integrated K2 via OpenRouter inference into my own application LIVE and it did the same ...
2025-07-30T23:03:20
https://www.reddit.com/r/LocalLLaMA/comments/1mdldom/kimi_k2_vs_claude_4_sonnet_unexpected_review/
marvijo-software
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdldom
false
null
t3_1mdldom
/r/LocalLLaMA/comments/1mdldom/kimi_k2_vs_claude_4_sonnet_unexpected_review/
false
false
self
50
{'enabled': False, 'images': [{'id': 'dv_I54LGpmnqKoSxBiYuiXlgStoZanHgVx1garYxUvY', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/dv_I54LGpmnqKoSxBiYuiXlgStoZanHgVx1garYxUvY.jpeg?width=108&crop=smart&auto=webp&s=99ec0a4d4a5f8c158298ac9030280b7e7f862186', 'width': 108}, {'height': 162, 'url': '...
New to this and trying to learn on the fly
3
Be gentle, this is my first time lol I have a small home network and decided to build out a local llm to handle work stuff with more security. Background: Wife has been using ChatGPT for a few months, and I started investigating other cloud based tools. I found that if you want to do any real work, you need about...
2025-07-30T22:58:19
https://www.reddit.com/r/LocalLLaMA/comments/1mdl999/new_to_this_and_trying_to_learn_on_the_fly/
JellyfishAutomatic25
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdl999
false
null
t3_1mdl999
/r/LocalLLaMA/comments/1mdl999/new_to_this_and_trying_to_learn_on_the_fly/
false
false
self
3
null
AMD released a fully open-source 1B model
0
2025-07-30T22:31:35
https://i.redd.it/l2q8mdvs83gf1.png
dayladen
i.redd.it
1970-01-01T00:00:00
0
{}
1mdkmd8
false
null
t3_1mdkmd8
/r/LocalLLaMA/comments/1mdkmd8/amd_released_a_fully_open_source_model_1b/
false
false
https://b.thumbs.redditm…2NyXWaJUyo4E.jpg
0
{'enabled': True, 'images': [{'id': 'r9j4Hmo5MgTMwlrSri5SrDr61Wvw8Py4HpL7juk-bYc', 'resolutions': [{'height': 119, 'url': 'https://preview.redd.it/l2q8mdvs83gf1.png?width=108&crop=smart&auto=webp&s=6bb716fe6ecce3e4610a38010478aa1e53bc78a8', 'width': 108}, {'height': 239, 'url': 'https://preview.redd.it/l2q8mdvs83gf1.pn...
In light of recent events
65
On a serious note, do y'all think that if Anthropic ever fails, they will release something, even an older model, as open source? Or will they drag everything they have researched down into the grave?
2025-07-30T22:25:16
https://i.redd.it/v5ggd7zq73gf1.jpeg
uksiev
i.redd.it
1970-01-01T00:00:00
0
{}
1mdkgx7
false
null
t3_1mdkgx7
/r/LocalLLaMA/comments/1mdkgx7/in_light_of_recent_events/
false
false
default
65
{'enabled': True, 'images': [{'id': 'v5ggd7zq73gf1', 'resolutions': [{'height': 105, 'url': 'https://preview.redd.it/v5ggd7zq73gf1.jpeg?width=108&crop=smart&auto=webp&s=99c86a8aca1e2be4d6b30b67cd0e38124fa7d4e3', 'width': 108}, {'height': 211, 'url': 'https://preview.redd.it/v5ggd7zq73gf1.jpeg?width=216&crop=smart&auto=...
How to locally run Grok 4 with 2x AMD 7900 XTX GPUs? (24 GB VRAM x2)
0
Heard Grok 4 runs well on AMD, wondering how to run it locally, and what the benefits are compared to running it as a web app like almost everyone else does.
2025-07-30T22:11:49
https://www.reddit.com/r/LocalLLaMA/comments/1mdk516/how_to_locally_run_grok_4_with_2x_amd_7900_xtx/
PolyglotGeologist
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdk516
false
null
t3_1mdk516
/r/LocalLLaMA/comments/1mdk516/how_to_locally_run_grok_4_with_2x_amd_7900_xtx/
false
false
self
0
null
Weird issue running qwen3-30b-a3b-thinking in llama.cpp and openwebui on my 4090 and 64GB of RAM rig, Q4_K_M
2
I open llama.cpp and start a server, then open OpenWebUI to use the model. It starts generating, but about 3 minutes into a long coding task it grinds to a halt, my monitors disconnect, and I have to power-cycle the PC. How do I fix that? Is it just that Q4KM is t...
2025-07-30T22:10:50
https://www.reddit.com/r/LocalLLaMA/comments/1mdk46y/weird_issue_running_qwen330ba3bthinking_in/
Pro-editor-1105
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdk46y
false
null
t3_1mdk46y
/r/LocalLLaMA/comments/1mdk46y/weird_issue_running_qwen330ba3bthinking_in/
false
false
self
2
null
The Holy Grail
0
I've been anticipating what I consider the "Holy Grail" of large language models ever since the launch of ChatGPT. To me, that means a model capable of running locally on consumer-grade computers, without the need for quantization, and meeting the following technical criteria: * Inference throughput of at least 20 tok...
2025-07-30T21:55:58
https://www.reddit.com/r/LocalLLaMA/comments/1mdjqy5/the_holy_grail/
No-Search9350
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdjqy5
false
null
t3_1mdjqy5
/r/LocalLLaMA/comments/1mdjqy5/the_holy_grail/
false
false
self
0
null
Rethinking AI Voice Agent Infrastructure — Open Source Approach with OpenInteractions
1
Hey folks, I’ve been working on [**OpenInteractions**](https://www.openinteractions.live), an **open-source voice AI infrastructure** built for real-time, low-latency conversations. OSS Repo: [https://github.com/OpenInteractions/OpenInteractions](https://github.com/OpenInteractions/OpenInteractions) Project site...
2025-07-30T21:54:28
https://www.reddit.com/r/LocalLLaMA/comments/1mdjplf/rethinking_ai_voice_agent_infrastructure_open/
openinteractions
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdjplf
false
null
t3_1mdjplf
/r/LocalLLaMA/comments/1mdjplf/rethinking_ai_voice_agent_infrastructure_open/
false
false
self
1
null
GPT spending money on marketing = GPT 5 delays
0
Guerrilla marketing. I wish GPT o3 were as good; they'd need to market less that way. https://preview.redd.it/9owo35di03gf1.png?width=1001&format=png&auto=webp&s=3ffb74a551e96259fb5ca616747858c9c4dfd6fd
2025-07-30T21:49:11
https://www.reddit.com/r/LocalLLaMA/comments/1mdjl0q/gpt_spending_money_on_marketing_gpt_5_delays/
TadpoleNorth1773
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdjl0q
false
null
t3_1mdjl0q
/r/LocalLLaMA/comments/1mdjl0q/gpt_spending_money_on_marketing_gpt_5_delays/
false
false
https://b.thumbs.redditm…2CfthGFg7MEw.jpg
0
null
After 6 months of fiddling with local AI. Here’s my curated models list that work for 90% of my needs. What’s yours?
278
All models are Unsloth UD Q4_K_XL, except Gemma3-27B, which is IQ3. Running all of these with 10-12k context at 4-30 t/s across models. The most used are Mistral-24B, Gemma3-27B, and Granite3.3-2B. Mistral and Gemma are for general QA and random text tools. Granite is for article summaries and random small RAG re...
2025-07-30T21:38:07
https://i.redd.it/jzljyi4tw2gf1.jpeg
simracerman
i.redd.it
1970-01-01T00:00:00
0
{}
1mdjb67
false
null
t3_1mdjb67
/r/LocalLLaMA/comments/1mdjb67/after_6_months_of_fiddling_with_local_ai_heres_my/
false
false
default
278
{'enabled': True, 'images': [{'id': 'jzljyi4tw2gf1', 'resolutions': [{'height': 109, 'url': 'https://preview.redd.it/jzljyi4tw2gf1.jpeg?width=108&crop=smart&auto=webp&s=85504c1d503f59db68dd29902ebe53c3ae9805bf', 'width': 108}, {'height': 218, 'url': 'https://preview.redd.it/jzljyi4tw2gf1.jpeg?width=216&crop=smart&auto=...
Complete Mistral Coding Stack but for enterprise only
19
[https://mistral.ai/news/codestral-25-08](https://mistral.ai/news/codestral-25-08) Mistral just released a new version of Codestral and their entire coding stack. But god... enterprise only? What the heck? I don't understand the move of locking out everyday coders and shadowing it \^\^'
2025-07-30T21:32:29
https://www.reddit.com/r/LocalLLaMA/comments/1mdj5ww/complete_mistral_coding_stack_but_for_enterprise/
Edereum
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mdj5ww
false
null
t3_1mdj5ww
/r/LocalLLaMA/comments/1mdj5ww/complete_mistral_coding_stack_but_for_enterprise/
false
false
self
19
{'enabled': False, 'images': [{'id': 'QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ.png?width=108&crop=smart&auto=webp&s=757c6641896f42b25e4c88e87dc438f1e8d270bb', 'width': 108}, {'height': 113, 'url': 'h...