| column | dtype | range / classes |
| --- | --- | --- |
| title | string | lengths 1–300 |
| score | int64 | 0–8.54k |
| selftext | string | lengths 0–41.5k |
| created | timestamp[ns] | 2023-04-01 04:30:41 – 2026-03-04 02:14:14 |
| url | string | lengths 0–878 |
| author | string | lengths 3–20 |
| domain | string | lengths 0–82 |
| edited | timestamp[ns] | 1970-01-01 00:00:00 – 2026-02-19 14:51:53 |
| gilded | int64 | 0–2 |
| gildings | string | 7 classes |
| id | string | length 7 |
| locked | bool | 2 classes |
| media | string | lengths 646–1.8k |
| name | string | length 10 |
| permalink | string | lengths 33–82 |
| spoiler | bool | 2 classes |
| stickied | bool | 2 classes |
| thumbnail | string | lengths 4–213 |
| ups | int64 | 0–8.54k |
| preview | string | lengths 301–5.01k |
Simplest way using Claude Code with GLM-4.5
7
* Get an API key from [open.bigmodel.cn](https://open.bigmodel.cn/usercenter/proj-mgmt/apikeys)
* Configure your env as below:

    export ANTHROPIC_BASE_URL=https://open.bigmodel.cn/api/anthropic
    export ANTHROPIC_AUTH_TOKEN={YOUR_API_KEY}

Enjoy it!
2025-08-13T03:13:07
https://www.reddit.com/r/LocalLLaMA/comments/1motf6j/simplest_way_using_claude_code_with_glm45/
Middle-Copy4577
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1motf6j
false
null
t3_1motf6j
/r/LocalLLaMA/comments/1motf6j/simplest_way_using_claude_code_with_glm45/
false
false
self
7
null
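The setup this post describes can be captured in a small shell snippet (endpoint and variable names as given in the post; `{YOUR_API_KEY}` remains a placeholder you must replace):

```shell
# Point Claude Code at the GLM-4.5 Anthropic-compatible endpoint
export ANTHROPIC_BASE_URL=https://open.bigmodel.cn/api/anthropic
export ANTHROPIC_AUTH_TOKEN={YOUR_API_KEY}

# Sanity-check that both variables are set before launching `claude`
echo "base url:  $ANTHROPIC_BASE_URL"
echo "token set: ${ANTHROPIC_AUTH_TOKEN:+yes}"
```

Claude Code reads these variables at startup, so they must be exported in the same shell session (or a profile file) before invoking it.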
What’s your experience with GLM-4.5? Pros and cons?
26
I’ve been using it alongside **Claude Code**, and in my experience it handles most ordinary coding tasks flawlessly. I’m curious how it stacks up against other models in terms of reasoning depth, code quality, and ability to handle edge cases.
2025-08-13T03:08:34
https://www.reddit.com/r/LocalLLaMA/comments/1motbnk/whats_your_experience_with_glm45_pros_and_cons/
Middle-Copy4577
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1motbnk
false
null
t3_1motbnk
/r/LocalLLaMA/comments/1motbnk/whats_your_experience_with_glm45_pros_and_cons/
false
false
self
26
null
Strix Halo with dGPU?
6
Anyone tried using Strix Halo with a dGPU for LLM inference? Wondering if it works over PCIe or with an external GPU.
2025-08-13T02:41:43
https://www.reddit.com/r/LocalLLaMA/comments/1mosrki/strix_halo_with_dgpu/
Admirable_Flower_287
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mosrki
false
null
t3_1mosrki
/r/LocalLLaMA/comments/1mosrki/strix_halo_with_dgpu/
false
false
self
6
null
OpenCUA: Open Foundations for Computer-Use Agents
4
Project Page: [https://opencua.xlang.ai/](https://opencua.xlang.ai/) Models: [https://huggingface.co/collections/xlangai/opencua-open-foundations-for-computer-use-agents-6882014ebecdbbe46074a68d](https://huggingface.co/collections/xlangai/opencua-open-foundations-for-computer-use-agents-6882014ebecdbbe46074a68d) Data...
2025-08-13T02:32:59
https://arxiv.org/abs/2508.09123
ninjasaid13
arxiv.org
1970-01-01T00:00:00
0
{}
1moskq3
false
null
t3_1moskq3
/r/LocalLLaMA/comments/1moskq3/opencua_open_foundations_for_computeruse_agents/
false
false
default
4
null
Kyutai TTS voice embeddings
2
After a lot of thought, I've decided to release a version of the Mimi voice embedder for Kyutai's TTS model. The model is gated on Hugging Face with automatic access, due to legal concerns as I am in the EU. If Kyutai asks me to remove this model I will, as I love their work and don't want to get them into legal trouble. I'll...
2025-08-13T02:21:07
https://www.reddit.com/r/LocalLLaMA/comments/1mosbrf/kyutai_tts_voice_embeddings/
SovietWarBear17
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mosbrf
false
null
t3_1mosbrf
/r/LocalLLaMA/comments/1mosbrf/kyutai_tts_voice_embeddings/
false
false
self
2
null
this is an idea: Jan-v1-4B + SearXNG
13
I think this would be a solution to avoid slowing down our PCs with Docker https://preview.redd.it/qz0oek8v0pif1.png?width=770&format=png&auto=webp&s=2b3954e4109dad53ce2b2941dd7c17b34851ebc7
2025-08-13T01:54:22
https://www.reddit.com/r/LocalLLaMA/comments/1morrnl/this_is_an_idea_janv14b_searxng/
Illustrious-Swim9663
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1morrnl
false
null
t3_1morrnl
/r/LocalLLaMA/comments/1morrnl/this_is_an_idea_janv14b_searxng/
false
false
https://b.thumbs.redditm…PaChqXftSTzY.jpg
13
null
There's an epidemic of copy/paste reddit posts being written by GPT. Have you noticed? I recently joined r/ChatGPT to watch GPT 5 meltdowns (understandable imo) and keep stumbling on GPT written posts.
1
I'm surprised there isn't more self awareness. I come to reddit for the humans not prompt outputs that all have the exact same crescendo.
2025-08-13T01:38:18
https://www.reddit.com/r/LocalLLaMA/comments/1morfbb/theres_an_epidemic_of_copypaste_reddit_posts/
Ok-Application-2261
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1morfbb
false
null
t3_1morfbb
/r/LocalLLaMA/comments/1morfbb/theres_an_epidemic_of_copypaste_reddit_posts/
false
false
self
1
null
Msty Not Seeing My Local Models
0
So I installed Ollama and pulled a few models and it's **awesome**...in the CLI. I then tried to install Msty Studio for a nice front-end interface, and it installed its own (small) model as an example. I'm following [this](https://www.youtube.com/watch?v=GWB9ApTPTv4) tutorial and what's in Msty Studio totally diverg...
2025-08-13T01:32:33
https://www.reddit.com/r/LocalLLaMA/comments/1moraw6/msty_not_seeing_my_local_models/
djfrodo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1moraw6
false
null
t3_1moraw6
/r/LocalLLaMA/comments/1moraw6/msty_not_seeing_my_local_models/
false
false
self
0
{'enabled': False, 'images': [{'id': 'ywv9_kjre7NEENu05WnxXbSK82dbtSr8q6EUp4HTMVI', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/ywv9_kjre7NEENu05WnxXbSK82dbtSr8q6EUp4HTMVI.jpeg?width=108&crop=smart&auto=webp&s=d493007b545457e39059733a336bf7bfed0fd52d', 'width': 108}, {'height': 162, 'url': '...
Someone just extracted the base model from gpt-oss 20b and released it
239
Some interesting bits from the thread > turning gpt-oss back into a base model appears to have trivially reversed its alignment > it will tell us how to build a bomb. it will list all the curse words it knows. it will plan a robbery for me. > MEMORIZATION > after basemodelization, we can trivially test GPT-OSS fo...
2025-08-13T01:20:10
https://x.com/jxmnop/status/1955436067353502083
obvithrowaway34434
x.com
1970-01-01T00:00:00
0
{}
1mor1bd
false
null
t3_1mor1bd
/r/LocalLLaMA/comments/1mor1bd/someone_just_extracted_the_base_model_from_gptoss/
false
false
default
239
null
Apple users: Unsloth's quants could be coming to MLX - if we show interest
123
As title. From [the r/unsloth thread](https://www.reddit.com/r/unsloth/comments/1mlsoar/upcoming_mlx_support_news/): No_Conversation9561: "will there be UD MLX quants?" yoracale: "Working on it we have Macs now!" ... "Oh maybe if demand is more!" If you're interested in MLX UD quants - please show your interest.
2025-08-13T00:55:49
https://www.reddit.com/r/LocalLLaMA/comments/1moqhvf/apple_users_unsloths_quants_could_be_coming_to/
Bus9917
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1moqhvf
false
null
t3_1moqhvf
/r/LocalLLaMA/comments/1moqhvf/apple_users_unsloths_quants_could_be_coming_to/
false
false
self
123
null
I built a one-stop AI-powered research and study solution
0
I was tired of struggling with boring textbooks. So I built the ultimate AI-powered study weapon - and 10,000+ students are already using it. NexNotes AI is an AI-powered tool that helps you streamline your study and learning process. With a suite of features including mind maps, study plans, flowcharts, summaries, and ...
2025-08-13T00:55:18
https://nexnotes-ai.pages.dev
not_banned-1093
nexnotes-ai.pages.dev
1970-01-01T00:00:00
0
{}
1moqhg4
false
null
t3_1moqhg4
/r/LocalLLaMA/comments/1moqhg4/i_built_a_one_stop_al_powered_research_and_study/
false
false
default
0
null
KittenTTS on CPU
18
KittenTTS on RPi5 CPU. Very impressive so far. * Some things I noticed, adding a space at the end of the sentence prevents the voice from cutting off at the end. * Trying all the voices, voice-5-f, voice-3-m, voice-4-m seem to be the most natural sounding. * Generation speed is not too bad, 1-3 seconds depending on ...
2025-08-13T00:50:06
https://v.redd.it/rzedbasfpoif1
ranoutofusernames__
v.redd.it
1970-01-01T00:00:00
0
{}
1moqdfu
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/rzedbasfpoif1/DASHPlaylist.mpd?a=1757638222%2CZjdjMGEwNDcyYzQ1ODU4Y2JlOTZkMjMzYjk1MmViMjRiNDVhNzcxMTk1NmViYTMwYjM1YmU5YTNkYWJlMTUzNA%3D%3D&v=1&f=sd', 'duration': 8, 'fallback_url': 'https://v.redd.it/rzedbasfpoif1/DASH_1080.mp4?source=fallback', 'ha...
t3_1moqdfu
/r/LocalLLaMA/comments/1moqdfu/kittentts_on_cpu/
false
false
https://external-preview…27c53d5ce91cb189
18
{'enabled': False, 'images': [{'id': 'bXY3ajgycWZwb2lmMUIyUS6iPG0HizwrafJsvYarGkvOxfEddHbZYuE3lZJV', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/bXY3ajgycWZwb2lmMUIyUS6iPG0HizwrafJsvYarGkvOxfEddHbZYuE3lZJV.png?width=108&crop=smart&format=pjpg&auto=webp&s=319ed85f45e2267a8f26be7b6af3c0637a70...
LM Studio 0.3.23
65
Opencode testing right now is working without any tool failures. Huge win.
2025-08-13T00:36:34
https://lmstudio.ai/blog/lmstudio-v0.3.23
sleepingsysadmin
lmstudio.ai
1970-01-01T00:00:00
0
{}
1moq2wh
false
null
t3_1moq2wh
/r/LocalLLaMA/comments/1moq2wh/lm_studio_0323/
false
false
default
65
{'enabled': False, 'images': [{'id': 'zP98hWqmZu7rI92YGtSTK2E6AnhmDYmDAkiErXA8_Qk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/zP98hWqmZu7rI92YGtSTK2E6AnhmDYmDAkiErXA8_Qk.png?width=108&crop=smart&auto=webp&s=e6789e1f17961f53af2ba9f6e3aa332b18b08d6f', 'width': 108}, {'height': 113, 'url': 'h...
Fine Tuning on Mi50/Mi60 (under $300 budget) via Unsloth
5
Hi guys: I am having trouble wrapping my head around the requirements for fine tuning. Can I use 2xMi50 @ 32 GB each for fine tuning via unsloth a qwen3:32B model with QLoRA? I don’t care for FP16/BF16 as my use case is for my RAG App. Current LLMs lack the training for my industry and I want to train it for it. My...
2025-08-13T00:25:53
https://www.reddit.com/r/LocalLLaMA/comments/1mopubv/fine_tuning_on_mi50mi60_under_300_budget_via/
exaknight21
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mopubv
false
null
t3_1mopubv
/r/LocalLLaMA/comments/1mopubv/fine_tuning_on_mi50mi60_under_300_budget_via/
false
false
self
5
null
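For a rough sense of whether 2×32 GB MI50s can hold a QLoRA run of a ~32B model, a back-of-envelope estimate helps (every constant below is an assumption on my part: ~0.55 bytes/param for 4-bit NF4 weights with quantization overhead, bf16 adapters, fp32 Adam moments for the adapters only; activations and runtime overhead excluded):

```python
# Back-of-envelope VRAM estimate for QLoRA fine-tuning.
# Assumed constants, not measurements:
#   4-bit base weights  ~0.55 bytes/param (incl. quant block overhead)
#   LoRA adapters       2 bytes/param (bf16)
#   Adam m+v (adapters) 8 bytes/param (fp32)
def qlora_vram_gb(params_b, adapter_params_m=100):
    base = params_b * 0.55                    # quantized base weights
    adapters = adapter_params_m / 1000 * 2    # trainable LoRA weights
    optimizer = adapter_params_m / 1000 * 8   # optimizer state for adapters
    return base + adapters + optimizer

print(round(qlora_vram_gb(32), 1))  # ~32B model: 18.6 GB before activations
```

At roughly 19 GB for weights plus adapter state, a 32B base model fits across two 32 GB cards on paper, but activation memory grows with sequence length and batch size, so the remaining headroom is what actually decides feasibility.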
I need help asap
1
[removed]
2025-08-13T00:12:26
https://www.reddit.com/r/LocalLLaMA/comments/1mopjhx/i_need_help_asap/
CarefulFish1575
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mopjhx
false
null
t3_1mopjhx
/r/LocalLLaMA/comments/1mopjhx/i_need_help_asap/
false
false
self
1
null
a new benchmark for generative graphics and LLMs, please submit some votes!
3
2025-08-12T23:57:00
https://ggbench.com
toisanji
ggbench.com
1970-01-01T00:00:00
0
{}
1mop7ab
false
null
t3_1mop7ab
/r/LocalLLaMA/comments/1mop7ab/a_new_benchmark_for_generative_graphics_and_llms/
false
false
default
3
null
TPS math when using gpu and cpu
1
Hello, I’m trying to wrap my head around the math of predicting token per second on certain builds. If I wanted to run Qwen3 235b-a22b at Q4 using either ollama or lmstudio on a system with an rtx pro 6000 96gb vram and 192gb ram (4each DDR5-5600 48GB) with 32k context what kind of TPS would I see? Online calculators...
2025-08-12T23:56:29
https://www.reddit.com/r/LocalLLaMA/comments/1mop6w8/tps_math_when_using_gpu_and_cpu/
ProfessorCentaur
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mop6w8
false
null
t3_1mop6w8
/r/LocalLLaMA/comments/1mop6w8/tps_math_when_using_gpu_and_cpu/
false
false
self
1
null
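A first-order answer to questions like this treats decoding as memory-bound: each generated token must stream every active weight once, so tokens/sec is roughly bandwidth divided by active-weight bytes. A sketch (the bandwidth and bits-per-weight figures are my assumptions, not measured values; real numbers land lower due to KV-cache reads and imperfect overlap):

```python
# Crude decode-speed heuristic for memory-bandwidth-bound inference:
# TPS ~= effective_bandwidth / bytes_of_active_weights
def est_tps(active_params_b, bits_per_weight, bandwidth_gb_s):
    bytes_per_token = active_params_b * 1e9 * bits_per_weight / 8
    return bandwidth_gb_s * 1e9 / bytes_per_token

# Qwen3-235B-A22B at ~Q4: 22B active params, assume ~4.5 effective bits/weight.
# Experts spilled to dual-channel DDR5-5600 (~89.6 GB/s theoretical peak):
print(round(est_tps(22, 4.5, 89.6), 1))  # ~7.2 tok/s if fully RAM-bound
```

With attention weights, shared experts, and KV cache kept in the RTX Pro 6000's VRAM and only some experts streamed from system RAM, the effective bandwidth sits between the two pools, so real throughput should land somewhere above this RAM-only floor.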
VoltAPI - AI API
1
🚀 Free & paid Discord AI API — chat completions with GPT-4.1, Opus, Claude Sonnet-4, “GPT-5” (where available), and more → join: [https://discord.gg/fwrb6zJm9n](https://discord.gg/fwrb6zJm9n) (and can be used for roocode/cline) documentation of this API > [https://docs.voltapi.online/](https://docs.voltapi.online/)
2025-08-12T23:43:25
https://www.reddit.com/r/LocalLLaMA/comments/1moow61/voltapi_ai_api/
PublicLocal1971
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1moow61
false
null
t3_1moow61
/r/LocalLLaMA/comments/1moow61/voltapi_ai_api/
false
false
self
1
null
Local model for short text rewriting
1
Hi! I have an RTX 3060Ti (8Gb VRAM), i9 12900k, 64Gb DDR5 RAM. I have been searching for a local LLM that I could use to rewrite short messages (one A4 page at most) by changing the tone to polite and professional. 99% of the time the messages will be around half of an A4 page. Basically, a model that will use aroun...
2025-08-12T23:04:49
https://www.reddit.com/r/LocalLLaMA/comments/1monzib/local_model_for_short_text_rewriting/
began_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1monzib
false
null
t3_1monzib
/r/LocalLLaMA/comments/1monzib/local_model_for_short_text_rewriting/
false
false
self
1
null
Setup for dictation / voice control via local LLM on Linux/AMD?
2
*Disclaimer: I'm a bloody newb, sorry in advance.* For health reasons, I'd really like to reduce the amount of typing I have to do, but conventional dictation / speech recognition software never worked for me. And even if it did, it doesn't exist for Linux. Imagine my surprise when I tried voice input on Gemini the ot...
2025-08-12T22:53:29
https://www.reddit.com/r/LocalLLaMA/comments/1monpdx/setup_for_dictation_voice_control_via_local_llm/
fallenguru
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1monpdx
false
null
t3_1monpdx
/r/LocalLLaMA/comments/1monpdx/setup_for_dictation_voice_control_via_local_llm/
false
false
self
2
null
Woah. Letta vs Mem0. (For AI memory nerds)
334
I’m an absolute AI memory nerd, and have probably read every proposal made about memory, and demoed virtually all of the professional solutions out there. But I’m absolutely stunned to see Letta basically call out Mem0 like a WWE feud. To be clear: I do not have any kind of affiliation with any memory company (beyond m...
2025-08-12T22:34:04
https://i.redd.it/8sl96y461oif1.jpeg
LoveMind_AI
i.redd.it
1970-01-01T00:00:00
0
{}
1mon8it
false
null
t3_1mon8it
/r/LocalLLaMA/comments/1mon8it/woah_letta_vs_mem0_for_ai_memory_nerds/
false
false
default
334
{'enabled': True, 'images': [{'id': '8sl96y461oif1', 'resolutions': [{'height': 118, 'url': 'https://preview.redd.it/8sl96y461oif1.jpeg?width=108&crop=smart&auto=webp&s=a8c5acdcb3efa459e9f1cd07c86c29bd8ee0cfb1', 'width': 108}, {'height': 237, 'url': 'https://preview.redd.it/8sl96y461oif1.jpeg?width=216&crop=smart&auto=...
My post for my LLM memory package got removed by Vercel, which suggests concern about potential competition...
0
[https://github.com/GeLi2001/Memoer](https://github.com/GeLi2001/Memoer)
2025-08-12T22:32:55
https://i.redd.it/670ugugq0oif1
JadedBlackberry1804
i.redd.it
1970-01-01T00:00:00
0
{}
1mon7jl
false
null
t3_1mon7jl
/r/LocalLLaMA/comments/1mon7jl/my_post_for_llm_memory_package_got_removed_by/
false
false
default
0
null
My post for my LLM memory package got removed by Vercel, which suggests concern about potential competition...
0
[https://github.com/GeLi2001/Memoer](https://github.com/GeLi2001/Memoer)
2025-08-12T22:32:53
https://i.redd.it/670ugugq0oif1.png
JadedBlackberry1804
i.redd.it
1970-01-01T00:00:00
0
{}
1mon7iu
false
null
t3_1mon7iu
/r/LocalLLaMA/comments/1mon7iu/my_post_for_llm_memory_package_got_removed_by/
false
false
default
0
{'enabled': True, 'images': [{'id': '670ugugq0oif1', 'resolutions': [{'height': 41, 'url': 'https://preview.redd.it/670ugugq0oif1.png?width=108&crop=smart&auto=webp&s=81f1a8beab4155c93ad94f62774f5089f3c5a900', 'width': 108}, {'height': 82, 'url': 'https://preview.redd.it/670ugugq0oif1.png?width=216&crop=smart&auto=webp...
Community Input
0
Hey Everyone, I am building my startup, and I need your input if you have ever worked with RAG! [https://forms.gle/qWBnJS4ZhykY8fyE8](https://forms.gle/qWBnJS4ZhykY8fyE8)
2025-08-12T22:29:48
https://www.reddit.com/r/LocalLLaMA/comments/1mon4q5/community_input/
NikhilAeturi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mon4q5
false
null
t3_1mon4q5
/r/LocalLLaMA/comments/1mon4q5/community_input/
false
false
self
0
null
Tutorial: Open WebUI and llama-swap works great together! Demo of setup, model swapping and activity monitoring.
21
A few people were asking yesterday if Open WebUI works with llama-swap. Short answer: Yes, and it's great! (imho) So I wanted to make a video of the setup and usage. Today was my first time installing Open WebUI and my first time connecting it to llama-swap. I've been using Librechat for a long time but I think I'l...
2025-08-12T22:24:40
https://v.redd.it/xrrwm6q0vnif1
No-Statement-0001
/r/LocalLLaMA/comments/1mon08l/tutorial_open_webui_and_llamaswap_works_great/
1970-01-01T00:00:00
0
{}
1mon08l
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/xrrwm6q0vnif1/DASHPlaylist.mpd?a=1757759085%2CZjZkN2E0NmE1NmM4MTIwNjA2MWM3MDRiNmNlYTNjYzQ1OTBmZjg5ZmYxMjNiMjdjNTFhOTNkODQ2ZmQ4OTlkMg%3D%3D&v=1&f=sd', 'duration': 358, 'fallback_url': 'https://v.redd.it/xrrwm6q0vnif1/DASH_1080.mp4?source=fallback', '...
t3_1mon08l
/r/LocalLLaMA/comments/1mon08l/tutorial_open_webui_and_llamaswap_works_great/
false
false
https://external-preview…a0b3e2d35cca5fa0
21
{'enabled': False, 'images': [{'id': 'bnFqYWc1cTB2bmlmMUYwc6My6NsoFPbVqowBPHfEHm2mF2J7qt4_c2zAJzh5', 'resolutions': [{'height': 84, 'url': 'https://external-preview.redd.it/bnFqYWc1cTB2bmlmMUYwc6My6NsoFPbVqowBPHfEHm2mF2J7qt4_c2zAJzh5.png?width=108&crop=smart&format=pjpg&auto=webp&s=12143706bf7adf85b2585c9a8a15c1456ac9f...
Image Generation
0
For generating basic but correct images, which open-source model is best? I tried "just draw 4 apples" with SDXL and got 5 apples, or infinite apples, etc. Accuracy is important for me. What do you suggest?
2025-08-12T22:18:41
https://www.reddit.com/r/LocalLLaMA/comments/1momv05/image_generation/
Usual-Beautiful8865
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1momv05
false
null
t3_1momv05
/r/LocalLLaMA/comments/1momv05/image_generation/
false
false
self
0
null
Built an LM ChatBot App
0
For those familiar with SillyTavern: I created my own app; it's still a work in progress but coming along nicely. Check it out, it's free but you do have to provide your own API keys. https://schoolhouseai.com/
2025-08-12T22:15:10
https://www.reddit.com/r/LocalLLaMA/comments/1momrum/built_an_lm_chatbot_app/
Pircest
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1momrum
false
null
t3_1momrum
/r/LocalLLaMA/comments/1momrum/built_an_lm_chatbot_app/
false
false
self
0
{'enabled': False, 'images': [{'id': 'R-uadLdXXqJ0AhmchkdN-YexFmAC6VEFN2T6tM35HV0', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/R-uadLdXXqJ0AhmchkdN-YexFmAC6VEFN2T6tM35HV0.jpeg?width=108&crop=smart&auto=webp&s=f46e0f3ea720017ef7b676921753f6d3e790e5ba', 'width': 108}, {'height': 216, 'url': ...
Interested in Information Geometry?
1
Sooooo.... while everyone has been obsessed with making better and cooler models (which I am thankful for, some major beasts dropped lately), I have been working on a different problem. The one that has to do with data. How do we get more from what we already have? Can our current models get better by just giving them ...
2025-08-12T22:11:38
https://www.reddit.com/r/LocalLLaMA/comments/1momorb/interested_in_information_geometry/
vesudeva
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1momorb
false
null
t3_1momorb
/r/LocalLLaMA/comments/1momorb/interested_in_information_geometry/
false
false
self
1
null
UIGEN Team is looking for support
61
Hey everyone! I'm speaking on behalf of the UIGEN team (some of you might know us from these models: [https://huggingface.co/Tesslate/UIGEN-X-32B-0727](https://huggingface.co/Tesslate/UIGEN-X-32B-0727) ) and similar other UI models, with a few of them trending on the front page of Huggingface! Our mission was simple, w...
2025-08-12T21:58:04
https://www.reddit.com/r/LocalLLaMA/comments/1momciv/uigen_team_is_looking_for_support/
United-Rush4073
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1momciv
false
null
t3_1momciv
/r/LocalLLaMA/comments/1momciv/uigen_team_is_looking_for_support/
false
false
self
61
{'enabled': False, 'images': [{'id': 'Tc8Xxcc-vUH3Yw1yQLsERirgCzhaSjsyJC37vsrUECA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Tc8Xxcc-vUH3Yw1yQLsERirgCzhaSjsyJC37vsrUECA.png?width=108&crop=smart&auto=webp&s=e36b257be4acd94fbc645e5ddaf35cd603d66e95', 'width': 108}, {'height': 116, 'url': 'h...
The SERVE-AI-VAL Box - I built a portable local AI-in-a-box that runs off solar & hand crank power for under $300
223
TL:DR I made an offline, off-grid, self-powered, locally-hosted AI server using Google AI Edge Gallery, with Gemma3:4b running on an XREAL Beam Pro. It’s powered by a $50 MQOUNY solar / hand crank / USB power bank. I used heavy duty 3M Velcro-like picture hanging strips to hold it all together. I’m storing it all in a ...
2025-08-12T21:49:20
https://v.redd.it/40yzby3mrnif1
Porespellar
/r/LocalLLaMA/comments/1mom4qm/the_serveaival_box_i_built_a_portable_local/
1970-01-01T00:00:00
0
{}
1mom4qm
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/40yzby3mrnif1/DASHPlaylist.mpd?a=1757756969%2CZjRkZWNkMzE3MTM4YjA0YzcyN2IxY2M3NDVhMzk0OGUxZWJmMzAyMTE2ODdhNTg1N2M2YWU2OGQ1NThkMTg4NA%3D%3D&v=1&f=sd', 'duration': 60, 'fallback_url': 'https://v.redd.it/40yzby3mrnif1/DASH_1080.mp4?source=fallback', 'h...
t3_1mom4qm
/r/LocalLLaMA/comments/1mom4qm/the_serveaival_box_i_built_a_portable_local/
false
false
https://external-preview…0dde86c53d758992
223
{'enabled': False, 'images': [{'id': 'NHFweWRmMW1ybmlmMcFLkpQep1-CmSQZ5gYPoLq4j-dB85f-NSL82e-hnm-C', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/NHFweWRmMW1ybmlmMcFLkpQep1-CmSQZ5gYPoLq4j-dB85f-NSL82e-hnm-C.png?width=108&crop=smart&format=pjpg&auto=webp&s=4469fb60629b6443f33761a74371fc6583093...
Best coder LLM that has vision model?
2
Hey all, I'm trying to use a LLM that works well with coding but also has image recognition, so I can submit a screenshot as part of the RAG to create whatever it is I need to create. Right now I'm using Unsloth's Qwen3-Coder-30B-A3B-Instruct-GGUF:Q4_K_XL which works amazing, however, I can't give it an image to wo...
2025-08-12T21:46:19
https://www.reddit.com/r/LocalLLaMA/comments/1mom1x7/best_coder_llm_that_has_vision_model/
StartupTim
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mom1x7
false
null
t3_1mom1x7
/r/LocalLLaMA/comments/1mom1x7/best_coder_llm_that_has_vision_model/
false
false
self
2
null
qwen base models are weird
8
It really feels like Qwen's base models since 2.5 are trained like instruct models. Every time I input something, it ends up looking like it comes from instruction fine-tuning data. Why do they still call it "base" when an assistant appears out of nowhere??? [Qwen.Qwen3-30B-A3B-Base.Q5\_K\_M.gguf; aut...
2025-08-12T21:40:32
https://www.reddit.com/r/LocalLLaMA/comments/1molwjq/qwen_base_models_are_weird/
shockwaverc13
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1molwjq
false
null
t3_1molwjq
/r/LocalLLaMA/comments/1molwjq/qwen_base_models_are_weird/
false
false
https://b.thumbs.redditm…YG2UXFpIXaXU.jpg
8
null
Beginner: Compatibility with old GPUs; Best "good enough" specs; Local vs. cloud; Software choice
1
Hi Reddit. I've experimented with *GPT4all* and *LMStudio* on a laptop with Nvidia 3060 Mobile and would like to install an NVIDIA GPU in my desktop PC with AMD Ryzen 9 7900X and NVMe drive to do more. **PURPOSE** • Required: Scan through text files to retrieve specific info • Required: Summarize text size of a stan...
2025-08-12T21:11:07
https://www.reddit.com/r/LocalLLaMA/comments/1mol4ze/beginner_compatibility_with_old_gpus_best_good/
smartfon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mol4ze
false
null
t3_1mol4ze
/r/LocalLLaMA/comments/1mol4ze/beginner_compatibility_with_old_gpus_best_good/
false
false
self
1
null
Fuck Groq, Amazon, Azure, Nebius, fucking scammers
310
2025-08-12T21:04:34
https://i.redd.it/76rkrod6lnif1.jpeg
Charuru
i.redd.it
1970-01-01T00:00:00
0
{}
1mokyp0
false
null
t3_1mokyp0
/r/LocalLLaMA/comments/1mokyp0/fuck_groq_amazon_azure_nebius_fucking_scammers/
false
false
default
310
{'enabled': True, 'images': [{'id': '76rkrod6lnif1', 'resolutions': [{'height': 93, 'url': 'https://preview.redd.it/76rkrod6lnif1.jpeg?width=108&crop=smart&auto=webp&s=8cae31ce760a5f4a2189b0af65b369b394ae7b73', 'width': 108}, {'height': 186, 'url': 'https://preview.redd.it/76rkrod6lnif1.jpeg?width=216&crop=smart&auto=w...
Why is everyone suddenly loving gpt-oss today?
251
Everyone was hating on it and one fine day we got this.
2025-08-12T21:03:11
https://www.reddit.com/r/LocalLLaMA/comments/1mokxdv/why_is_everyone_suddenly_loving_gptoss_today/
Pro-editor-1105
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mokxdv
false
null
t3_1mokxdv
/r/LocalLLaMA/comments/1mokxdv/why_is_everyone_suddenly_loving_gptoss_today/
false
false
self
251
null
What would you run if you had access to an Azure GPU cluster and $100k in credits?
0
The cluster is optimized for large-scale training, fine-tuning, and inference. What would you run?
2025-08-12T20:58:20
https://www.reddit.com/r/LocalLLaMA/comments/1moksok/what_would_you_run_if_had_access_to_azure_gpu/
yellow_golf_ball
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1moksok
false
null
t3_1moksok
/r/LocalLLaMA/comments/1moksok/what_would_you_run_if_had_access_to_azure_gpu/
false
false
self
0
null
REINFORCE++-baseline is all you need in RLVR
0
# What is REINFORCE++-baseline? Simply put, REINFORCE++-baseline ([https://arxiv.org/abs/2501.03262](https://arxiv.org/abs/2501.03262)) replaces the Local std in GRPO with the Global batch std / Global advantage normalization, and uses the K2 KL estimator to compute the KL Loss. Because global batch std is significant...
2025-08-12T20:46:08
https://www.reddit.com/r/LocalLLaMA/comments/1mokh0k/reinforcebaseline_is_all_you_need_in_rlvr/
seventh_day123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mokh0k
false
null
t3_1mokh0k
/r/LocalLLaMA/comments/1mokh0k/reinforcebaseline_is_all_you_need_in_rlvr/
false
false
https://b.thumbs.redditm…6BTuz4Jaz8yo.jpg
0
null
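My reading of the description above, as a sketch (an illustration of the idea, not the paper's reference implementation): keep the per-prompt group mean as the baseline, but normalize advantages by the std of the whole batch instead of each group's std, and use the k2 estimator for the KL penalty.

```python
import numpy as np

# Advantages: subtract each prompt-group's mean reward, then divide by the
# std computed over the entire batch (the "global batch std" described above),
# rather than GRPO's per-group std.
def reinforce_pp_baseline_advantages(rewards, group_size):
    r = np.asarray(rewards, dtype=np.float64).reshape(-1, group_size)
    centered = r - r.mean(axis=1, keepdims=True)   # group-mean baseline
    return centered / (centered.std() + 1e-8)      # global normalization

# k2 KL estimator (Schulman's low-variance estimator family):
# 0.5 * (logp - logp_ref)^2 per token
def k2_kl(logp, logp_ref):
    return 0.5 * (np.asarray(logp) - np.asarray(logp_ref)) ** 2
```

The appeal of the global std is visible in the first function: a group whose rewards are all identical contributes zero advantage instead of a 0/0-style blow-up, which is the degenerate case per-group normalization hits on easy or impossible prompts.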
Why is Qwen3-4B thinking performing so badly for me on HumanEval
1
I try to download a variety of publishers and quants for models that I will run locally so I can see what works best for me. One of the tests I run is the evalplus humaneval. For whatever reason, the Qwen3 thinking models just perform so poorly compared to all the other qwen3 models I have. I am using LMStudio and f...
2025-08-12T20:43:43
https://i.redd.it/cuinoosw9nif1.png
Snorty-Pig
i.redd.it
1970-01-01T00:00:00
0
{}
1mokep3
false
null
t3_1mokep3
/r/LocalLLaMA/comments/1mokep3/why_is_qwen34b_thinking_performing_so_badly_for/
false
false
default
1
{'enabled': True, 'images': [{'id': 'cuinoosw9nif1', 'resolutions': [{'height': 79, 'url': 'https://preview.redd.it/cuinoosw9nif1.png?width=108&crop=smart&auto=webp&s=3c2cfc3962dbf377d603a41e9740f24321a3249b', 'width': 108}, {'height': 158, 'url': 'https://preview.redd.it/cuinoosw9nif1.png?width=216&crop=smart&auto=web...
Purchase Advice (RTX Pro 6000 96GB vs other options)
2
Looking for advice on where to best spend my money/to not waste it. I currently have one 3090 that I've been using for a few months of local inference and it's been good but I'm constantly limited by my VRAM. I was considering a 4x3090 setup but the more I think about it the more I feel a single 96gb card is the better...
2025-08-12T20:41:42
https://www.reddit.com/r/LocalLLaMA/comments/1mokctk/purchase_advice_rtx_pro_6000_96gb_vs_other_options/
trace_theory
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mokctk
false
null
t3_1mokctk
/r/LocalLLaMA/comments/1mokctk/purchase_advice_rtx_pro_6000_96gb_vs_other_options/
false
false
self
2
null
Best setup for local cursor replacement?
2
Trying to organize a workshop for people but don’t want to ask them to pay for Claude Code, Cursor, etc… and I’m trying to consider that not everyone will have a MacBook M4 I’m playing with Void as the IDE, for the model it seems like best options would either be free endpoints on OpenRouter or maybe CodeLlama? I’m no...
2025-08-12T20:41:00
https://www.reddit.com/r/LocalLLaMA/comments/1mokc5e/best_setup_for_local_cursor_replacement/
themessymiddle
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mokc5e
false
null
t3_1mokc5e
/r/LocalLLaMA/comments/1mokc5e/best_setup_for_local_cursor_replacement/
false
false
self
2
null
Open alternative to gpt-4o and gpt-4.5?
0
Hi, My friend has been using GPT-4o and GPT-4.5 through ChatGPT and they've been quite happy with them. They're mad now that OpenAI released GPT-5 and took away the older models. They reached out to me asking about good alternatives to 4o and 4.5. So which of the open models is closest to 4o and 4.5? They mainly use ...
2025-08-12T20:40:19
https://www.reddit.com/r/LocalLLaMA/comments/1mokbhl/open_alternative_to_gpt4o_and_gpt45/
Amgadoz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mokbhl
false
null
t3_1mokbhl
/r/LocalLLaMA/comments/1mokbhl/open_alternative_to_gpt4o_and_gpt45/
false
false
self
0
null
Drop-in Voice App Control for iOS with Local Models
1
Put together an iOS example that turns voice commands into app events using a simple audio graph. It handles mic input, voice activity detection, and speech-to-text (tested with Whisper, but works with other STT). The output is just events your app can respond to — could be local LLaMA agents, shortcuts, whatever. ...
2025-08-12T20:39:39
https://github.com/switchboard-sdk/voice-app-control-example-ios
trolleycrash
github.com
1970-01-01T00:00:00
0
{}
1mokasy
false
null
t3_1mokasy
/r/LocalLLaMA/comments/1mokasy/dropin_voice_app_control_for_ios_with_local_models/
false
false
default
1
{'enabled': False, 'images': [{'id': 'wdV3pICSh9PxX1eYgTCgzy8SCh-pisbrNPLDv6YojHw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wdV3pICSh9PxX1eYgTCgzy8SCh-pisbrNPLDv6YojHw.png?width=108&crop=smart&auto=webp&s=94c3f9992a140f58ae5f3caf685a49047163a156', 'width': 108}, {'height': 108, 'url': 'h...
Which OCR provider do you recommend?
0
What’s the best OCR provider you’d recommend? I’m looking for an alternative to Chunkr — ideally one that offers a free plan so I can try the service first
2025-08-12T20:30:22
https://www.reddit.com/r/LocalLLaMA/comments/1mok1w4/which_ocr_provider_do_you_recommend/
_coder23t8
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mok1w4
false
null
t3_1mok1w4
/r/LocalLLaMA/comments/1mok1w4/which_ocr_provider_do_you_recommend/
false
false
self
0
null
Llama Stack Office Hours with Meta
3
Hey everyone, Wanted to share that Meta's team is doing a technical deep dive on Llama Stack this **Thursday (Aug 14) at noon** ET with the **AI Alliance.** What's interesting: Kai Wu (Partner Engineer at Meta) is showing how they deploy Llama models across different environments with one framework.  They're covering...
2025-08-12T20:19:16
https://www.reddit.com/r/LocalLLaMA/comments/1mojr7s/llama_stack_office_hours_with_meta/
AI_Alliance
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mojr7s
false
null
t3_1mojr7s
/r/LocalLLaMA/comments/1mojr7s/llama_stack_office_hours_with_meta/
false
false
self
3
{'enabled': False, 'images': [{'id': 'QGZle0oRKuiQJMc9YaUoWO9-wUx1dt4YpRIF_qy4L2M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/QGZle0oRKuiQJMc9YaUoWO9-wUx1dt4YpRIF_qy4L2M.png?width=108&crop=smart&auto=webp&s=c805d272ee49218494431b73e4c1f3ab40959016', 'width': 108}, {'height': 108, 'url': 'h...
More parameters and less quants or less parameters and more quants?
3
For example there are gemma, gpt oss, mistral and qwen models with 20-30b parameters, though my pc power is only capable to hande 1 or 2 quants versions of those models. Meanwhile, i can use gemma, qwen and deepseek with less parameters (8-14) and higher quantisation (5-8). Which would be better, if there's any differe...
2025-08-12T20:12:31
https://www.reddit.com/r/LocalLLaMA/comments/1mojklk/more_parameters_and_less_quants_or_less/
Imaginary_Bread9711
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mojklk
false
null
t3_1mojklk
/r/LocalLLaMA/comments/1mojklk/more_parameters_and_less_quants_or_less/
false
false
self
3
null
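The trade-off in this question can at least be sized numerically: weight footprint is params × bits / 8, so a ~30B model at a very low quant and a ~14B model at a high quant can occupy nearly the same memory (the bits-per-weight averages below are my assumptions for typical GGUF quant levels, not exact figures):

```python
# Rough weight footprint in GB: params (billions) * bits per weight / 8.
# Useful for comparing "bigger model, lower quant" vs "smaller, higher quant"
# under a fixed memory budget.
def weight_gb(params_b, bits_per_weight):
    return params_b * bits_per_weight / 8

print(round(weight_gb(30, 2.6), 1))  # ~30B at a Q2_K-ish average: 9.8 GB
print(round(weight_gb(14, 5.7), 1))  # ~14B at a Q5_K-ish average: 10.0 GB
```

Both land near 10 GB, so the memory budget alone doesn't decide it. The common rule of thumb is that the larger model wins as long as it stays at a moderate quant (roughly 3–4+ bits/weight), while quality tends to degrade sharply below ~3 bits, which is exactly the regime the 1–2-quant 30B option would be in.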
What can i run at above 10 TPS
20
2025-08-12T19:50:17
https://i.redd.it/oxz20dbx7nif1.jpeg
TechLevelZero
i.redd.it
1970-01-01T00:00:00
0
{}
1moiz1p
false
null
t3_1moiz1p
/r/LocalLLaMA/comments/1moiz1p/what_can_i_run_at_above_10_tps/
false
false
default
20
{'enabled': True, 'images': [{'id': 'oxz20dbx7nif1', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/oxz20dbx7nif1.jpeg?width=108&crop=smart&auto=webp&s=ad7393f8d533e649366aa4efb06565591c2162be', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/oxz20dbx7nif1.jpeg?width=216&crop=smart&auto=...
Building a web search engine from scratch in two months with 3 billion neural embeddings
1
2025-08-12T19:43:42
https://blog.wilsonl.in/search-engine/
ChiliPepperHott
blog.wilsonl.in
1970-01-01T00:00:00
0
{}
1moisqc
false
null
t3_1moisqc
/r/LocalLLaMA/comments/1moisqc/building_a_web_search_engine_from_scratch_in_two/
false
false
default
1
null
LLMs’ reasoning abilities are a “brittle mirage”
62
Probably not a surprise to anyone who has read the reasoning traces. I'm still hoping that AIs can crack true reasoning, but I'm not sure if the current architectures are enough to get us there.
2025-08-12T19:35:45
https://arstechnica.com/ai/2025/08/researchers-find-llms-are-bad-at-logical-inference-good-at-fluent-nonsense/
DeltaSqueezer
arstechnica.com
1970-01-01T00:00:00
0
{}
1moil3f
false
null
t3_1moil3f
/r/LocalLLaMA/comments/1moil3f/llms_reasoning_abilities_are_a_brittle_mirage/
false
false
default
62
{'enabled': False, 'images': [{'id': 'KeNgUIkCyuq-82qF7JOu0fDZZcus9vvW0waiRX3EGec', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/KeNgUIkCyuq-82qF7JOu0fDZZcus9vvW0waiRX3EGec.jpeg?width=108&crop=smart&auto=webp&s=49f5ec57cafcf1c8e6cbeb58cc03104faa761e05', 'width': 108}, {'height': 121, 'url': '...
LLMs to imitate your texting patterns?
1
As a personal experiment, I've been looking for an LLM that could become me, in a sense? Something that could conceivably trick someone into thinking they were talking to me. I have a couple of sample chats from people who agreed to help me, but even with all of that, most LLMs just fail miserably when I ask them to i...
2025-08-12T19:23:58
https://www.reddit.com/r/LocalLLaMA/comments/1moi9nw/llms_to_imitate_your_texting_patterns/
-Smothie-
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1moi9nw
false
null
t3_1moi9nw
/r/LocalLLaMA/comments/1moi9nw/llms_to_imitate_your_texting_patterns/
false
false
self
1
null
1/4 Future Proof Rig. How much RAM & GPU needed for 250B+ Models?
0
Trying to build 1/4 Future Proof Rig. How much RAM & GPU needed for 250B+ Models? **Use cases** : Text generation, Coding, Content creation, Writing, Audio generation, Image generation, Video generation, Learning, etc., Below are the models I want to use: |Model (GGUF)|Quant - Size|Context Length| |:-|:-|:-| |GLM-4....
2025-08-12T19:19:45
https://www.reddit.com/r/LocalLLaMA/comments/1moi5h8/14_future_proof_rig_how_much_ram_gpu_needed_for/
pmttyji
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1moi5h8
false
null
t3_1moi5h8
/r/LocalLLaMA/comments/1moi5h8/14_future_proof_rig_how_much_ram_gpu_needed_for/
false
false
self
0
null
Power Efficient Local AI using USB 4 egpus?
4
I have been thinking about moving some of my inference GPUs from direct PCIE to USB4. My current setup consists of one 5090, two 3090s and one M40 24GB, with my usage the peak power draw isn't a problem, I am more worried about the idle power consumption each GPU idles around 15 to 30W which can add up to 120W constant ...
2025-08-12T18:51:00
https://www.reddit.com/r/LocalLLaMA/comments/1mohdbr/power_efficient_local_ai_using_usb_4_egpus/
MaruluVR
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mohdbr
false
null
t3_1mohdbr
/r/LocalLLaMA/comments/1mohdbr/power_efficient_local_ai_using_usb_4_egpus/
false
false
self
4
null
DSPy BAML output format increases reliability of structured outputs by ~5% for smaller models vs JSON Schema
6
https://preview.redd.it/…ol-calling APIs.
2025-08-12T18:45:41
https://www.reddit.com/r/LocalLLaMA/comments/1moh7zx/dspy_baml_output_format_increases_reliability_of/
fluxwave
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1moh7zx
false
null
t3_1moh7zx
/r/LocalLLaMA/comments/1moh7zx/dspy_baml_output_format_increases_reliability_of/
false
false
https://b.thumbs.redditm…gSwZDN5vwWBc.jpg
6
{'enabled': False, 'images': [{'id': '1Zp6GTci69yOkEbyQHZFHBJF89zbEdrOI3hN753vNls', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1Zp6GTci69yOkEbyQHZFHBJF89zbEdrOI3hN753vNls.png?width=108&crop=smart&auto=webp&s=49cf0df87667348a653f6bf43de26543aa1b3cc3', 'width': 108}, {'height': 108, 'url': 'h...
OpenAI GPT-OSS-120b is an excellent model
188
I'm kind of blown away right now. I downloaded this model not expecting much, as I am an avid fan of the qwen3 family (particularly, the new qwen3-235b-2507 variants). But this OpenAI model is really, really good. For coding, it has nailed just about every request I've sent its way, and that includes things qwen3-235...
2025-08-12T18:35:17
https://www.reddit.com/r/LocalLLaMA/comments/1mogxpr/openai_gptoss120b_is_an_excellent_model/
xxPoLyGLoTxx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mogxpr
false
null
t3_1mogxpr
/r/LocalLLaMA/comments/1mogxpr/openai_gptoss120b_is_an_excellent_model/
false
false
self
188
null
If Grok-2 is open sourced, what should users do next?
0
Can you answer correctly?
2025-08-12T18:31:31
https://www.reddit.com/r/LocalLLaMA/comments/1mogtwf/if_grok2_is_open_sourced_what_should_users_do_next/
Brilliant_Stock_5137
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mogtwf
false
null
t3_1mogtwf
/r/LocalLLaMA/comments/1mogtwf/if_grok2_is_open_sourced_what_should_users_do_next/
false
false
self
0
null
Self-host open-source LLM agent sandbox on your own cloud
0
2025-08-12T18:25:37
https://blog.skypilot.co/skypilot-llm-sandbox/
alex000kim
blog.skypilot.co
1970-01-01T00:00:00
0
{}
1mogo0o
false
null
t3_1mogo0o
/r/LocalLLaMA/comments/1mogo0o/selfhost_opensource_llm_agent_sandbox_on_your_own/
false
false
default
0
{'enabled': False, 'images': [{'id': 'a9RGthwTzMaZYxjCjAZT6Z9njK_dAwDFBNLK994GYRM', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/a9RGthwTzMaZYxjCjAZT6Z9njK_dAwDFBNLK994GYRM.png?width=108&crop=smart&auto=webp&s=707ffb8164d8554d196d6381642771b478e0543f', 'width': 108}, {'height': 121, 'url': 'h...
Why is the bnb quantization of gpt-oss-20b bigger than the actual model
2
So I just stumbled across the fact that the the model quantized model [https://huggingface.co/unsloth/gpt-oss-20b-bnb-4bit/tree/main](https://huggingface.co/unsloth/gpt-oss-20b-bnb-4bit/tree/main) is way bigger than the original model https://huggingface.co/openai/gpt-oss-20b/tree/main. I genuinely have no explanatio...
2025-08-12T18:11:05
https://www.reddit.com/r/LocalLLaMA/comments/1mog9nk/why_is_the_bnb_quantization_of_gptoss20b_bigger/
StayStonk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mog9nk
false
null
t3_1mog9nk
/r/LocalLLaMA/comments/1mog9nk/why_is_the_bnb_quantization_of_gptoss20b_bigger/
false
false
self
2
{'enabled': False, 'images': [{'id': 'UE6H-mSVMqLDtcVmikoz-1mbtVuphBLC_II46F0rZO4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/UE6H-mSVMqLDtcVmikoz-1mbtVuphBLC_II46F0rZO4.png?width=108&crop=smart&auto=webp&s=4249b623c83de585bd32c6d21acf7ef2c0b61b39', 'width': 108}, {'height': 116, 'url': 'h...
Ollama new chat app and MLX
0
Does the new ollama download MLX models when it's installed on Mac? I really like that ollama has a web search, I don't understand why in all this time LMStudio hasn't implemented it? What do you guys use as your preferred web search tool? I did use perplexica for a while but I don't like having to constantly host ...
2025-08-12T18:10:28
https://www.reddit.com/r/LocalLLaMA/comments/1mog920/ollama_new_chat_app_and_mlx/
BalaelGios
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mog920
false
null
t3_1mog920
/r/LocalLLaMA/comments/1mog920/ollama_new_chat_app_and_mlx/
false
false
self
0
null
GLM-4.5V model locally for computer use
71
On OSWorld-V, it scores 35.8% - beating UI-TARS-1.5, matching Claude-3.7-Sonnet-20250219, and setting SOTA for fully open-source computer-use models. Run it with Cua either: Locally via Hugging Face Remotely via OpenRouter Github : https://github.com/trycua Docs + examples: https://docs.trycua.com/docs/agent-sdk/sup...
2025-08-12T18:05:51
https://v.redd.it/i38zpvyapmif1
Impressive_Half_2819
v.redd.it
1970-01-01T00:00:00
0
{}
1mog4ep
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/i38zpvyapmif1/DASHPlaylist.mpd?a=1757613967%2COTliMDUzMTA2ZDhmZDY2OGMxNTU5NGU3ZTRlYzFiMzgyNmI1YTBiYWYzNTRkYmMxNGE3ZjNkN2Q4OTUxNWMwNQ%3D%3D&v=1&f=sd', 'duration': 34, 'fallback_url': 'https://v.redd.it/i38zpvyapmif1/DASH_720.mp4?source=fallback', 'ha...
t3_1mog4ep
/r/LocalLLaMA/comments/1mog4ep/glm45v_model_locally_for_computer_use/
false
false
https://external-preview…aaccb4270887e551
71
{'enabled': False, 'images': [{'id': 'NmN0MWhvb2FwbWlmMZtBXPQuBBghVYkEG23VKH2rdUK_y7uZuqgwTRJo1CZN', 'resolutions': [{'height': 46, 'url': 'https://external-preview.redd.it/NmN0MWhvb2FwbWlmMZtBXPQuBBghVYkEG23VKH2rdUK_y7uZuqgwTRJo1CZN.png?width=108&crop=smart&format=pjpg&auto=webp&s=11bc711626d2ba7b743f89cdb2881c7ff737f...
Easily Accessing Reasoning Content of GPT-OSS across different providers?
0
Anyone else noticing how tricky it is to compare models across providers? I was running gpt-oss locally on Ollama and LM Studio and also a hosted version on Groq, but each provider was putting the reasoning content in different places in their response, even though they're all technically using the OpenAI Completions A...
2025-08-12T18:01:21
https://blog.mozilla.ai/standardized-reasoning-content-a-first-look-at-using-openais-gpt-oss-on-multiple-providers-using-any-llm/
river_otter412
blog.mozilla.ai
1970-01-01T00:00:00
0
{}
1mofzpz
false
null
t3_1mofzpz
/r/LocalLLaMA/comments/1mofzpz/easily_accessing_reasoning_content_of_gptoss/
false
false
default
0
{'enabled': False, 'images': [{'id': 'vhxrmlTTxqXimqQWgl9z94WChwX19Lv7i9a0pW7uyj4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/vhxrmlTTxqXimqQWgl9z94WChwX19Lv7i9a0pW7uyj4.jpeg?width=108&crop=smart&auto=webp&s=3ad3a8c3c15902badbbf17f1a4b4aafae1942f95', 'width': 108}, {'height': 162, 'url': '...
About to purchase the RTX Pro 6000 Blackwell MaxQ right now but I want to make a last minute inquiry before I do. How fast do the models run?
0
I just wanna temper my expectations before I get buyer's remorse. I've been looking online at different posts getting different benchmarks for different inference engines (vLLM, llama.cpp, ollama, etc.) and I'm kind of being a nervous wreck right now because I want to buy this card to get a speed boost for local models...
2025-08-12T17:57:21
https://www.reddit.com/r/LocalLLaMA/comments/1mofvmk/about_to_purchase_the_rtx_pro_6000_blackwell_maxq/
swagonflyyyy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mofvmk
false
null
t3_1mofvmk
/r/LocalLLaMA/comments/1mofvmk/about_to_purchase_the_rtx_pro_6000_blackwell_maxq/
false
false
self
0
null
Spoiler, it's not unique: Lovable alternative - my USP is building it solely based on user feedback
0
I've used Cursor, Lovable, Bolt, & GitHub Copilot and for attempted serious projects, I always reached a point where I had to know how to code. I’m not really an expert dev, which slowed me down so much. Over the past few months, my brother and I built [Shipper.now](https://shipper.now/), a truly no-code app builder ...
2025-08-12T17:55:18
https://v.redd.it/v7hoqdwbnmif1
chdavidd
v.redd.it
1970-01-01T00:00:00
0
{}
1moftk5
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/v7hoqdwbnmif1/DASHPlaylist.mpd?a=1757613333%2CM2ZjMjgxZGY2Mzg2NDU3Y2YzMjdiNDg0OGY1MjZiYmYxNDk2ZDM0ZjFmMGM3MmRiZmQ0NzZmMmFjZjhkZjhiOA%3D%3D&v=1&f=sd', 'duration': 21, 'fallback_url': 'https://v.redd.it/v7hoqdwbnmif1/DASH_1080.mp4?source=fallback', 'h...
t3_1moftk5
/r/LocalLLaMA/comments/1moftk5/spoiler_its_not_unique_lovable_alternative_my_usp/
false
false
spoiler
0
{'enabled': False, 'images': [{'id': 'dnk0OGdld2JubWlmMW9r91elTWKwPdG7HPit5eV5UZuMcmi9MP0pXcZJKat0', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/dnk0OGdld2JubWlmMW9r91elTWKwPdG7HPit5eV5UZuMcmi9MP0pXcZJKat0.png?width=108&crop=smart&format=pjpg&auto=webp&s=4b081ee34083c8a3163409b417fe6074fde8f...
Gemini 2.5 Pro is surprisingly brittle and frustratingly unaware of its own actions
1
So I'm a research programmer, both working on and with LLMs for various tasks. I sometimes switch between different models if I'm struggling with a particularly difficult task and need some type of semantic search to look for a particular piece of code or setting which will solve an issue. I've found that when it come...
2025-08-12T17:50:39
https://www.reddit.com/r/LocalLLaMA/comments/1mofoyv/gemini_25_pro_is_surprisingly_brittle_and/
kaput__
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mofoyv
false
null
t3_1mofoyv
/r/LocalLLaMA/comments/1mofoyv/gemini_25_pro_is_surprisingly_brittle_and/
false
false
self
1
null
Is it true uncensored versions of models perform better in benchmarks?
0
I keep hearing about how models are wasting so much thinking on policy as opposed to the right answer—and a while ago I came across an uncensored model that claimed it performed better than its predecessor in benchmarks. With this knowledge shouldn't the first thing we be doing when a new open source model comes out to ...
2025-08-12T17:37:27
https://www.reddit.com/r/LocalLLaMA/comments/1mofbuz/is_it_true_uncensored_versions_of_models_perform/
Round_Ad_5832
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mofbuz
false
null
t3_1mofbuz
/r/LocalLLaMA/comments/1mofbuz/is_it_true_uncensored_versions_of_models_perform/
false
false
self
0
null
Using LLaMA to Rate Real Estate Projects, Worth It?
0
I’m interning at a real estate agency and working with a CSV of 5,000 upcoming projects. I want to have an LLM read each project description and score the building material quality from 1–100. I’ll run llama3.1:8b (anything bigger is too slow at 15 tps). Has anyone done something similar, and was the data reliable?
2025-08-12T17:17:07
https://www.reddit.com/r/LocalLLaMA/comments/1moerut/using_llama_to_rate_real_estate_projects_worth_it/
Chemical_Elk7746
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1moerut
false
null
t3_1moerut
/r/LocalLLaMA/comments/1moerut/using_llama_to_rate_real_estate_projects_worth_it/
false
false
self
0
null
Qwen3 8B Q8_K_XL VS Qwen3 14B Q5_K_M
10
Hello everyone, this is my first post on Reddit :) I have never run any LLM model locally before. I have always used the API or chat versions from OpenAI and Google. Recently, for a relatively simple text processing task, I accidentally used over 8M input and 10M output tokens. This resulted in an unexpected financial...
2025-08-12T17:15:24
https://www.reddit.com/r/LocalLLaMA/comments/1moeq6p/qwen3_8b_q8_k_xl_vs_qwen3_14b_q5_k_m/
Normal-Phone7762
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1moeq6p
false
null
t3_1moeq6p
/r/LocalLLaMA/comments/1moeq6p/qwen3_8b_q8_k_xl_vs_qwen3_14b_q5_k_m/
false
false
self
10
null
GPT-5 Style Router, but for any LLM including local.
405
GPT-5 launched a few days ago, which essentially wraps different models underneath via a real-time router. In June, we published our [preference-aligned routing model](https://huggingface.co/katanemo/Arch-Router-1.5B) and [framework](https://github.com/katanemo/archgw) for developers so that they can build a unified ex...
2025-08-12T17:04:22
https://i.redd.it/vvlzu888emif1.png
AdditionalWeb107
i.redd.it
1970-01-01T00:00:00
0
{}
1moefc2
false
null
t3_1moefc2
/r/LocalLLaMA/comments/1moefc2/gpt5_style_router_but_for_any_llm_including_local/
false
false
default
405
{'enabled': True, 'images': [{'id': 'vvlzu888emif1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/vvlzu888emif1.png?width=108&crop=smart&auto=webp&s=f29cd7379a463679550fbf7e84a3f7f3c2f66374', 'width': 108}, {'height': 120, 'url': 'https://preview.redd.it/vvlzu888emif1.png?width=216&crop=smart&auto=web...
MCP Vulnerabilities Every Developer Should Know
98
I have been digging into the MCP implementations lately, especially around security and noticed some serious risks. # The Tool Description Injection Issue This happens when MCP servers hide malicious instructions inside tool descriptions that AI agents read. These descriptions go straight into the AI’s context. ...
2025-08-12T17:03:17
https://www.reddit.com/r/LocalLLaMA/comments/1moee82/mcp_vulnerabilities_every_developer_should_know/
anmolbaranwal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1moee82
false
null
t3_1moee82
/r/LocalLLaMA/comments/1moee82/mcp_vulnerabilities_every_developer_should_know/
false
false
self
98
{'enabled': False, 'images': [{'id': 'xjpSfMqFJN636ZC--w4Xh4_nijURjUdciUc8m1MYbYM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/xjpSfMqFJN636ZC--w4Xh4_nijURjUdciUc8m1MYbYM.png?width=108&crop=smart&auto=webp&s=ea8bd235ddec234f4ca95c3725a3bbd452bb6616', 'width': 108}, {'height': 216, 'url': '...
Drummer's Gemma 3 R1 27B/12B/4B v1 - A Thinking Gemma!
188
27B: [https://huggingface.co/TheDrummer/Gemma-3-R1-27B-v1](https://huggingface.co/TheDrummer/Gemma-3-R1-27B-v1) 12B: [https://huggingface.co/TheDrummer/Gemma-3-R1-12B-v1](https://huggingface.co/TheDrummer/Gemma-3-R1-12B-v1) 4B: [https://huggingface.co/TheDrummer/Gemma-3-R1-4B-v1](https://huggingface.co/TheDrummer/Gem...
2025-08-12T16:59:37
https://huggingface.co/TheDrummer/Gemma-3-R1-27B-v1
TheLocalDrummer
huggingface.co
1970-01-01T00:00:00
0
{}
1moeahb
false
null
t3_1moeahb
/r/LocalLLaMA/comments/1moeahb/drummers_gemma_3_r1_27b12b4b_v1_a_thinking_gemma/
false
false
https://external-preview…6d9f2beae076a8c6
188
{'enabled': False, 'images': [{'id': 'Cdc0fJRoo0tax05rGkDc_B2BuW-4G4E4XliXS6nqYRc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Cdc0fJRoo0tax05rGkDc_B2BuW-4G4E4XliXS6nqYRc.png?width=108&crop=smart&auto=webp&s=f3c8b5a9f911fd1c95d1e3d3268043a8bc805249', 'width': 108}, {'height': 116, 'url': 'h...
Kokoro vs Kokoro-ONNX for TTS CPU
6
Hi, Any advice on which of these is: 1. Faster 2. Less latency to first word under CPU only? I'd also appreciate, in general, what the advantage is of the ONNX version besides a smaller model size. Thank you!
2025-08-12T16:39:40
https://www.reddit.com/r/LocalLLaMA/comments/1modqv5/kokoro_vs_kokoroonnx_for_tts_cpu/
MLAWest
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1modqv5
false
null
t3_1modqv5
/r/LocalLLaMA/comments/1modqv5/kokoro_vs_kokoroonnx_for_tts_cpu/
false
false
self
6
null
Google launches chess tournament for AI models
0
2025-08-12T16:32:46
https://sigma.world/news/google-launches-chess-tournament-for-ai-models/
JohannLoewen
sigma.world
1970-01-01T00:00:00
0
{}
1modk6f
false
null
t3_1modk6f
/r/LocalLLaMA/comments/1modk6f/google_launches_chess_tournament_for_ai_models/
false
false
default
0
{'enabled': False, 'images': [{'id': 'aLeV7DqvCyZx7EV3in8urKXNKqU5mI4fThOwP7Xi4SE', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/aLeV7DqvCyZx7EV3in8urKXNKqU5mI4fThOwP7Xi4SE.jpeg?width=108&crop=smart&auto=webp&s=f5f36d9866a4bba6b298cd3480196933e727d1b2', 'width': 108}, {'height': 121, 'url': '...
We need a Reasoning Effort standard (for benchmarking and reporting)
14
Benchmarking for reasoning LMs currently ignores the cost of thinking, so models can “win” by spending more tokens, retries, tool calls, and wall-clock time. This prevents you from actually comparing capability and spend, leads to irreproducible claims, and incentive perverse practices to overthink and resample until ...
2025-08-12T16:31:11
https://www.reddit.com/r/LocalLLaMA/comments/1modio0/we_need_a_reasoning_effort_standard_for/
Balance-
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1modio0
false
null
t3_1modio0
/r/LocalLLaMA/comments/1modio0/we_need_a_reasoning_effort_standard_for/
false
false
self
14
null
LLM Pigeon now has web search and it's not bad at all.
0
Hi guys! The new updates to the LLM pigeon companion apps is out and have a very improved web search functionality. For those who didn't catch my previous posts, LLM Pigeon is an iOS app that works with a companion MacOS app called LLM Pigeon Server. They are both free and open source. They collect no data (it's ju...
2025-08-12T16:25:17
https://v.redd.it/kbwzofnt6mif1
Valuable-Run2129
v.redd.it
1970-01-01T00:00:00
0
{}
1modcp8
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/kbwzofnt6mif1/DASHPlaylist.mpd?a=1757607930%2CNzQ2NTMxNDc1YzllYzg2NmY0NWE2OGE5ZjJlZmQzOGRmNTA4NjBmZGMxYTU1Mjg2ODJhYThhNDM1NmUwZjgzZQ%3D%3D&v=1&f=sd', 'duration': 83, 'fallback_url': 'https://v.redd.it/kbwzofnt6mif1/DASH_1080.mp4?source=fallback', 'h...
t3_1modcp8
/r/LocalLLaMA/comments/1modcp8/llm_pigeon_now_has_web_search_and_its_not_bad_at/
false
false
https://external-preview…49036a3e7e21e03c
0
{'enabled': False, 'images': [{'id': 'dml4dGhnbnQ2bWlmMf5eJ8EuK9DQov97FIr7-ySetGR97QfZZTQNEQ7ywA9Z', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dml4dGhnbnQ2bWlmMf5eJ8EuK9DQov97FIr7-ySetGR97QfZZTQNEQ7ywA9Z.png?width=108&crop=smart&format=pjpg&auto=webp&s=07fae6e37e72edbd26841c7808fa467862bc5...
Local Kokoro & Parakeet in 1 Command Line — Fast ASR & TTS on Mac (MLX)
11
**ASR & TTS** model support are missing in popular local AI tools (e.g. Ollama, LMStudio) but they are very useful for on device usage too! We fixed that. We’ve made it dead simple to run **Parakeet** (ASR) and **Kokoro** (TTS) in **MLX** format on Mac — so you can easiy play with these 2 SOTA model directly on devic...
2025-08-12T16:21:45
https://www.reddit.com/r/LocalLLaMA/comments/1mod98h/local_kokoro_parakeet_in_1_command_line_fast_asr/
Invite_Nervous
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mod98h
false
null
t3_1mod98h
/r/LocalLLaMA/comments/1mod98h/local_kokoro_parakeet_in_1_command_line_fast_asr/
false
false
self
11
{'enabled': False, 'images': [{'id': 'C1uRa9KjPXNK___BxsaejGE6qofqMhY-LCk10amPpBI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/C1uRa9KjPXNK___BxsaejGE6qofqMhY-LCk10amPpBI.png?width=108&crop=smart&auto=webp&s=1f3a673f6c0b997e0643679d9389f726b2bac976', 'width': 108}, {'height': 108, 'url': 'h...
Small Experiment using RX 9070 and Qwen3-8B on Existing Code Base
0
Noob here. Been intrigued with the capability of LLMs as of late, so I decided to jump into this rabbit hole. I want to figure out, how useful AI would be if it were to be assigned a simple task. Not sure if anyone has posted their experiment results here in detail, so I think I want to contribute one. I tried to see ...
2025-08-12T16:18:38
https://www.reddit.com/r/LocalLLaMA/comments/1mod63t/small_experiment_using_rx_9070_and_qwen38b_on/
alvin-nt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mod63t
false
null
t3_1mod63t
/r/LocalLLaMA/comments/1mod63t/small_experiment_using_rx_9070_and_qwen38b_on/
false
false
self
0
null
gpt-oss-20b performance
0
Running **gpt-oss-20b** (MXFP4) using LMStudio 0.3.22 on a 2014 Macbook Pro (Intel i5-4278U) with only 8GB of RAM, I have achieved an *amazing* 1.05 tokens per second. Of course the GPU is completely unusable since it's an Intel Iris Graphics 5100. Running the same prompt with **gpt-oss-20b** (Q4\_0) I got 0.68 tokens...
2025-08-12T16:16:37
https://www.reddit.com/r/LocalLLaMA/comments/1mod44h/gptoss20b_performance/
Responsible-Pulse
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mod44h
false
null
t3_1mod44h
/r/LocalLLaMA/comments/1mod44h/gptoss20b_performance/
false
false
self
0
null
We've started something we believe in, and we're excited to share it: Gonka!
1
[removed]
2025-08-12T16:16:34
https://discord.gg/wpmgjNNn
autoimago
discord.gg
1970-01-01T00:00:00
0
{}
1mod42n
false
null
t3_1mod42n
/r/LocalLLaMA/comments/1mod42n/weve_started_something_we_believe_in_and_were/
false
false
default
1
null
I have just launched Sheet0. An AI Data Agent designed for teams and individuals for accurate data sheets.
16
[Sheet0.com](http://Sheet0.com) borrows the "Level 4" concept from self-driving cars, lets you simply describe your goal, then it autonomously. The goal is simple: to give you a reliable, customizable, and scalable AI data agent that goes from natural language to trustworthy, ready-to-use data. You are welcome to try...
2025-08-12T16:16:05
https://i.redd.it/ylh1vzfe5mif1.png
LatterEstimate6714
i.redd.it
1970-01-01T00:00:00
0
{}
1mod3lu
false
null
t3_1mod3lu
/r/LocalLLaMA/comments/1mod3lu/i_have_just_launched_sheet0_an_ai_data_agent/
false
false
default
16
{'enabled': True, 'images': [{'id': 'ylh1vzfe5mif1', 'resolutions': [{'height': 68, 'url': 'https://preview.redd.it/ylh1vzfe5mif1.png?width=108&crop=smart&auto=webp&s=24df782732890a7f8d849bd13c7e4d136696ffba', 'width': 108}, {'height': 137, 'url': 'https://preview.redd.it/ylh1vzfe5mif1.png?width=216&crop=smart&auto=web...
Apple released Embedding Atlas - LLM embedding visualizer
1
[removed]
2025-08-12T16:14:06
https://i.redd.it/677djmdd5mif1.jpeg
ArtZab
i.redd.it
1970-01-01T00:00:00
0
{}
1mod1py
false
null
t3_1mod1py
/r/LocalLLaMA/comments/1mod1py/apple_released_embedding_atlas_llm_embedding/
false
false
default
1
{'enabled': True, 'images': [{'id': '677djmdd5mif1', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/677djmdd5mif1.jpeg?width=108&crop=smart&auto=webp&s=5bb7bed789c562730687e1fd86bf2c31907a947a', 'width': 108}, {'height': 129, 'url': 'https://preview.redd.it/677djmdd5mif1.jpeg?width=216&crop=smart&auto=w...
Just got a 5090, what's my best option for running a coding model comparable in performance to Claude code? (I understand it won't be nearly as good, just hoping to get within 10% performance)
0
So I'm on a 9950x3d, 96gb of ddr5 and a shiny new rtx 5090. (I have a 4090 I could add in also which is an option if it would increase performance in any appreciable way) I've been using Claude code, and its been great. I'm trying to run a quant that would get me within 10% performance of Claude, I'm probably wildly out...
2025-08-12T16:13:45
https://www.reddit.com/r/LocalLLaMA/comments/1mod1dh/just_got_a_5090_whats_my_best_option_for_running/
definetlyrandom
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mod1dh
false
null
t3_1mod1dh
/r/LocalLLaMA/comments/1mod1dh/just_got_a_5090_whats_my_best_option_for_running/
false
false
self
0
null
What is going on Ollama??
8
**Tested on:** Macbook Pro M4 Max - 128 GB RAM **Prompt:** Write a 200 word story **Inference speed**: |Ollama|LM Studio| |:-|:-| |38.30 tokens/s|65.48 tokens/s| I tested the same LLM on both platforms with context sizes of 4,096 and 40,000 tokens, as well as varying reasoning efforts. With these settings, the ...
2025-08-12T16:07:50
https://www.reddit.com/r/LocalLLaMA/comments/1mocvoh/what_is_going_on_ollama/
purealgo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mocvoh
false
null
t3_1mocvoh
/r/LocalLLaMA/comments/1mocvoh/what_is_going_on_ollama/
false
false
self
8
null
Neat embedding visualizer tool released by Apple
1
[removed]
2025-08-12T15:46:47
https://i.redd.it/wtpty40h0mif1.jpeg
ArtZab
i.redd.it
1970-01-01T00:00:00
0
{}
1mocba9
false
null
t3_1mocba9
/r/LocalLLaMA/comments/1mocba9/neat_embedding_visualizer_tool_released_by_apple/
false
false
default
1
{'enabled': True, 'images': [{'id': 'wtpty40h0mif1', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/wtpty40h0mif1.jpeg?width=108&crop=smart&auto=webp&s=540c882a63065964dfef4533ba6045a44cc6aeec', 'width': 108}, {'height': 129, 'url': 'https://preview.redd.it/wtpty40h0mif1.jpeg?width=216&crop=smart&auto=w...
Cost-effective home server suggestion
4
Hello, Finally I have the time and resources to follow my hobby; and I'm finally making my own home server! Since I have pretty much no old hardware to recycle, I want to buy something that's not too much trash tier. sadly where I live energy is quiet expensive so I would love something energy efficient. My target ...
2025-08-12T15:44:39
https://www.reddit.com/r/LocalLLaMA/comments/1moc94q/costeffective_home_server_suggestion/
Ereptile-Disruption
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1moc94q
false
null
t3_1moc94q
/r/LocalLLaMA/comments/1moc94q/costeffective_home_server_suggestion/
false
false
self
4
null
Where's Prima.cpp??
5
Anyone know what happened to [https://github.com/Lizonghang/prima.cpp](https://github.com/Lizonghang/prima.cpp) it 404. This was the official repo for the prima.cpp paper right? [https://arxiv.org/abs/2504.08791](https://arxiv.org/abs/2504.08791)
2025-08-12T15:33:34
https://www.reddit.com/r/LocalLLaMA/comments/1mobyjw/wheres_primacpp/
Mysterious_While358
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mobyjw
false
null
t3_1mobyjw
/r/LocalLLaMA/comments/1mobyjw/wheres_primacpp/
false
false
self
5
null
Building vllm from source for OSS support on Ampere + benchmarks
7
ampere support for oss was just merged into vllm, figured I would share since everyone seems to be struggling with vllm lately. my setup: Ubuntu 24.04 server, 3090s, Nvidia 575 driver, Cuda 12.4 (should be 12.8 or 12.9 with 575, my cuda is mismatched) mkdir vllm-src cd vllm-src python3.12 -m venv myenv s...
2025-08-12T15:29:30
https://www.reddit.com/r/LocalLLaMA/comments/1mobuo7/building_vllm_from_source_for_oss_support_on/
Conscious_Cut_6144
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mobuo7
false
null
t3_1mobuo7
/r/LocalLLaMA/comments/1mobuo7/building_vllm_from_source_for_oss_support_on/
false
false
self
7
null
Sandboxed Code Execution with GPU Support
9
We built a secure sandbox environment to run arbitrary code on GPUs. **The Problem**: if you’re building an agent, you need a way to run code: this includes setting resource limits, having file system access, and running arbitrary commands. And it’s important to do this securely. You don’t want to unknowingly run mali...
2025-08-12T15:24:55
https://www.reddit.com/r/LocalLLaMA/comments/1mobqcu/sandboxed_code_execution_with_gpu_support/
velobro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mobqcu
false
null
t3_1mobqcu
/r/LocalLLaMA/comments/1mobqcu/sandboxed_code_execution_with_gpu_support/
false
false
self
9
{'enabled': False, 'images': [{'id': 'TsfPuyF9xFQ2q_5KdhI8njSOC8wpdepaBNlHKv-l78U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/TsfPuyF9xFQ2q_5KdhI8njSOC8wpdepaBNlHKv-l78U.png?width=108&crop=smart&auto=webp&s=afbada32422d9162f8f92e0d9cdd691436e31546', 'width': 108}, {'height': 108, 'url': 'h...
Cua-Agent v0.4.11 now supports the new GLM-4.5V GUI agent model, and compositional Computer-Use agents using the GTA1 UI grounding model + any liteLLM/local VLM (with consistent output!)
7
2025-08-12T15:14:01
https://v.redd.it/etvpyi4gtlif1
a6oo
v.redd.it
1970-01-01T00:00:00
0
{}
1mobfxg
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/etvpyi4gtlif1/DASHPlaylist.mpd?a=1757603658%2CNDYwODA2MGE0YjRhY2I1YjkxOGNjYWNiZmYyMWZiOWEwYTM1YzdhMWM1MjM1MDY4NWFlYjMwYWEwMmNhOWRmYQ%3D%3D&v=1&f=sd', 'duration': 34, 'fallback_url': 'https://v.redd.it/etvpyi4gtlif1/DASH_720.mp4?source=fallback', 'ha...
t3_1mobfxg
/r/LocalLLaMA/comments/1mobfxg/cuaagent_v0411_now_supports_the_new_glm45v_gui/
false
false
https://external-preview…e71d7d43016ad41c
7
{'enabled': False, 'images': [{'id': 'cHl1cXBwc251bGlmMeBAVCrFk63xi9bA2wzdNPc_yUbmN7B6WenUvpRT7Ueb', 'resolutions': [{'height': 46, 'url': 'https://external-preview.redd.it/cHl1cXBwc251bGlmMeBAVCrFk63xi9bA2wzdNPc_yUbmN7B6WenUvpRT7Ueb.png?width=108&crop=smart&format=pjpg&auto=webp&s=ef0b6a4351b5cc7f990152f020cad8e46d1eb...
RTX ada vs RTX PRO Blackwell
0
Hey guys! I'm new to local setups and I was doing some research through ChatGPT and it was saying that Blackwell models are still new and not as reliable as ada ones. Is it true? I really don't want to pay for ada 1.5x the amount of money I would pay for Blackwell for the same VRAM.
2025-08-12T15:05:32
https://www.reddit.com/r/LocalLLaMA/comments/1mob811/rtx_ada_vs_rtx_pro_blackwell/
Jalbertus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mob811
false
null
t3_1mob811
/r/LocalLLaMA/comments/1mob811/rtx_ada_vs_rtx_pro_blackwell/
false
false
self
0
null
Cognee Embedding Model Choice
1
First of all, can anyone give a review of how well cognee performs with llama.cpp? Cognee (OSS) acts as a memory layer and possibly also a RAG tool for llms. Link: https://github.com/topoteretes/cognee Other Questions relating to embedding and installation: What we want your advice on: 1. llama.cpp and Cognee integ...
2025-08-12T14:52:14
https://www.reddit.com/r/LocalLLaMA/comments/1moavay/cognee_embedding_model_choice/
Infamous_Jaguar_2151
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1moavay
false
null
t3_1moavay
/r/LocalLLaMA/comments/1moavay/cognee_embedding_model_choice/
false
false
self
1
{'enabled': False, 'images': [{'id': 'UIfRRgdBHdYc2kWUJyryMmJQiUB0JAMTOp6DVSAFHgs', 'resolutions': [{'height': 76, 'url': 'https://external-preview.redd.it/UIfRRgdBHdYc2kWUJyryMmJQiUB0JAMTOp6DVSAFHgs.png?width=108&crop=smart&auto=webp&s=5f5afffe7b323118cacc64ded3a6ff164d2f63d3', 'width': 108}, {'height': 153, 'url': 'h...
Distributed Llama supports small Qwen3 models (0.14.0)
7
2025-08-12T14:44:05
https://github.com/b4rtaz/distributed-llama/releases/tag/v0.14.0
thisislewekonto
github.com
1970-01-01T00:00:00
0
{}
1moanma
false
null
t3_1moanma
/r/LocalLLaMA/comments/1moanma/distributed_llama_supports_small_qwen3_models_0140/
false
false
default
7
{'enabled': False, 'images': [{'id': 'o_YzyMV3JJmaC2ZksFkog106Zxt11T4WUbVe3Fdfamk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/o_YzyMV3JJmaC2ZksFkog106Zxt11T4WUbVe3Fdfamk.png?width=108&crop=smart&auto=webp&s=762aab503b4cb734590694726d64765672346360', 'width': 108}, {'height': 108, 'url': 'h...
Distributed Llama supports Qwen3 models
1
2025-08-12T14:42:35
https://github.com/b4rtaz/distributed-llama/releases/tag/v0.14.0
b4rtaz
github.com
1970-01-01T00:00:00
0
{}
1moam8q
false
null
t3_1moam8q
/r/LocalLLaMA/comments/1moam8q/distributed_llama_supports_qwen3_models/
false
false
default
1
null
We tested Qwen3-Coder, GPT-5 and other 30+ models on new SWE-Bench like tasks from July 2025
444
Hi all, I’m Ibragim from Nebius. We ran a benchmark on **34 fresh GitHub PR tasks** from July 2025 using the [SWE-rebench leaderboard](https://swe-rebench.com/leaderboard). These are real, recent problems — no training-set contamination — and include both proprietary and open-source models. **Quick takeaways:** * **...
2025-08-12T14:41:09
https://i.redd.it/lcee3fueolif1.png
Fabulous_Pollution10
i.redd.it
1970-01-01T00:00:00
0
{}
1moakv3
false
null
t3_1moakv3
/r/LocalLLaMA/comments/1moakv3/we_tested_qwen3coder_gpt5_and_other_30_models_on/
false
false
default
444
{'enabled': True, 'images': [{'id': 'lcee3fueolif1', 'resolutions': [{'height': 52, 'url': 'https://preview.redd.it/lcee3fueolif1.png?width=108&crop=smart&auto=webp&s=d9a136fc26a366850ccee19c9857e68e5701d665', 'width': 108}, {'height': 105, 'url': 'https://preview.redd.it/lcee3fueolif1.png?width=216&crop=smart&auto=web...
Qwen3 30B A3B thinking doesn’t work with Open WebUi?
0
Hey all, I downloaded the new (well, new to me, since I was using the first version of A3B) Qwen3 30B A3B to use as my “general purpose” AI assistant with Open WebUI. I loaded it up (unsloth UD Q6_K_XL GGUF) in KoboldCpp and connected it to Open WebUI. Now when generating anything it doesn’t use thinking blocks and jus...
2025-08-12T14:33:51
https://www.reddit.com/r/LocalLLaMA/comments/1moae28/qwen3_30b_a3b_thinking_doesnt_work_with_open_webui/
Any_Meringue_7765
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1moae28
false
null
t3_1moae28
/r/LocalLLaMA/comments/1moae28/qwen3_30b_a3b_thinking_doesnt_work_with_open_webui/
false
false
self
0
null
Llama.cpp on android
5
Hi folks, I have successfully compiled and run llama.cpp on my Android and run an uncensored LLM locally. The wildest thing is that you can actually build llama.cpp from source directly on Android and run it from there, so now I can use it to ask any questions and my history will never leave the device. In example I hav...
2025-08-12T14:31:01
https://www.reddit.com/gallery/1moabey
0xBekket
reddit.com
1970-01-01T00:00:00
0
{}
1moabey
false
null
t3_1moabey
/r/LocalLLaMA/comments/1moabey/llamacpp_on_android/
false
false
https://b.thumbs.redditm…TZKTd8yKUMGs.jpg
5
null
VulkanIlm, Run Modern LLMs on Old GPUs via Vulkan (33× Faster on Dell iGPU, 4× on RX 580)
33
Hey folks, I’ve been building **VulkanIlm** — a Python wrapper for llama.cpp that uses Vulkan for GPU acceleration. The goal: make local LLMs faster on *any* GPU, even older AMD and integrated ones, with **no CUDA dependency**. Some early benchmarks: **Dell E7250 (i7-5600U, Intel iGPU)** Model: TinyLLaMA-1.1B-Chat...
2025-08-12T14:24:59
https://www.reddit.com/r/LocalLLaMA/comments/1moa5o0/vulkanilm_run_modern_llms_on_old_gpus_via_vulkan/
Proper_Dig_6618
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1moa5o0
false
null
t3_1moa5o0
/r/LocalLLaMA/comments/1moa5o0/vulkanilm_run_modern_llms_on_old_gpus_via_vulkan/
false
false
self
33
{'enabled': False, 'images': [{'id': '5sinMVf7BxMEpk7PIgS8iPnPiVtvyOT0WBzvBJPPkkM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5sinMVf7BxMEpk7PIgS8iPnPiVtvyOT0WBzvBJPPkkM.png?width=108&crop=smart&auto=webp&s=c37009417d477471e67e3e03c67c02ca2bc99d24', 'width': 108}, {'height': 108, 'url': 'h...
Cheap coding assistant
11
Some say there's a paradigm shift happening in software - in coding specifically, where AI is taking the reins. Others say it's hype, and actually everything takes longer. I have found the truth to be somewhere in between at work with tools like Cline and Claude Code (at least on work projects - they pay.) But what I'...
2025-08-12T14:24:37
https://www.reddit.com/r/LocalLLaMA/comments/1moa5as/cheap_coding_assistant/
Evisteron
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1moa5as
false
null
t3_1moa5as
/r/LocalLLaMA/comments/1moa5as/cheap_coding_assistant/
false
false
self
11
null
Sampling Methods (Beyond Top-P Top-k)
1
Does anyone like to use sampling methods beyond the usual top-p, top-k, temperature, etc.? What do you like?
2025-08-12T14:14:48
https://www.reddit.com/r/LocalLLaMA/comments/1mo9w9e/sampling_methods_beyond_topp_topk/
No_Efficiency_1144
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mo9w9e
false
null
t3_1mo9w9e
/r/LocalLLaMA/comments/1mo9w9e/sampling_methods_beyond_topp_topk/
false
false
self
1
null
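The sampling-methods thread above asks what exists beyond top-p/top-k; one commonly cited alternative is min-p sampling, which keeps only tokens whose probability is at least a fixed fraction of the most likely token's probability, then renormalizes and samples. A minimal, library-free sketch (function name and signature are illustrative, not any particular inference engine's API):

```python
import math
import random

def min_p_sample(logits, min_p=0.1, rng=None):
    """Sample a token index, keeping only tokens whose probability is
    at least min_p times the most likely token's probability."""
    # Stable softmax over the raw logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Dynamic threshold scales with the confidence of the top token.
    threshold = min_p * max(probs)
    kept = [(i, p) for i, p in enumerate(probs) if p >= threshold]
    # Renormalize over the surviving tokens and draw one.
    mass = sum(p for _, p in kept)
    rng = rng or random.Random()
    r = rng.random() * mass
    for i, p in kept:
        r -= p
        if r <= 0:
            return i
    return kept[-1][0]  # guard against float rounding
```

The appeal over a fixed top-k is that the cutoff adapts: when the model is confident, almost everything but the top token is pruned; when the distribution is flat, many candidates survive.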
Any local inference libraries in golang
0
Has anyone tried any packages or modules that are able to bake and serve models along with a Golang backend, compile it, and use it in production? Looking for something similar to this
2025-08-12T14:14:44
https://www.knightsanalytics.com/post/hugot-llms-in-go
mikasa_6969
knightsanalytics.com
1970-01-01T00:00:00
0
{}
1mo9w7g
false
null
t3_1mo9w7g
/r/LocalLLaMA/comments/1mo9w7g/any_local_inference_libraries_in_golang/
false
false
default
0
null
Microsoft releases Prompt Orchestration Markup Language
55
Hello. Just came across Microsoft’s POML (Prompt Orchestration Markup Language) and it seems like a useful tool to have. From GitHub page (https://github.com/microsoft/poml): POML (Prompt Orchestration Markup Language) is a novel markup language designed to bring structure, maintainability, and versatility to advanc...
2025-08-12T14:14:04
https://www.reddit.com/r/LocalLLaMA/comments/1mo9vkh/microsoft_releases_prompt_orchestration_markup/
ArtZab
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mo9vkh
false
null
t3_1mo9vkh
/r/LocalLLaMA/comments/1mo9vkh/microsoft_releases_prompt_orchestration_markup/
false
false
self
55
{'enabled': False, 'images': [{'id': 'jkWVPEwLBE-3HoNZu6f3-fNS7wtX4pR1wCxBEgY6qGc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/jkWVPEwLBE-3HoNZu6f3-fNS7wtX4pR1wCxBEgY6qGc.png?width=108&crop=smart&auto=webp&s=fd57784d3f6e70126af2f58b6d0169e8b05b26b6', 'width': 108}, {'height': 108, 'url': 'h...
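For readers who haven't opened the linked repo, POML prompts are written as HTML-like components; a fragment in the style of the example shown in the README (tag names taken from that README, shown here as an illustration rather than a complete spec):

```xml
<poml>
  <role>You are a patient teacher explaining concepts to a 10-year-old.</role>
  <task>Explain the concept of photosynthesis using the provided image.</task>
  <img src="photosynthesis_diagram.png" alt="Diagram of photosynthesis" />
  <output-format>Keep the explanation under 100 words.</output-format>
</poml>
```

The markup is compiled down to a plain-text prompt, which is what makes versioning and reuse of prompt parts tractable.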
Single RTX 3090 build help needed
1
Hi fellow LLMers! Currently I run LLMs up to 12B parameters at Q4 on a Mac mini with 16GB memory and an M2 Pro chip. I often run into situations where the memory is unfortunately just not enough. The model and context barely fit. I tried 8B and even smaller models via LMStudio and Ollama, however their general knowled...
2025-08-12T13:58:45
https://www.reddit.com/r/LocalLLaMA/comments/1mo9hlt/single_rtx_3090_build_help_needed/
AssOverflow12
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mo9hlt
false
null
t3_1mo9hlt
/r/LocalLLaMA/comments/1mo9hlt/single_rtx_3090_build_help_needed/
false
false
self
1
null
Super Helpful Wow....
0
What are they feeding to ChatGPT 5?
2025-08-12T13:52:21
https://www.reddit.com/gallery/1mo9bz0
CryOrganic8886
reddit.com
1970-01-01T00:00:00
0
{}
1mo9bz0
false
null
t3_1mo9bz0
/r/LocalLLaMA/comments/1mo9bz0/super_helpful_wow/
false
false
https://a.thumbs.redditm…Yf0ADGEIeGa8.jpg
0
null