| column | type | min / classes | max |
|:-|:-|:-|:-|
| title | stringlengths | 1 | 300 |
| score | int64 | 0 | 8.54k |
| selftext | stringlengths | 0 | 41.5k |
| created | timestamp[ns] | 2023-04-01 04:30:41 | 2026-03-04 02:14:14 |
| url | stringlengths | 0 | 878 |
| author | stringlengths | 3 | 20 |
| domain | stringlengths | 0 | 82 |
| edited | timestamp[ns] | 1970-01-01 00:00:00 | 2026-02-19 14:51:53 |
| gilded | int64 | 0 | 2 |
| gildings | stringclasses | 7 values | |
| id | stringlengths | 7 | 7 |
| locked | bool | 2 classes | |
| media | stringlengths | 646 | 1.8k |
| name | stringlengths | 10 | 10 |
| permalink | stringlengths | 33 | 82 |
| spoiler | bool | 2 classes | |
| stickied | bool | 2 classes | |
| thumbnail | stringlengths | 4 | 213 |
| ups | int64 | 0 | 8.54k |
| preview | stringlengths | 301 | 5.01k |
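The schema above describes one record per post. A minimal sketch of materializing a row with these twenty columns in pandas — the values are illustrative, loosely taken from the first record below; this is an assumption about how the dump is typed, not the dataset's own loader:

```python
import pandas as pd

# One hypothetical row mirroring the column schema above.
rows = [
    {
        "title": "Example post",
        "score": 30,
        "selftext": "Body text...",
        "created": pd.Timestamp("2025-09-18T15:14:48"),
        "url": "https://example.com/post",
        "author": "someuser",
        "domain": "self.LocalLLaMA",
        "edited": pd.Timestamp("1970-01-01T00:00:00"),  # epoch = never edited
        "gilded": 0,
        "gildings": "{}",
        "id": "1nkb0yq",
        "locked": False,
        "media": None,
        "name": "t3_1nkb0yq",
        "permalink": "/r/LocalLLaMA/comments/1nkb0yq/example/",
        "spoiler": False,
        "stickied": False,
        "thumbnail": "default",
        "ups": 30,
        "preview": None,
    },
]
df = pd.DataFrame(rows)
# Make the timestamp columns explicitly datetime64[ns], as in the schema.
df["created"] = df["created"].astype("datetime64[ns]")
df["edited"] = df["edited"].astype("datetime64[ns]")
print(df[["id", "score", "author", "created"]].to_string(index=False))
```
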
Latest Open-Source AMD Improvements Allowing For Better Llama.cpp AI Performance Against Windows 11
30
Hey everyone! I was checking out the recent llama.cpp benchmarks and the data in this link shows that llama.cpp runs significantly faster on Windows 11 (25H2) than on Ubuntu for AMD GPUs.
2025-09-18T15:14:48
https://www.phoronix.com/review/llama-cpp-windows-linux/3
NewtMurky
phoronix.com
1970-01-01T00:00:00
0
{}
1nkb0yq
false
null
t3_1nkb0yq
/r/LocalLLaMA/comments/1nkb0yq/latest_opensource_amd_improvements_allowing_for/
false
false
default
30
null
More Vram vs a second machine. Opinions wanted from other addicts.
7
Hey fellow hardware addicts that I know are out there. I'm addicted to GLM 4.5 and currently have a machine with 88 GB of VRAM (B670 Carbon WiFi board, 9950X CPU, 2x 5090, 1 old 4090 I may sell, 192 GB RAM). Basically, I'd like opinions on a few options I have, in terms of what others might do.

I'd like to run GLM 4.5, but the only tolerable speed I'm getting is about 9.5 t/s using llama.cpp on the Unsloth GLM 2-bit XL quant. Q3/Q4 quants run at around 6 and 5 t/s, which, while runnable, isn't really fun when you sit and wait 3 minutes per post. So I'm thinking: I have a second machine (7950X) sitting idle that I was just going to game on. I could take various parts out of the workstation, e.g. one of the 5090s, and just run GLM Air on one 5090 + the CPU, and it would only slow down to about 6.5 tokens a sec. Or, if I could be less of a snob, I could run GLM Air fully in VRAM and just keep one machine with the two 5090s; I can add a third GPU via a riser (like the 4090 currently), but it runs at PCIe 4.0 x4:

* 5090 #1: PCIe 5.0 x8
* 5090 #2: PCIe 4.0 x8
* 4090: PCIe 4.0 x4

I do have to power limit the cards a little to be safe (2000W PSU, lol), but adding cards to a model that needs to offload to CPU barely adds 1-1.5 tokens a sec to, say, GLM 4.5, which doesn't make financial sense to keep the 4090 for, lol. And I could just take parts from this workstation and build that second PC with a 5090 + CPU.

Outside the financial stupidity, which I've already committed (so I don't need those comments, please): if anyone has thoughts, would you keep all the GPUs in one machine for 88 GB of VRAM (or sell the 4090 eventually), or would you move a 5090 to the second machine and use RPC for models that fit in VRAM? (I've done extensive testing on that: as long as the model fits entirely in VRAM, adding a GPU over the network does make it faster; it doesn't with CPU offloading.) Is VRAM still king? Or would the advantage of having two machines with a 5090 each be better in the long run?

Or could I ever learn to be happy with GLM Air, and then generate like 50 tokens a sec with this setup, lol. Any opinions or questions would be interesting to think about.
2025-09-18T15:10:57
https://www.reddit.com/r/LocalLLaMA/comments/1nkax9u/more_vram_vs_a_second_machine_opinions_wanted/
fluffywuffie90210
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nkax9u
false
null
t3_1nkax9u
/r/LocalLLaMA/comments/1nkax9u/more_vram_vs_a_second_machine_opinions_wanted/
false
false
self
7
null
NVIDIA in Intel SOCs
0
More details about the deal from Tom’s.
2025-09-18T15:07:22
https://www.tomshardware.com/pc-components/cpus/nvidia-and-intel-announce-jointly-developed-intel-x86-rtx-socs-for-pcs-with-nvidia-graphics-also-custom-nvidia-data-center-x86-processors-nvidia-buys-usd5-billion-in-intel-stock-in-seismic-deal
eleqtriq
tomshardware.com
1970-01-01T00:00:00
0
{}
1nkatu1
false
null
t3_1nkatu1
/r/LocalLLaMA/comments/1nkatu1/nvidia_in_intel_socs/
false
false
default
0
null
Am I the first one to run a full multi-agent workflow on an edge device?
23
**Discussion**

I've been messing with Jetson boards for quite a while, but this was my first time trying to push a real multi-agent stack onto one. Instead of cloud or desktop, I wanted to see if I could get a multi-agent AI workflow to run end-to-end on a Jetson Orin Nano 8GB. The goal: talk to the device, have it generate a PowerPoint, all locally.

**Setup**

* Jetson Orin Nano 8GB
* CAMEL-AI framework for agent orchestration
* Whisper for STT
* CAMEL PPTXToolkit for slide generation
* Models tested: Mistral 7B Q4, Llama 3.1 8B Q4, Qwen 2.5 7B Q4

**What actually happened**

* Whisper crushed it: 95%+ accuracy even with noise.
* CAMEL's agent split made sense. One agent handled chat, another handled slide creation. Felt natural, no duct tape.
* The Jetson held up way better than I expected. 7B inference + Whisper at the same time on 8GB is wild.
* The slides? Actually useful, not just generic bullets.

**What broke my flow (learnings for the future, too)**

* TTS was slooow: 15-25s per reply. Totally ruins the convo feel.
* Mistral kept breaking function calls with bad JSON.
* Llama 3.1 was too chunky for 8GB: constant OOM.
* Qwen 2.5 7B ended up being the sweet spot.

**Takeaways**

1. Model fit > model hype.
2. TTS on edge is the real bottleneck.
3. 8GB is just enough, but you're cutting it close.
4. Edge optimization is very different from cloud.

So yeah, it worked. Multi-agent on edge is possible.

**Full pipeline**: Whisper → CAMEL agents → PPTXToolkit → TTS.

Curious if anyone else here has tried running agentic workflows or any other multi-agent frameworks on edge hardware? Or am I actually the first to get this running?
2025-09-18T14:38:09
https://v.redd.it/af1mqlrenxpf1
Abit_Anonymous
/r/LocalLLaMA/comments/1nka2cl/am_i_the_first_one_to_run_a_full_multiagent/
1970-01-01T00:00:00
0
{}
1nka2cl
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/af1mqlrenxpf1/DASHPlaylist.mpd?a=1760927894%2CMzFkMTA2N2M4NThiYzRlOWE3YWJjYzMzZjRjYjE2ZDQ5ZDJhYTczNGFlYzFiNzMxOGI4OWVkY2Q2MTk4NmQ2OQ%3D%3D&v=1&f=sd', 'duration': 260, 'fallback_url': 'https://v.redd.it/af1mqlrenxpf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/af1mqlrenxpf1/HLSPlaylist.m3u8?a=1760927894%2CNGI0MTEyZmU2M2U3ZWU5NzQ0YmZmNDM5ZGFmNjIyMzJjNzNkOGY0NDdiMTQ5MmY2NGEyODM3YTAwZTIzNTM0Zg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/af1mqlrenxpf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1nka2cl
/r/LocalLLaMA/comments/1nka2cl/am_i_the_first_one_to_run_a_full_multiagent/
false
false
https://external-preview…f52e8c2028305e4f
23
{'enabled': False, 'images': [{'id': 'cnpxNWU2MmRueHBmMYp0QtsqZN4801amkhsrr7Zpcr6-upwl98kHrYEp8oGI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/cnpxNWU2MmRueHBmMYp0QtsqZN4801amkhsrr7Zpcr6-upwl98kHrYEp8oGI.png?width=108&crop=smart&format=pjpg&auto=webp&s=04bc4fa0dc15303915db1790cf3bac9af2b5f805', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/cnpxNWU2MmRueHBmMYp0QtsqZN4801amkhsrr7Zpcr6-upwl98kHrYEp8oGI.png?width=216&crop=smart&format=pjpg&auto=webp&s=50fd19a214394d599aa834b11c947de0d83ae859', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/cnpxNWU2MmRueHBmMYp0QtsqZN4801amkhsrr7Zpcr6-upwl98kHrYEp8oGI.png?width=320&crop=smart&format=pjpg&auto=webp&s=facd35827e8bf839cbf9e2f3f955279d42b22592', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/cnpxNWU2MmRueHBmMYp0QtsqZN4801amkhsrr7Zpcr6-upwl98kHrYEp8oGI.png?width=640&crop=smart&format=pjpg&auto=webp&s=b00d0da9c77a26bc2d6d8c2c8e015c8f2af2612f', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/cnpxNWU2MmRueHBmMYp0QtsqZN4801amkhsrr7Zpcr6-upwl98kHrYEp8oGI.png?width=960&crop=smart&format=pjpg&auto=webp&s=1c4958c6325e56cc1f92707f13489f1a7405255a', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/cnpxNWU2MmRueHBmMYp0QtsqZN4801amkhsrr7Zpcr6-upwl98kHrYEp8oGI.png?width=1080&crop=smart&format=pjpg&auto=webp&s=f0e494af955ce945cf7c991dffa6ceea3237aaad', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/cnpxNWU2MmRueHBmMYp0QtsqZN4801amkhsrr7Zpcr6-upwl98kHrYEp8oGI.png?format=pjpg&auto=webp&s=03e89e65a06f6c9445e76bd598c555d28a0d0880', 'width': 1920}, 'variants': {}}]}
Anyone ever feel discouraged? Like giving up?
1
I've been running LLMs locally for about a year. It started with a cheap mining motherboard, some cheap 3060s, and a dream. Every couple of months I've upgraded my setup.

However, what I've learned is that pushing this type of hardware to its limits has so many issues I would never have expected. Things like GPU lane allocation issues, incredibly vague BIOS issues, OS instability, issues with risers, etc. The problems never end.

I was reading CPU lane allocation diagrams in an obscure Supermicro manual, trying to figure out which lane is controlled by which CPU and how bifurcation is handled, when it hit me: why am I doing this? It's just a neverending cascade of problems, and I'm tired, boss.

I'm at the point where I kinda just want to sell all my shit, upgrade my gaming PC to a 5090, and call it a day. Anyone gone through this or relate?
2025-09-18T14:26:01
https://www.reddit.com/r/LocalLLaMA/comments/1nk9r36/anyone_ever_feel_discouraged_like_giving_up/
kevin_1994
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nk9r36
false
null
t3_1nk9r36
/r/LocalLLaMA/comments/1nk9r36/anyone_ever_feel_discouraged_like_giving_up/
false
false
self
1
null
The quality of AI-assisted software depends on unit of work management
2
2025-09-18T14:10:59
https://blog.nilenso.com/blog/2025/09/15/ai-unit-of-work/
ssrihari
blog.nilenso.com
1970-01-01T00:00:00
0
{}
1nk9d6w
false
null
t3_1nk9d6w
/r/LocalLLaMA/comments/1nk9d6w/the_quality_of_aiassisted_software_depends_on/
false
false
default
2
null
Qwen3-next-80b-a3b hits 1400 elo (also longcat-flash)
42
https://preview.redd.it/…-world use?
2025-09-18T13:24:45
https://www.reddit.com/r/LocalLLaMA/comments/1nk87rk/qwen3next80ba3b_hits_1400_elo_also_longcatflash/
GabryIta
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nk87rk
false
null
t3_1nk87rk
/r/LocalLLaMA/comments/1nk87rk/qwen3next80ba3b_hits_1400_elo_also_longcatflash/
false
false
https://a.thumbs.redditm…4LwIqXOVqdq4.jpg
42
null
Never pay for Cursor. Cursor refused to switch my annual Pro plan (purchased in April 2025) back to the old pricing because the so-called two-month “grace period” has already passed.
0
I purchased the annual Pro plan in April 2025, but then Cursor unilaterally modified the contract without my consent and switched my plan to the new token-based pricing. The new pricing is an absolute disaster.

I emailed hi@cursor to request a switch back to the old plan I had agreed to in April, which included 500 requests, but they rejected it, saying the two-month “grace period” was already over. I don’t understand. I’m simply asking Cursor to honor what I originally purchased; why should there even be some arbitrary “two-month window” for that? Should I consider suing them or taking this to the media?

Warning: Cursor can change what you bought without your permission and refuse to give it back. Unless this changes, absolutely do not purchase any annual plan, and don’t trust a single word they say.
2025-09-18T13:22:04
https://i.redd.it/c2xgq3oecxpf1.jpeg
ReckCindy
i.redd.it
1970-01-01T00:00:00
0
{}
1nk85fh
false
null
t3_1nk85fh
/r/LocalLLaMA/comments/1nk85fh/never_pay_for_cursor_cursor_refused_to_switch_my/
false
false
https://b.thumbs.redditm…jpNcroWp5ddw.jpg
0
{'enabled': True, 'images': [{'id': 'eShFO7z-fo8YUdxHS2DhcAMyGRSLefjV05sUsBt7uTs', 'resolutions': [{'height': 142, 'url': 'https://preview.redd.it/c2xgq3oecxpf1.jpeg?width=108&crop=smart&auto=webp&s=518f247eddba0a143314950269f6f2ee0084b8cd', 'width': 108}, {'height': 284, 'url': 'https://preview.redd.it/c2xgq3oecxpf1.jpeg?width=216&crop=smart&auto=webp&s=0ef4d73007e6cc36d7ef6b94249f3989c7a91f84', 'width': 216}, {'height': 420, 'url': 'https://preview.redd.it/c2xgq3oecxpf1.jpeg?width=320&crop=smart&auto=webp&s=5da65b57bbf14b9ed54240f54471ffbcfe38df62', 'width': 320}, {'height': 841, 'url': 'https://preview.redd.it/c2xgq3oecxpf1.jpeg?width=640&crop=smart&auto=webp&s=a5a2734b67d38e95812ab85f1726e3971390b64c', 'width': 640}, {'height': 1262, 'url': 'https://preview.redd.it/c2xgq3oecxpf1.jpeg?width=960&crop=smart&auto=webp&s=348fb0f16da336bcae3f99afce1d4c9f4025776b', 'width': 960}, {'height': 1420, 'url': 'https://preview.redd.it/c2xgq3oecxpf1.jpeg?width=1080&crop=smart&auto=webp&s=9114e5529fe457b7a4588e662366df758eb51645', 'width': 1080}], 'source': {'height': 1697, 'url': 'https://preview.redd.it/c2xgq3oecxpf1.jpeg?auto=webp&s=b368322fbc36ced78c0293c3dd157b75fed86eb0', 'width': 1290}, 'variants': {}}]}
Should I switch from paying $220/mo for AI to running local LLMs on an M3 Studio?
0
Right now I’m paying $200/mo for Claude and $20/mo for ChatGPT, so about $220 every month. I’m starting to think maybe I should just buy hardware once and run the best open-source LLMs locally instead. I’m looking at getting an M3 Studio (512GB). I already have an M4 (128GB RAM + 4 SSDs), and I’ve got a friend at Apple who can get me a 25% discount.

Do you think it’s worth switching to a local setup? Which open-source models would you recommend for:

* General reasoning / writing
* Coding
* Vision / multimodal tasks

Would love to hear from anyone who’s already gone this route. Is the performance good enough to replace Claude/ChatGPT for everyday use, or do you still end up needing the Max plan?
2025-09-18T13:04:53
https://www.reddit.com/r/LocalLLaMA/comments/1nk7qtk/should_i_switch_from_paying_220mo_for_ai_to/
Ill_Occasion_1537
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nk7qtk
false
null
t3_1nk7qtk
/r/LocalLLaMA/comments/1nk7qtk/should_i_switch_from_paying_220mo_for_ai_to/
false
false
self
0
null
NVIDIA invests $5 billion in Intel
601
Bizarre news, so NVIDIA is like 99% of the market now?
2025-09-18T12:56:04
https://www.cnbc.com/2025/09/18/intel-nvidia-investment.html
Kooky-Somewhere-2883
cnbc.com
1970-01-01T00:00:00
0
{}
1nk7jbi
false
null
t3_1nk7jbi
/r/LocalLLaMA/comments/1nk7jbi/nvidia_invests_5_billions_into_intel/
false
false
https://external-preview…216addb391d3c02b
601
{'enabled': False, 'images': [{'id': 'n5kP5NRletQy7r254iQxj6sHk25NybFeHBeqLvZxjz8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/n5kP5NRletQy7r254iQxj6sHk25NybFeHBeqLvZxjz8.jpeg?width=108&crop=smart&auto=webp&s=f03ad9234c8e273a7bc4a297c3afdf96568f50ec', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/n5kP5NRletQy7r254iQxj6sHk25NybFeHBeqLvZxjz8.jpeg?width=216&crop=smart&auto=webp&s=4db9a7c85d74a530bc0b0e3c42994b3d2d07603e', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/n5kP5NRletQy7r254iQxj6sHk25NybFeHBeqLvZxjz8.jpeg?width=320&crop=smart&auto=webp&s=426189b6e74d3d77eb9149148da3702e8fc6faee', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/n5kP5NRletQy7r254iQxj6sHk25NybFeHBeqLvZxjz8.jpeg?width=640&crop=smart&auto=webp&s=5ddf7ad07021e40c3004f38a19f49697e1cd4cc6', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/n5kP5NRletQy7r254iQxj6sHk25NybFeHBeqLvZxjz8.jpeg?width=960&crop=smart&auto=webp&s=e2aae8481964317f34ba995f3e1dd5624036c4eb', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/n5kP5NRletQy7r254iQxj6sHk25NybFeHBeqLvZxjz8.jpeg?width=1080&crop=smart&auto=webp&s=28fa030d2ec6861915bed032ec68abbfbf99cb8d', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/n5kP5NRletQy7r254iQxj6sHk25NybFeHBeqLvZxjz8.jpeg?auto=webp&s=b8d4ae7830b3fcd188e80716fbe432445bccf40e', 'width': 1920}, 'variants': {}}]}
The future of AI = specialized small models working together?
5
I’ve noticed a trend: instead of one giant “do-it-all” LLM, we’re getting lots of smaller, specialized models that shine in specific areas.

* Text/reasoning: Mistral 7B, LLaMA 3 8B
* Coding: StarCoder2, CodeLLaMA
* Vision: LLaVA, Fuyu-8B
* Multimodal/audio: Whisper, PaliGemma

Feels like open source is heading toward “many small experts” rather than one giant model. The interesting part is how to make them work together; frameworks like CrewAI make it possible to orchestrate them as a team.

Curious what others think: is the future massive generalist LLMs, or smaller specialists collaborating?
2025-09-18T12:39:44
https://www.reddit.com/r/LocalLLaMA/comments/1nk75zk/the_future_of_ai_specialized_small_models_working/
Ill_Occasion_1537
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nk75zk
false
null
t3_1nk75zk
/r/LocalLLaMA/comments/1nk75zk/the_future_of_ai_specialized_small_models_working/
false
false
self
5
null
Qwen Next is my new go to model
175
It is blazing fast: it made 25 back-to-back tool calls with no errors, both as mxfp4 and qx86hi quants. I had been unable to test it until now; previously, OSS-120B had become my main model due to its speed and tool-calling efficiency. Qwen delivered! I have not tested coding or RP (I am not interested in RP; my use is as a true assistant, running tasks). What issues have people found? I prefer it to Qwen 235B, which I can run at 6 bits atm.
2025-09-18T12:32:30
https://www.reddit.com/r/LocalLLaMA/comments/1nk706v/qwen_next_is_my_new_go_to_model/
Miserable-Dare5090
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nk706v
false
null
t3_1nk706v
/r/LocalLLaMA/comments/1nk706v/qwen_next_is_my_new_go_to_model/
false
false
self
175
null
Translating EPUB Ebooks with Local LLM and for FREE. An Effective Approach
4
2025-09-18T11:43:17
https://grigio.org/translating-epub-ebooks-with-local-llm-and-for-free-an-effective-approach/
gnorrisan
grigio.org
1970-01-01T00:00:00
0
{}
1nk5yzu
false
null
t3_1nk5yzu
/r/LocalLLaMA/comments/1nk5yzu/translating_epub_ebooks_with_local_llm_and_for/
false
false
https://external-preview…06f5509c9ba7126a
4
{'enabled': False, 'images': [{'id': 'FGiCVyrG-E2tYdInHZJnXgGtG_4b3XrtjZsYZZR2Qd0', 'resolutions': [{'height': 62, 'url': 'https://external-preview.redd.it/FGiCVyrG-E2tYdInHZJnXgGtG_4b3XrtjZsYZZR2Qd0.png?width=108&crop=smart&auto=webp&s=b3df7ae83b82bc9e944983ae69d342acbbe0afcf', 'width': 108}, {'height': 124, 'url': 'https://external-preview.redd.it/FGiCVyrG-E2tYdInHZJnXgGtG_4b3XrtjZsYZZR2Qd0.png?width=216&crop=smart&auto=webp&s=498bc1a70c729fc77607f14204776a68bf053676', 'width': 216}, {'height': 184, 'url': 'https://external-preview.redd.it/FGiCVyrG-E2tYdInHZJnXgGtG_4b3XrtjZsYZZR2Qd0.png?width=320&crop=smart&auto=webp&s=40bf854b738fb028bda17d415300eff0eb37117a', 'width': 320}, {'height': 369, 'url': 'https://external-preview.redd.it/FGiCVyrG-E2tYdInHZJnXgGtG_4b3XrtjZsYZZR2Qd0.png?width=640&crop=smart&auto=webp&s=c7f77e9a2744804764399d231a10c80b19c32a91', 'width': 640}, {'height': 553, 'url': 'https://external-preview.redd.it/FGiCVyrG-E2tYdInHZJnXgGtG_4b3XrtjZsYZZR2Qd0.png?width=960&crop=smart&auto=webp&s=14e0cf6ffcd1221eb4d884ed3435d1df6d93f52d', 'width': 960}, {'height': 622, 'url': 'https://external-preview.redd.it/FGiCVyrG-E2tYdInHZJnXgGtG_4b3XrtjZsYZZR2Qd0.png?width=1080&crop=smart&auto=webp&s=892bf9e33ef04ece76743e0f1477ebe8ff7125a3', 'width': 1080}], 'source': {'height': 837, 'url': 'https://external-preview.redd.it/FGiCVyrG-E2tYdInHZJnXgGtG_4b3XrtjZsYZZR2Qd0.png?auto=webp&s=6351aad1475a5ab428fb7e8274aafb05630c3436', 'width': 1451}, 'variants': {}}]}
llama.cpp rocm 7 official from AMD vs vulkan vs cpu
10
Did you try the official llama.cpp build from AMD? Did you see improvements over Vulkan in tok/s? https://rocm.docs.amd.com/en/latest/compatibility/ml-compatibility/llama-cpp-compatibility.html
2025-09-18T11:40:14
https://www.reddit.com/r/LocalLLaMA/comments/1nk5wtq/llamacpp_rocm_7_official_from_amd_vs_vulkan_vs_cpu/
gnorrisan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nk5wtq
false
null
t3_1nk5wtq
/r/LocalLLaMA/comments/1nk5wtq/llamacpp_rocm_7_official_from_amd_vs_vulkan_vs_cpu/
false
false
self
10
null
I want help making an AI that two or more people can use
0
I want to create an AI for my organization, like a chatbot or a proper AI assistant, that one or more people can use, and it should run on data which we upload to it. Is there any easier way or an existing tool? Please suggest.
2025-09-18T11:34:01
https://www.reddit.com/r/LocalLLaMA/comments/1nk5slj/i_want_help_to_make_a_ai_which_two_or_more_people/
Leading-Sugar6593
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nk5slj
false
null
t3_1nk5slj
/r/LocalLLaMA/comments/1nk5slj/i_want_help_to_make_a_ai_which_two_or_more_people/
false
false
self
0
null
Who runs large models on a raspberry pi?
0
Hey! I know the speed will be abysmal, but that doesn't matter for me. Has anyone tried running larger models like 32B, 70B (or even larger) on a pi letting it use the swap file and can share speed results? What are the tokens/sec for inference and generation? Please don't answer if you just want to tell me that it's "not usable" or "too slow", that's very subjective, isn't it? Thanks in advance for anyone who's able to give insight :)
2025-09-18T11:27:01
https://www.reddit.com/r/LocalLLaMA/comments/1nk5nps/who_runs_large_models_on_a_raspberry_pi/
honuvo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nk5nps
false
null
t3_1nk5nps
/r/LocalLLaMA/comments/1nk5nps/who_runs_large_models_on_a_raspberry_pi/
false
false
self
0
null
bghira/universal-metal-flash-attention: A universal adapter including zero-copy Python bindings for Philip Turner's metal flash attention library.
2
2025-09-18T11:20:23
https://github.com/bghira/universal-metal-flash-attention
bghira
github.com
1970-01-01T00:00:00
0
{}
1nk5jaw
false
null
t3_1nk5jaw
/r/LocalLLaMA/comments/1nk5jaw/bghirauniversalmetalflashattention_a_universal/
false
false
https://external-preview…0a07d96a858bdb6e
2
{'enabled': False, 'images': [{'id': 'MY1_wxQr2xLYDhPbPug6CDsNZflBDocrCqEMvHO-aTQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MY1_wxQr2xLYDhPbPug6CDsNZflBDocrCqEMvHO-aTQ.png?width=108&crop=smart&auto=webp&s=eb5129df7e778610c4f49ff5238e2b9343e73172', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MY1_wxQr2xLYDhPbPug6CDsNZflBDocrCqEMvHO-aTQ.png?width=216&crop=smart&auto=webp&s=dbee6d0f468c850b287eb73ee67624f1452b30ba', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MY1_wxQr2xLYDhPbPug6CDsNZflBDocrCqEMvHO-aTQ.png?width=320&crop=smart&auto=webp&s=5ce1e7ed5dfaf7087c330fbeb908e822bd43da63', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MY1_wxQr2xLYDhPbPug6CDsNZflBDocrCqEMvHO-aTQ.png?width=640&crop=smart&auto=webp&s=260f8b25d2ec1dd28a790ac4eaf97269864656f5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MY1_wxQr2xLYDhPbPug6CDsNZflBDocrCqEMvHO-aTQ.png?width=960&crop=smart&auto=webp&s=6eb3863ea3e530985e61c78800fd088c2bf49c11', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MY1_wxQr2xLYDhPbPug6CDsNZflBDocrCqEMvHO-aTQ.png?width=1080&crop=smart&auto=webp&s=54e00d8315a2b0f81c37c863fd6300ed943af984', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/MY1_wxQr2xLYDhPbPug6CDsNZflBDocrCqEMvHO-aTQ.png?auto=webp&s=b291ddea9d6851bcb90d6ca86835145a7cdb5d97', 'width': 1200}, 'variants': {}}]}
LLaMA AI SEO Pilot Tracking Brand Mentions
0
Exploring local LLaMA responses for brand citations: a hands-on experiment. Last week, I shared an idea about testing how AI platforms (ChatGPT, Claude, Perplexity) cite brands in their answers. The response was incredible: founders, marketers, and AI enthusiasts reached out with interest.

**Pilot Overview**

1. Select 5 SaaS or tech companies (CRM, email, project management, analytics, etc.)
2. Run 20 user-style queries across ChatGPT, Claude, Perplexity
3. Track which platforms cite which companies
4. Rewrite company pages into AI-friendly formats (structured FAQs, schema tables, clear product breakdowns)
5. Re-run queries & measure shifts

**Goal**: See if structured content can increase AI mentions by 25%.

If you're a founder, marketer, or SEO lead interested in joining this early pilot, please fill out your details here: [https://forms.gle/CKkP75mJC1iDSAd9A](https://forms.gle/CKkP75mJC1iDSAd9A)

I'll share results openly with the community once we have the first wave of data. Let's build the AI SEO playbook together.
2025-09-18T11:18:59
https://www.reddit.com/r/LocalLLaMA/comments/1nk5ids/llama_ai_seo_pilot_tracking_brand_mentions/
No-Abbreviations7266
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nk5ids
false
null
t3_1nk5ids
/r/LocalLLaMA/comments/1nk5ids/llama_ai_seo_pilot_tracking_brand_mentions/
false
false
self
0
{'enabled': False, 'images': [{'id': 'TZcoz--XsBlPkUppV9jqLyvedwx-UJheU0YDM0vMnpc', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/TZcoz--XsBlPkUppV9jqLyvedwx-UJheU0YDM0vMnpc.png?width=108&crop=smart&auto=webp&s=dc570f31df0ed85c587a398230a4524812e6c341', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/TZcoz--XsBlPkUppV9jqLyvedwx-UJheU0YDM0vMnpc.png?width=216&crop=smart&auto=webp&s=d7e8b521139990dca698543b87200be3a613154f', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/TZcoz--XsBlPkUppV9jqLyvedwx-UJheU0YDM0vMnpc.png?width=320&crop=smart&auto=webp&s=62c369277eb576381a5ab639a78e1c7722d63165', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/TZcoz--XsBlPkUppV9jqLyvedwx-UJheU0YDM0vMnpc.png?width=640&crop=smart&auto=webp&s=e4072304b034b5a18f77b189d539a9820e732761', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/TZcoz--XsBlPkUppV9jqLyvedwx-UJheU0YDM0vMnpc.png?width=960&crop=smart&auto=webp&s=1629cf8343e2851bd31851a5cb3fb1fbf54b6918', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/TZcoz--XsBlPkUppV9jqLyvedwx-UJheU0YDM0vMnpc.png?width=1080&crop=smart&auto=webp&s=988405190f95c0e5702ef809f52e7e10583b4069', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/TZcoz--XsBlPkUppV9jqLyvedwx-UJheU0YDM0vMnpc.png?auto=webp&s=b4dd20183f8476ee6eef893ceece06beb7b948f3', 'width': 1200}, 'variants': {}}]}
Ryzen 6800H iGPU 680M Vulkan benchmarks llama.cpp
52
I continue to be impressed by how well iGPUs perform. Here are some updated LLM benchmarks. llama.cpp with Vulkan on Ubuntu runs pretty fast, especially when you throw a [MoE](https://www.tensorops.ai/post/what-is-mixture-of-experts-llm) model at it.

Hardware: AMD Ryzen 7 [6800H](https://www.techpowerup.com/cpu-specs/ryzen-7-6800h.c2527) CPU with Radeon [680M](https://www.techpowerup.com/gpu-specs/radeon-680m.c3871) graphics, 64GB [DDR5](https://en.wikipedia.org/wiki/DDR5_SDRAM)-4800 system RAM, and [16GB allocated to the iGPU](https://www.reddit.com/r/ollama/comments/1lbpqln/minipc_ryzen_7_6800h_cpu_and_igpu_680m/). System running [Kubuntu](https://kubuntu.org/) 25.10 and [Mesa 25.1.7](https://docs.mesa3d.org/relnotes/25.1.7.html)-1ubuntu1.

Software: [release](https://github.com/ggml-org/llama.cpp/releases) llama.cpp [Vulkan](https://github.com/ggml-org/llama.cpp/releases/download/b6501/llama-b6478-bin-ubuntu-vulkan-x64.zip) build 28c39da7 (6478), using [llama-bench](https://github.com/ggml-org/llama.cpp/discussions/7195), sorted by [parameter size](https://web.dev/articles/llm-sizes):

|Model|Size GiB|Params B|pp512 t/s|tg128 t/s|
|:-|:-|:-|:-|:-|
|Phi-3.5-MoE-instruct-IQ4_NL.gguf|21.99|41.87|95.58|16.04|
|EXAONE-4.0-32B-Q4_K_M.gguf|18.01|32|30.4|2.88|
|Qwen3-Coder-30B-A3B-Instruct-IQ4_NL.gguf|16.12|30.53|150.73|30.06|
|Qwen3-Coder-30B-A3B-Instruct-IQ4_XS.gguf|15.25|30.53|140.24|28.41|
|Qwen3-Coder-30B-A3B-Instruct-UD-Q5_K_XL.gguf|20.24|30.53|120.68|25.55|
|M-MOE-4X7B-Dark-MultiVerse-UC-E32-24B-D_AU-Q4_k_m.gguf|13.65|24.15|35.81|4.37|
|ERNIE-4.5-21B-A3B-PT.i1-IQ4_XS.gguf|10.89|21.83|176.99|30.29|
|ERNIE-4.5-21B-A3B-PT-IQ4_NL.gguf|11.52|21.83|196.39|29.95|
|SmallThinker-21B-A3B-Instruct.IQ4_XS.imatrix.gguf|10.78|21.51|155.94|26.12|
|EuroLLM-9B-Instruct-IQ4_XS.gguf|4.7|9.15|116.78|12.94|
|EuroLLM-9B-Instruct-Q4_K_M.gguf|5.2|9.15|113.45|12.06|
|EuroLLM-9B-Instruct-Q6_K_L.gguf|7.23|9.15|110.87|9.02|
|DeepSeek-R1-0528-Qwen3-8B-IQ4_XS.gguf|4.26|8.19|136.77|14.58|
|Phi-mini-MoE-instruct-IQ2_XS.gguf|2.67|7.65|347.45|61.27|
|Phi-mini-MoE-instruct-Q4_K_M.gguf|4.65|7.65|294.85|40.51|
|Qwen2.5-7B-Instruct.Q8_0.gguf|7.54|7.62|256.57|8.74|
|llama-2-7b.Q4_0.gguf|3.56|6.74|279.81|16.72|
|Phi-4-mini-instruct-Q4_K_M.gguf|2.31|3.84|275.75|25.02|
|granite-3.1-3b-a800m-instruct_f16.gguf|6.15|3.3|654.88|34.39|
2025-09-18T11:11:32
https://www.reddit.com/r/LocalLLaMA/comments/1nk5df9/ryzen_6800h_igpu_680m_vulkan_benchmarks_llamacpp/
tabletuser_blogspot
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nk5df9
false
null
t3_1nk5df9
/r/LocalLLaMA/comments/1nk5df9/ryzen_6800h_igpu_680m_vulkan_benchmarks_llamacpp/
false
false
self
52
{'enabled': False, 'images': [{'id': 'N8m4LslwoqQ9TuIgY6n_Q-uQRoMlEEesNSvqUsMhtzY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/N8m4LslwoqQ9TuIgY6n_Q-uQRoMlEEesNSvqUsMhtzY.png?width=108&crop=smart&auto=webp&s=e7a9def323bc7d2cdfb2947d6ee9a677412df8a9', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/N8m4LslwoqQ9TuIgY6n_Q-uQRoMlEEesNSvqUsMhtzY.png?width=216&crop=smart&auto=webp&s=8ad4864f4b07014b2bd9d44895c9a7aef6f301fa', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/N8m4LslwoqQ9TuIgY6n_Q-uQRoMlEEesNSvqUsMhtzY.png?width=320&crop=smart&auto=webp&s=86956236be361ca55c5b13d8f1e4ad6ccc92e3ef', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/N8m4LslwoqQ9TuIgY6n_Q-uQRoMlEEesNSvqUsMhtzY.png?width=640&crop=smart&auto=webp&s=7f045ac2247491d54ab8fb21de2606be256ae0d7', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/N8m4LslwoqQ9TuIgY6n_Q-uQRoMlEEesNSvqUsMhtzY.png?width=960&crop=smart&auto=webp&s=f71176e8c303cabf1a65dd1403143b40ca105cbb', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/N8m4LslwoqQ9TuIgY6n_Q-uQRoMlEEesNSvqUsMhtzY.png?width=1080&crop=smart&auto=webp&s=b495924c789c25f73eca1374953806900babcd76', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/N8m4LslwoqQ9TuIgY6n_Q-uQRoMlEEesNSvqUsMhtzY.png?auto=webp&s=b94aa2cc1907fc6651fd4bca5275aa8bda598c39', 'width': 1200}, 'variants': {}}]}
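To put the tg128 column of the benchmark post above in perspective, a small helper converting a measured generation rate into wall-clock time for a reply. The two rates are copied from that table; the 512-token reply length is an arbitrary assumption for illustration:

```python
# Rough wall-clock time to generate `tokens` tokens at a measured tg rate.
def gen_seconds(tokens: int, tg_tok_per_s: float) -> float:
    return tokens / tg_tok_per_s

# tg128 rates (tokens/second) from the benchmark table above.
rates = {
    "Qwen3-Coder-30B-A3B-Instruct-IQ4_NL": 30.06,
    "EXAONE-4.0-32B-Q4_K_M": 2.88,
}
for name, rate in rates.items():
    print(f"{name}: {gen_seconds(512, rate):.0f} s for a 512-token reply")
```

This is why the post highlights MoE models: the dense 32B generates the same reply roughly ten times slower than the 30B-A3B MoE at a similar memory footprint.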
Alternative to Transformer architecture LLMs
5
I wanted to ask whether there are any other possible LLM architectures besides the Transformer. I need this for some light research purposes. I once saw a post on LinkedIn about people working on a different kind of architecture for LLMs, but I lost that post. If someone could list such alternatives, it would be very helpful.
2025-09-18T11:04:49
https://www.reddit.com/r/LocalLLaMA/comments/1nk58yc/alternative_to_transformer_architecture_llms/
Charming_Barber_3317
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nk58yc
false
null
t3_1nk58yc
/r/LocalLLaMA/comments/1nk58yc/alternative_to_transformer_architecture_llms/
false
false
self
5
null
Google Android RAG SDK – Quick Comparison Study
0
Last week I asked here if anyone knew of a reliable (and reputable) Android or iOS RAG SDK. I didn’t get much of a concrete response, maybe because there just aren’t many options out there yet. So, we went ahead and ran a quick comparison with Google’s Android RAG SDK: [https://ai.google.dev/edge/mediapipe/solutions/genai/rag/android](https://ai.google.dev/edge/mediapipe/solutions/genai/rag/android)

On the Lihua World dataset, Google’s SDK reached about ~30% accuracy, while VecML’s RAG SDK reproduced our earlier results (roughly 75-85%, depending on the context window size). The attached plot also shows the previous comparison, which was conducted on our cloud: [https://chat.vecml.com/](https://chat.vecml.com/)

That said, this comparison (30% versus 75%) might not be fully representative: from looking at the code, the current Google SDK release doesn’t seem optimized for performance yet. Still, we figured this info might be useful to share, since a lot of developers are probably on the lookout for solid RAG SDKs for Android/iOS/Windows/Mac.
2025-09-18T10:49:49
https://i.redd.it/otqdloa3lwpf1.png
DueKitchen3102
i.redd.it
1970-01-01T00:00:00
0
{}
1nk4z97
false
null
t3_1nk4z97
/r/LocalLLaMA/comments/1nk4z97/google_android_rag_sdk_quick_comparison_study/
false
false
default
0
{'enabled': True, 'images': [{'id': 'otqdloa3lwpf1', 'resolutions': [{'height': 61, 'url': 'https://preview.redd.it/otqdloa3lwpf1.png?width=108&crop=smart&auto=webp&s=4aceb6964690bb2ac99c95931f3e59efef2e7730', 'width': 108}, {'height': 123, 'url': 'https://preview.redd.it/otqdloa3lwpf1.png?width=216&crop=smart&auto=webp&s=4e3ec4c71d5d241553b549ba96b0f8578686e865', 'width': 216}, {'height': 182, 'url': 'https://preview.redd.it/otqdloa3lwpf1.png?width=320&crop=smart&auto=webp&s=b5c23901ce062f9761d594673c0f9b9935f1339d', 'width': 320}, {'height': 365, 'url': 'https://preview.redd.it/otqdloa3lwpf1.png?width=640&crop=smart&auto=webp&s=4606f1dc8b699b62db3b10c14ba8c34b89eb0a39', 'width': 640}], 'source': {'height': 498, 'url': 'https://preview.redd.it/otqdloa3lwpf1.png?auto=webp&s=d0c24234ff7c7358f454757395adbdc7ea85d502', 'width': 873}, 'variants': {}}]}
Which local LLM for Macbook Pro with M4 Pro - 48GB RAM
6
I want to run my first local LLM on my MacBook, but I'm very unsure which one to pick. I'll mainly use it for programming, but I want it to handle basic everyday stuff as well. I was deciding between Qwen3-Coder and the new Magistral Small 2509. Any help is appreciated!
2025-09-18T10:31:11
https://www.reddit.com/r/LocalLLaMA/comments/1nk4nna/which_local_llm_for_macbook_pro_with_m4_pro_48gb/
Forward-Field-5003
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nk4nna
false
null
t3_1nk4nna
/r/LocalLLaMA/comments/1nk4nna/which_local_llm_for_macbook_pro_with_m4_pro_48gb/
false
false
self
6
null
How to use openrouter GPT5 50% off?
0
Do we bring our own OpenAI key and use it directly, or buy OpenRouter credits? Is the 50% discount applicable either way?
2025-09-18T10:28:53
https://www.reddit.com/r/LocalLLaMA/comments/1nk4m9t/how_to_use_openrouter_gpt5_50_off/
Old_Variety8975
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nk4m9t
false
null
t3_1nk4m9t
/r/LocalLLaMA/comments/1nk4m9t/how_to_use_openrouter_gpt5_50_off/
false
false
self
0
null
Is the current SOTA VLM Gemini 2.5 Pro? Or are there better open source options?
1
Is the current SOTA VLM Gemini 2.5 Pro? Or are there better open source options?
2025-09-18T10:24:42
https://www.reddit.com/r/LocalLLaMA/comments/1nk4jrw/is_the_current_sota_vlm_gemini_25_pro_or_are/
LivingMNML
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nk4jrw
false
null
t3_1nk4jrw
/r/LocalLLaMA/comments/1nk4jrw/is_the_current_sota_vlm_gemini_25_pro_or_are/
false
false
self
1
null
Is fine-tuning a VLM just like fine-tuning any other model?
5
I am new to computer vision and building an app that gets sports highlights from videos. The accuracy of Gemini 2.5 Flash is ok but I would like to make it even better. Does fine-tuning a VLM work just like fine-tuning any other model?
2025-09-18T10:23:39
https://www.reddit.com/r/LocalLLaMA/comments/1nk4j47/is_finetuning_a_vlm_just_like_finetuning_any/
LivingMNML
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nk4j47
false
null
t3_1nk4j47
/r/LocalLLaMA/comments/1nk4j47/is_finetuning_a_vlm_just_like_finetuning_any/
false
false
self
5
null
The A.I Money Machine
1
[removed]
2025-09-18T10:06:52
https://copecart.com/us/products/5d815dad/p/vasudev1102007
lohat123
copecart.com
1970-01-01T00:00:00
0
{}
1nk48xg
false
null
t3_1nk48xg
/r/LocalLLaMA/comments/1nk48xg/the_ai_money_machine/
false
false
default
1
null
Uncensored llm for iphone 8?
0
Currently I'm using PocketPal, and I'm looking for an uncensored model that can run on an iPhone 8 (mostly for unlimited roleplaying). Any suggestions?
2025-09-18T10:06:06
https://www.reddit.com/r/LocalLLaMA/comments/1nk48gm/uncensored_llm_for_iphone_8/
Kiyumaa
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nk48gm
false
null
t3_1nk48gm
/r/LocalLLaMA/comments/1nk48gm/uncensored_llm_for_iphone_8/
false
false
self
0
null
Hala Technical Report: Building Arabic-Centric Instruction & Translation Models at Scale
7
A series of state-of-the-art nano and small scale Arabic language models. would appreciate an upvote [https://huggingface.co/papers/2509.14008](https://huggingface.co/papers/2509.14008)
2025-09-18T09:25:14
https://www.reddit.com/r/LocalLLaMA/comments/1nk3l5q/hala_technical_report_building_arabiccentric/
LowChance4561
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nk3l5q
false
null
t3_1nk3l5q
/r/LocalLLaMA/comments/1nk3l5q/hala_technical_report_building_arabiccentric/
false
false
self
7
{'enabled': False, 'images': [{'id': 'wSGNGJZbI6zXRztWW6RE5CZhHMIpNyuZojTUoLv9Ihk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/wSGNGJZbI6zXRztWW6RE5CZhHMIpNyuZojTUoLv9Ihk.png?width=108&crop=smart&auto=webp&s=c85d8071186ab45c465a6563ff474860b74c8488', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/wSGNGJZbI6zXRztWW6RE5CZhHMIpNyuZojTUoLv9Ihk.png?width=216&crop=smart&auto=webp&s=753a14b38a1654347456497f75902c4228fd7abb', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/wSGNGJZbI6zXRztWW6RE5CZhHMIpNyuZojTUoLv9Ihk.png?width=320&crop=smart&auto=webp&s=473c349f7756717b750a21503c39c3a3ec5b6b4b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/wSGNGJZbI6zXRztWW6RE5CZhHMIpNyuZojTUoLv9Ihk.png?width=640&crop=smart&auto=webp&s=0d3658006bde8235d00d6658e2db1ed63c095f5f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/wSGNGJZbI6zXRztWW6RE5CZhHMIpNyuZojTUoLv9Ihk.png?width=960&crop=smart&auto=webp&s=6577fba274e85e29da46ae64442232c1885494f0', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/wSGNGJZbI6zXRztWW6RE5CZhHMIpNyuZojTUoLv9Ihk.png?width=1080&crop=smart&auto=webp&s=ff60bb34eb49f8d49b0a054535eae73510c7523e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/wSGNGJZbI6zXRztWW6RE5CZhHMIpNyuZojTUoLv9Ihk.png?auto=webp&s=3a80ef550af366cd364cf28b6c0e62010fef7f36', 'width': 1200}, 'variants': {}}]}
How to create a game with sound locally
0
https://reddit.com/link/1nk3ihf/video/nxqb7v312wpf1/player What we did here: 1. Used Qwen3 14B or GPT-OSS 20B (which is very fast), or best of all Qwen3 30B Coder, to get a first preliminary Flappy Bird version. That comes out very basic, with a plain square as the flappy avatar (Qwen3 30B designs it better, but it's still a fairly minimal version). 2. We took and tested all the distilled versions from [https://huggingface.co/BasedBase](https://huggingface.co/BasedBase), who has done a great job with them (thank you). It turns out that [https://huggingface.co/BasedBase/Qwen3-30B-A3B-Thinking-2507-Deepseek-v3.1-Distill-V2-FP32](https://huggingface.co/BasedBase/Qwen3-30B-A3B-Thinking-2507-Deepseek-v3.1-Distill-V2-FP32), which mradermacher GGUF'd yesterday at my request ([https://huggingface.co/mradermacher/Qwen3-30B-A3B-Thinking-2507-Deepseek-v3.1-Distill-V2-FP32-GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-Thinking-2507-Deepseek-v3.1-Distill-V2-FP32-GGUF), thank you), can add some nice optimizations to the game. We noticed the sound is now much better. This is a novel use for local LLMs in general, which opens endless possibilities for AI applications in real life. I can personally confirm that all the arcade classics can now be produced (with a bit of persuasion) locally on affordable hardware. Worth mentioning that we used a Q4 LLM, so the quality can still be improved a lot. TL;DW (too long to watch): the coding results start at minute 5:04. If anyone needs the full code of the game, let me know and I can put it on GitHub.
2025-09-18T09:20:19
https://www.reddit.com/r/LocalLLaMA/comments/1nk3ihf/how_to_create_a_game_with_sound_locally/
Trilogix
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nk3ihf
false
null
t3_1nk3ihf
/r/LocalLLaMA/comments/1nk3ihf/how_to_create_a_game_with_sound_locally/
false
false
self
0
{'enabled': False, 'images': [{'id': 'FCIUKBOTjdQ71444WNH0tQIlJml3lRqtw3qLXVevIx0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/FCIUKBOTjdQ71444WNH0tQIlJml3lRqtw3qLXVevIx0.png?width=108&crop=smart&auto=webp&s=f7395624aa4a3d36c7f226681316d07780f8741c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/FCIUKBOTjdQ71444WNH0tQIlJml3lRqtw3qLXVevIx0.png?width=216&crop=smart&auto=webp&s=154763642adec1bb2dc3f8336e853a3f8f9d1f94', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/FCIUKBOTjdQ71444WNH0tQIlJml3lRqtw3qLXVevIx0.png?width=320&crop=smart&auto=webp&s=17f8e5c1890c9150b578c7b68fdcbbceb8ec50f8', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/FCIUKBOTjdQ71444WNH0tQIlJml3lRqtw3qLXVevIx0.png?width=640&crop=smart&auto=webp&s=57618fcee13950c18752c65544dd3c251243467e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/FCIUKBOTjdQ71444WNH0tQIlJml3lRqtw3qLXVevIx0.png?width=960&crop=smart&auto=webp&s=2b92a607fb270599aa573f0832f1f2e65c9463fa', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/FCIUKBOTjdQ71444WNH0tQIlJml3lRqtw3qLXVevIx0.png?width=1080&crop=smart&auto=webp&s=f391e0363d5595a034d4d7ec93289f35ef6ce9eb', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/FCIUKBOTjdQ71444WNH0tQIlJml3lRqtw3qLXVevIx0.png?auto=webp&s=5c4fcf47db721e1fe416f83d09f946c11a881102', 'width': 1200}, 'variants': {}}]}
[Reopen] Fully local data analysis assistant (plus new Model)
1
\[*I posted this content the day before yesterday, but due to internal issues, I had deleted it. I’m now republishing it here.*\] Hi community! Today I’m releasing an open-source, fully local data analysis assistant along with a lightweight LLM trained for it, called [**quelmap**](https://quelmap.com) and **Lightning-4b**. LLMs are amazing, but handing over all your data to a major LLM provider isn’t how it should be. Nowadays, data analysis has relied on huge context windows and very large models. Instead, we tried to see if we could cover most common analysis tasks with an efficient XML-based output format and GRPO training. It even works smoothly on my **M4 MacBook Air (16GB)**. **Basic Features** 📊 Data visualization 🚀 Table joins 📈 Run statistical tests 📂 Unlimited rows, analyze 30+ tables at once 🐍 Built-in Python sandbox 🦙 Ollama or LM Studio API integration Lightning-4b is trained specifically for quelmap, and it’s been accurate and stable in generating structured outputs and Python code—more consistent than gpt-oss-120b or even Qwen3-235B in simple analysis tasks on quelmap. You can check the training details and performance here: 👉 [https://www.quelmap.com/lightning-4b/](https://www.quelmap.com/lightning-4b/) It’s not meant for writing complex research reports or high-level business advice like Gemini-DeepResearch. But I hope it can be a helpful tool for privacy-conscious analysts and beginners who just want to explore or analyze their data safely. All details, installation instructions, and source code are here: 🔗 Github: [https://github.com/quelmap-inc/quelmap](https://github.com/quelmap-inc/quelmap) 🔗 HuggingFace: [https://huggingface.co/quelmap/Lightning-4b](https://huggingface.co/quelmap/Lightning-4b) If people find this useful, I’d love to keep working on this project (agent mode, new models and more). Let me know what you think—I’d love to hear it.
2025-09-18T08:08:56
https://i.redd.it/t3dtxrm9svpf1.gif
mshintaro777
i.redd.it
1970-01-01T00:00:00
0
{}
1nk2g6g
false
null
t3_1nk2g6g
/r/LocalLLaMA/comments/1nk2g6g/reopen_fully_local_data_analysis_assistant_plus/
false
false
https://b.thumbs.redditm…Sqn7xNN74Hjw.jpg
1
{'enabled': True, 'images': [{'id': 'hL1NQnaqO9qRrhstKdNOdSANF7JMXldDGGZro4SoSH4', 'resolutions': [{'height': 59, 'url': 'https://preview.redd.it/t3dtxrm9svpf1.gif?width=108&crop=smart&format=png8&s=81c83d56348b8ca2f1e98d54ff41144742b3ab9b', 'width': 108}, {'height': 119, 'url': 'https://preview.redd.it/t3dtxrm9svpf1.gif?width=216&crop=smart&format=png8&s=548863341ca33e2dbc68ae9fd525692f3369c7bd', 'width': 216}, {'height': 177, 'url': 'https://preview.redd.it/t3dtxrm9svpf1.gif?width=320&crop=smart&format=png8&s=026a54e63f1eba8486f45cfd4376b72e01c705d4', 'width': 320}, {'height': 354, 'url': 'https://preview.redd.it/t3dtxrm9svpf1.gif?width=640&crop=smart&format=png8&s=8f61463ef4569084281a8f1399d45f22732e8fb6', 'width': 640}, {'height': 531, 'url': 'https://preview.redd.it/t3dtxrm9svpf1.gif?width=960&crop=smart&format=png8&s=8685ee1f2a314b998cd4a3ebc0788f9078135ef2', 'width': 960}, {'height': 598, 'url': 'https://preview.redd.it/t3dtxrm9svpf1.gif?width=1080&crop=smart&format=png8&s=299052e1ae05fe1ba6535217a8a9f8f751a9a71b', 'width': 1080}], 'source': {'height': 1894, 'url': 'https://preview.redd.it/t3dtxrm9svpf1.gif?format=png8&s=99be38aa9ef87a085882c645f38360fead64c6fd', 'width': 3420}, 'variants': {'gif': {'resolutions': [{'height': 59, 'url': 'https://preview.redd.it/t3dtxrm9svpf1.gif?width=108&crop=smart&s=3bfbb7a371e479569a7ec43039defd973f68ba1d', 'width': 108}, {'height': 119, 'url': 'https://preview.redd.it/t3dtxrm9svpf1.gif?width=216&crop=smart&s=ed754245ab13dc577f4d644cece1e9306db50dfd', 'width': 216}, {'height': 177, 'url': 'https://preview.redd.it/t3dtxrm9svpf1.gif?width=320&crop=smart&s=8208348ad160893d395789b7a1b13ef2682fa3e8', 'width': 320}, {'height': 354, 'url': 'https://preview.redd.it/t3dtxrm9svpf1.gif?width=640&crop=smart&s=6ed54bee7e09a26f9d7dbc27f65c2204e3c00120', 'width': 640}, {'height': 531, 'url': 'https://preview.redd.it/t3dtxrm9svpf1.gif?width=960&crop=smart&s=c58efa064758575dd8523070cf071d293cc6594b', 'width': 960}, {'height': 598, 'url': 
'https://preview.redd.it/t3dtxrm9svpf1.gif?width=1080&crop=smart&s=074d496792829f82a323affa2e5b3f79a16a2474', 'width': 1080}], 'source': {'height': 1894, 'url': 'https://preview.redd.it/t3dtxrm9svpf1.gif?s=f2f6d479ba3120640d70285ea5ebb79c00cbb154', 'width': 3420}}, 'mp4': {'resolutions': [{'height': 59, 'url': 'https://preview.redd.it/t3dtxrm9svpf1.gif?width=108&format=mp4&s=79b7f80f0008f4c064f31d51b9bef7deeae4a2e8', 'width': 108}, {'height': 119, 'url': 'https://preview.redd.it/t3dtxrm9svpf1.gif?width=216&format=mp4&s=877c1b8ff7340caa7b37122c42303d83d7ea4fba', 'width': 216}, {'height': 177, 'url': 'https://preview.redd.it/t3dtxrm9svpf1.gif?width=320&format=mp4&s=94626e81dd82ca6642a56dfff0beae12fd9b4916', 'width': 320}, {'height': 354, 'url': 'https://preview.redd.it/t3dtxrm9svpf1.gif?width=640&format=mp4&s=71f2f28637f0186843a65094189adc2daa24a98e', 'width': 640}, {'height': 531, 'url': 'https://preview.redd.it/t3dtxrm9svpf1.gif?width=960&format=mp4&s=599b887b532dcf2ffb04b48a70197e7df31c97ae', 'width': 960}, {'height': 598, 'url': 'https://preview.redd.it/t3dtxrm9svpf1.gif?width=1080&format=mp4&s=3f527fede770a17ec063659f1fcf947738b8bd6e', 'width': 1080}], 'source': {'height': 1894, 'url': 'https://preview.redd.it/t3dtxrm9svpf1.gif?format=mp4&s=49afcf98bf44e9f962f0b4882cd2f842add0b2ab', 'width': 3420}}}}]}
MLX model NOT downloading on mobile/cellular data
0
Hi, a bit of an odd one I'm facing. I have an iOS app I'm working on right now, which loads an LLM on an iPhone. When my app loads on an iPhone, it starts downloading the LLM model, but only when the phone is on a Wifi connection. When the phone is not connected to Wifi, even though it has stable mobile data connectivity, the model doesn't download, and the error I see thrown is: offlineModeError("No files available locally for this repository") I have tried everything but haven't been able to make this work. Any tips would be appreciated. PS: I have done the obvious of allowing mobile data for the app.
2025-09-18T08:07:33
https://www.reddit.com/r/LocalLLaMA/comments/1nk2fga/mlx_model_not_downloading_on_mobilecellular_data/
sylvesterdsouza
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nk2fga
false
null
t3_1nk2fga
/r/LocalLLaMA/comments/1nk2fga/mlx_model_not_downloading_on_mobilecellular_data/
false
false
self
0
null
opinion on this config machine for local LLM?
0
I'm not planning on gaming, and I'm not entirely sure about the finer differences in storage and memory components. I'm sort of leaning towards either dual 7900 XTX or one 5090, and I'm also not sure how many fans to use https://preview.redd.it/w9wqfrdjrvpf1.png?width=1041&format=png&auto=webp&s=44e708ea7ba9ce1ca2a193ec7fcde3af313c86ee
2025-09-18T08:05:12
https://www.reddit.com/r/LocalLLaMA/comments/1nk2e4b/opinion_on_this_config_machine_for_local_llm/
emaayan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nk2e4b
false
null
t3_1nk2e4b
/r/LocalLLaMA/comments/1nk2e4b/opinion_on_this_config_machine_for_local_llm/
false
false
https://b.thumbs.redditm…YtqvU0AXEqys.jpg
0
null
Single vs double GPU: why was it worse?
2
Hey! I was playing around with AI in LM Studio. My wife has the same GPU as me, so I tried adding both to my PC. Here’s how it went in LM Studio (hope posting this here is fine). https://preview.redd.it/l8mtn1nipvpf1.png?width=1082&format=png&auto=webp&s=a079bc9c7afb66bb67f8f855e05f649125e6eca9 And I tried the ‘new’ GPT-OSS 20B model with the default settings. With both GPUs enabled: https://preview.redd.it/zkvlql1opvpf1.png?width=423&format=png&auto=webp&s=d1fb2638cdcc4c38ec5e70dea9bbe821a63735fc On a single GPU: https://preview.redd.it/em8ldmnppvpf1.png?width=450&format=png&auto=webp&s=e4e996c69e6be9e906ee4ae0429f424601ad4da7 for the same prompt. I know it’s normal not to get identical results with the same prompt, but +1.5 s to first token and a 15 tok/s difference seems like a lot to me. (I did a bit more testing and got the same results.) This still feels a bit off. Any ideas to help explain or understand why?
2025-09-18T07:58:58
https://www.reddit.com/r/LocalLLaMA/comments/1nk2an9/single_vs_double_gpu_why_was_it_worst/
Manoelnb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nk2an9
false
null
t3_1nk2an9
/r/LocalLLaMA/comments/1nk2an9/single_vs_double_gpu_why_was_it_worst/
false
false
https://b.thumbs.redditm…2xPLuXlIwwLY.jpg
2
null
I want help to make my personal AI
0
I want an easy, no-code way to make an AI for my personal use only, on my PC, in which I can integrate the AIs I want to add, and upload my own documents, PDFs, etc., so that the AI answers from my uploads.
2025-09-18T07:31:11
https://www.reddit.com/r/LocalLLaMA/comments/1nk1vju/i_want_a_help_to_make_my_personal_ai/
Leading-Sugar6593
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nk1vju
false
null
t3_1nk1vju
/r/LocalLLaMA/comments/1nk1vju/i_want_a_help_to_make_my_personal_ai/
false
false
self
0
null
A first stab at packaging llama.cpp in a performance-optimized manner
31
llama.cpp has been a real enabler to get access to LLMs locally. However, one feedback that has come up regularly is that the package isn't easy to install, and, especially so if trying to do so in a performance-optimized manner taking advantage of one's hardware. There's a very active discussion on the topic over on llama.cpp's GitHub ([#15313](https://github.com/ggml-org/llama.cpp/discussions/15313)). We've taken a first stab at implementing a performance-optimized packaging solution, so that it's easily installable and takes advantage of the feature flags your hardware provides (see attached pic). While still a WIP, it's working on Linux (cpu/cuda) now, we'll follow-up with Metal, and finally Windows. The idea is to build the basis of a system that is easy to be iterated upon by the community.
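A packaging layer like this typically starts by probing what the host CPU supports and then selecting the best matching prebuilt binary. A minimal sketch of that idea (Linux-only, reading /proc/cpuinfo; the build names are illustrative, not this project's actual package names):

```python
def cpu_flags():
    """Return the CPU feature flags reported by the Linux kernel (empty set elsewhere)."""
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
    except OSError:
        pass
    return set()

def best_cpu_build(flags):
    """Pick the most optimized CPU build the host can run, most capable first."""
    for flag, build in (("avx512f", "avx512"), ("avx2", "avx2"), ("avx", "avx")):
        if flag in flags:
            return build
    return "generic"

build = best_cpu_build(cpu_flags())
```

A real installer would do the same kind of detection for CUDA/ROCm/Metal before choosing which wheel to pull.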
2025-09-18T07:28:24
https://i.redd.it/xxno8cwokvpf1.png
jikkii
i.redd.it
1970-01-01T00:00:00
0
{}
1nk1tz2
false
null
t3_1nk1tz2
/r/LocalLLaMA/comments/1nk1tz2/a_first_stab_at_packaging_llamacpp_in_a/
false
false
default
31
{'enabled': True, 'images': [{'id': 'xxno8cwokvpf1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/xxno8cwokvpf1.png?width=108&crop=smart&auto=webp&s=3ad333c28b5dc70483c7e0b55d0be1eb74f7d653', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/xxno8cwokvpf1.png?width=216&crop=smart&auto=webp&s=e215b794d5bdff555e0cd6bae046f99e24fc8034', 'width': 216}, {'height': 179, 'url': 'https://preview.redd.it/xxno8cwokvpf1.png?width=320&crop=smart&auto=webp&s=7f71964586b2d8cdd3d78f484f7af54c2e501e90', 'width': 320}, {'height': 359, 'url': 'https://preview.redd.it/xxno8cwokvpf1.png?width=640&crop=smart&auto=webp&s=c56cc3e0047565fcd6f49be7615fe04e6c509cb0', 'width': 640}, {'height': 539, 'url': 'https://preview.redd.it/xxno8cwokvpf1.png?width=960&crop=smart&auto=webp&s=2f8b3fb7c3d32d578c0d522769e68519254c5206', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/xxno8cwokvpf1.png?width=1080&crop=smart&auto=webp&s=752c8fec65e69d1b50d7e4b2e1bc5cc61d11e1d0', 'width': 1080}], 'source': {'height': 1228, 'url': 'https://preview.redd.it/xxno8cwokvpf1.png?auto=webp&s=399ed63aa8241a20a9669fe06c476abfbcca0266', 'width': 2184}, 'variants': {}}]}
I just made VRAM approximation tool for LLM
89
I built a simple tool to estimate how much memory is needed to run GGUF models locally, based on your desired maximum context size. You just paste the direct download URL of a GGUF model (for example, from Hugging Face), enter the context length you plan to use, and it will give you an approximate memory requirement. It’s especially useful if you're trying to figure out whether a model will fit in your available VRAM or RAM, or when comparing different quantization levels like Q4_K_M vs Q8_0. The tool is completely free and open-source. You can try it here: https://model-memory.kolosalai.com And check out the code on GitHub: https://github.com/KolosalAI/model-memory-calculator I'd really appreciate any feedback, suggestions, or bug reports if you decide to give it a try.
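For reference, the rough arithmetic behind such an estimate can be sketched like this (a simplified model: real usage also depends on architecture details and the engine's own buffers, and the example parameters are illustrative, not read from a real GGUF):

```python
def estimate_memory_gb(params_billions, bits_per_weight, n_layers,
                       n_kv_heads, head_dim, context_len,
                       kv_bytes=2, overhead_gb=0.5):
    """Rough memory estimate for a quantized model: weights + KV cache + overhead."""
    # Weights: parameter count times the quantized bits per weight.
    weights_gb = params_billions * 1e9 * bits_per_weight / 8 / 1e9
    # KV cache: 2 tensors (K and V) per layer, per KV head, per token position.
    kv_gb = 2 * n_layers * n_kv_heads * head_dim * context_len * kv_bytes / 1e9
    return weights_gb + kv_gb + overhead_gb

# e.g. a 7B model at ~4.5 bits/weight (Q4_K_M-ish), 32 layers,
# 8 KV heads of dim 128 (GQA), 8k context, fp16 cache:
est = estimate_memory_gb(7, 4.5, 32, 8, 128, 8192)   # ~5.5 GB
```

This is also why the context-length input matters so much: the weights term is fixed, but the KV-cache term grows linearly with context.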
2025-09-18T07:09:12
https://www.reddit.com/r/LocalLLaMA/comments/1nk1jbc/i_just_made_vram_approximation_tool_for_llm/
SmilingGen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nk1jbc
false
null
t3_1nk1jbc
/r/LocalLLaMA/comments/1nk1jbc/i_just_made_vram_approximation_tool_for_llm/
false
false
self
89
null
Qwen CLI is great (2,000 free requests a day)
0
https://preview.redd.it/p1881o8lgvpf1.png?width=1920&format=png&auto=webp&s=ef7df33b7916e534e3b7264d8a83fd01373ea5ea Pro tip: keep the context under 95%, or a maximum of 90%, for awesome results
2025-09-18T07:04:53
https://www.reddit.com/r/LocalLLaMA/comments/1nk1gym/qwen_cli_is_great_2000_free_requests_a_day/
Adventurous-Slide776
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nk1gym
false
null
t3_1nk1gym
/r/LocalLLaMA/comments/1nk1gym/qwen_cli_is_great_2000_free_requests_a_day/
false
false
nsfw
0
null
GGUF model memory approximation tool
1
[removed]
2025-09-18T06:54:42
https://www.kolosal.ai/memory-calculator
Expensive_Ad_1945
kolosal.ai
1970-01-01T00:00:00
0
{}
1nk1avl
false
null
t3_1nk1avl
/r/LocalLLaMA/comments/1nk1avl/gguf_model_memory_approximation_tool/
false
false
default
1
null
Any resources on implementing “memory” like ChatGPT
17
I’m trying to understand how systems like ChatGPT handle their “memory” feature. I don’t mean RAG , where documents are chunked and queried, but more of a lightweight, vague memory that stores facts and surfaces them only when relevant in later conversations. Is there any blog, paper, or open-source implementation that explains how to design and implement something like this? Basically: • How to decide what to store vs ignore • How to retrieve only when it’s contextually useful • How to keep it lightweight instead of doing full-blown vector DB lookups for everything Would love to dive deeper if anyone has resources, papers, or even experimental repos!
2025-09-18T06:44:50
https://www.reddit.com/r/LocalLLaMA/comments/1nk1504/any_resources_on_implementing_memory_like_chatgpt/
DataScientia
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nk1504
false
null
t3_1nk1504
/r/LocalLLaMA/comments/1nk1504/any_resources_on_implementing_memory_like_chatgpt/
false
false
self
17
null
Why do mobile AI inference engines only optimize for high-end phones?
1
Every time a new post about running AI on phones drops, the benchmarks are on high-end phones with top chips. 83% of my app users do not own high-end phones, so who exactly are they building for? It kinda makes the innovations meaningless. Maybe they should stew a bit more. I feel it's all for show. My app used 1B models and most phones struggled with this. [ONNX](https://github.com/onnx/onnx), [Executorch](https://github.com/pytorch/executorch), and [LiteRT](https://github.com/google-ai-edge/LiteRT) were not fast under real-world conditions, at least compared to Llama.cpp, but Llama.cpp still struggled with a 1B model on budget phones, which is understandable given the focus on computers with GPUs, which are much better for business (Zoom, Slack, and enterprise platforms run on computers). Google AI Edge and Apple's Foundation frameworks are good for Android and iOS respectively, great for Kotlin & Swift devs, but I'm a Flutter/React dev and cannot maintain multiple AI workflow codebases. They are still slow compared to Llama.cpp anyway. It's much less work to use OpenRouter. I tried Llama.cpp, but users complained about battery drain and heat-up, especially with GPUs; my app lost a star on the app stores for this. But I do use it for my desktop apps. There are new ones like Nexa, which depends on llama.cpp but has recently been going hard on NPUs. There's another called Uzu, which is just another MLX. I feel these new ones are geared towards businesses, not the community; good for them. I tried [Cactus](https://github.com/cactus-compute/cactus), which was supposedly built for all smartphones and is cross-platform, and it seems to be the fastest and most energy-efficient, but it does not support some models unless I'm willing to implement them myself using Cactus Graph. It seems to be one guy writing everything; he's very good, but DeepMind, Meta, or xAI should buy them and scale its development. OpenAI too, since they are going for AI-native hardware.
Sorry for sounding dismissive, and I'm not trying to put anyone down or anything; maybe this is just me venting about losing app store stars, but seriously, I want to know: why do people even bother? Please educate me!
2025-09-18T06:22:01
https://www.reddit.com/r/LocalLLaMA/comments/1nk0rgz/why_does_mobile_ai_inference_engines_only/
AkhtarHannah
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nk0rgz
false
null
t3_1nk0rgz
/r/LocalLLaMA/comments/1nk0rgz/why_does_mobile_ai_inference_engines_only/
false
false
self
1
null
Why are mobile AI projects focused on high-end phones? Most phones in the wild do not have NPU and Modern Chips etc.
1
[removed]
2025-09-18T06:17:48
https://www.reddit.com/r/LocalLLaMA/comments/1nk0p2e/why_are_mobile_ai_projects_focused_on_highend/
AkhtarHannah
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nk0p2e
false
null
t3_1nk0p2e
/r/LocalLLaMA/comments/1nk0p2e/why_are_mobile_ai_projects_focused_on_highend/
false
false
self
1
null
Question about models updates
1
When new models are released, I download them and put them in my repository, keeping the models I like for my workflow. Then I read that some model got improved or updated!? Does that mean I need to download the new model file?
2025-09-18T06:08:29
https://www.reddit.com/r/LocalLLaMA/comments/1nk0jjr/question_about_models_updates/
Right_Weird9850
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nk0jjr
false
null
t3_1nk0jjr
/r/LocalLLaMA/comments/1nk0jjr/question_about_models_updates/
false
false
self
1
null
VoxCPM 0.5B : Tokenizer-Free TTS and Voice Cloning
40
It runs on MiniCPM-4 (0.5B params) and actually sounds expressive: prosody flows naturally, and it can clone a voice from just a short sample. It’s also practical: real-time streaming with RTF \~0.17 on a consumer GPU (RTX 4090). Trained on 1.8M hours of English + Chinese data, and the best part: fully open-sourced under Apache-2.0. HuggingFace : [https://huggingface.co/openbmb/VoxCPM-0.5B](https://huggingface.co/openbmb/VoxCPM-0.5B) Video : [https://youtu.be/HO3tuuEuhTw?si=2iFA5ApaCPD6yUWj](https://youtu.be/HO3tuuEuhTw?si=2iFA5ApaCPD6yUWj)
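For context, RTF (real-time factor) is just synthesis time divided by audio duration, so 0.17 means the model generates speech roughly 6x faster than it plays back:

```python
def real_time_factor(processing_seconds, audio_seconds):
    """RTF = synthesis time / audio duration; below 1.0 means faster than real time."""
    return processing_seconds / audio_seconds

# e.g. 1.7 s of compute to produce 10 s of speech:
rtf = real_time_factor(1.7, 10.0)   # 0.17
headroom = 1 / rtf                  # ~5.9x real time, comfortable for streaming
```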
2025-09-18T05:31:23
https://www.reddit.com/r/LocalLLaMA/comments/1njzxmx/voxcpm_05b_tokenizerfree_tts_and_voice_cloning/
Technical-Love-8479
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1njzxmx
false
null
t3_1njzxmx
/r/LocalLLaMA/comments/1njzxmx/voxcpm_05b_tokenizerfree_tts_and_voice_cloning/
false
false
self
40
{'enabled': False, 'images': [{'id': 'r3qnehuYhIo41bAc9p8n4efqIezTbTJqzszutOT9598', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/r3qnehuYhIo41bAc9p8n4efqIezTbTJqzszutOT9598.png?width=108&crop=smart&auto=webp&s=f4de9044cf40d797de876cb88486cf64eeb751b2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/r3qnehuYhIo41bAc9p8n4efqIezTbTJqzszutOT9598.png?width=216&crop=smart&auto=webp&s=6cc73426288ce8f26d075271d1c82226fafaf5a4', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/r3qnehuYhIo41bAc9p8n4efqIezTbTJqzszutOT9598.png?width=320&crop=smart&auto=webp&s=b89517445ea0178739330ac6d93e413de703f8b9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/r3qnehuYhIo41bAc9p8n4efqIezTbTJqzszutOT9598.png?width=640&crop=smart&auto=webp&s=c21b8a7fb420d7443519db438480fdc9bd7c71a4', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/r3qnehuYhIo41bAc9p8n4efqIezTbTJqzszutOT9598.png?width=960&crop=smart&auto=webp&s=4605695ffd89b219463460aae2356d6717006600', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/r3qnehuYhIo41bAc9p8n4efqIezTbTJqzszutOT9598.png?width=1080&crop=smart&auto=webp&s=0635b1fc4300acb882f8303ac96987fdf349b275', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/r3qnehuYhIo41bAc9p8n4efqIezTbTJqzszutOT9598.png?auto=webp&s=9cf3fd16647e9f410c5cf3b868b320962ecd69d9', 'width': 1200}, 'variants': {}}]}
Need help
0
The idea is to build a multilingual chatbot that can: educate rural & semi-urban populations about preventive healthcare, spread awareness about disease symptoms, provide vaccination schedules & reminders, and work on low internet / SMS / IVR support. We're a 6-member team (Frontend, Backend, ML, Data, Testing, Presentation). Right now, we're exploring AI tools, APIs & frameworks that can help us build this efficiently. Could you suggest: 1. Best NLP/LLM tools (for a multilingual chatbot) 2. Speech-to-text / text-to-speech APIs (preferably free or low-cost for Hindi & regional languages) 3. Any open-source healthcare datasets / APIs for preventive care & vaccination 4. Tools for SMS/IVR integration in rural areas 5. Anything unique we can add to make our project stand out. We want to keep it simple for a prototype, but scalable for real-world impact. Any suggestions or tool recommendations would be super helpful. Thanks in advance!
2025-09-18T04:42:27
https://www.reddit.com/r/LocalLLaMA/comments/1njz30o/need_help/
Alternative-Ruin528
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1njz30o
false
null
t3_1njz30o
/r/LocalLLaMA/comments/1njz30o/need_help/
false
false
self
0
null
Help: looking for a fast web search option with chat
6
Hey y'all. So I tried Ollama web search and was very impressed with how web searches were implemented in the reasoning chain. I've tried a few alternatives and noticed some caveats for my use case: 1. Openwebui web search via tool use: unreliable and it seems like the community made tools are less stable. 1.b openwebui via web search: very complete but extremely slow. 2. Perplexica: fast results, but no chat results. 3. Ollama: great searching, but no webgui accounts so I can serve multiple people in my household. Any suggestions? I need it to support accounts like openwebui but need it to search fast.
2025-09-18T03:19:10
https://www.reddit.com/r/LocalLLaMA/comments/1njxjy5/help_looking_for_a_fast_web_search_option_with/
My_Unbiased_Opinion
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1njxjy5
false
null
t3_1njxjy5
/r/LocalLLaMA/comments/1njxjy5/help_looking_for_a_fast_web_search_option_with/
false
false
self
6
null
Made a pre-flight check for RAG projects - thoughts?
2
I've been seeing a lot of RAG projects fail for predictable reasons (structured data, calculation queries, etc), so I built a tool that analyzes your docs/queries upfront to predict if RAG will actually work. It's basically a compatibility checker that tells you: \- If your documents will work with RAG (tables/Excel = bad) \- If your queries are RAG-compatible (math = impossible) \- Rough cost estimates GitHub: [https://github.com/ragnostics/ragnostics-tool](https://github.com/ragnostics/ragnostics-tool) The tool is rough and probably too pessimistic. I'm wondering: 1. Is this actually useful or am I solving a non-problem? 2. What other failure patterns should it check for? 3. Are my assumptions about RAG limitations outdated? There's a paid version with more features, but honestly I'm more interested in whether the core concept is even valuable. Would you use something like this before starting a RAG project?
2025-09-18T03:08:30
https://www.reddit.com/r/LocalLLaMA/comments/1njxcmi/made_a_preflight_check_for_rag_projects_thoughts/
me_z
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1njxcmi
false
null
t3_1njxcmi
/r/LocalLLaMA/comments/1njxcmi/made_a_preflight_check_for_rag_projects_thoughts/
false
false
self
2
{'enabled': False, 'images': [{'id': 'EbX07_n5-VvYOa4v6R-fJLLlgpCdmiSQ2SQ6Jd3NxYY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EbX07_n5-VvYOa4v6R-fJLLlgpCdmiSQ2SQ6Jd3NxYY.png?width=108&crop=smart&auto=webp&s=c119ec9dd6773e7c32c26e700582508718852e0c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/EbX07_n5-VvYOa4v6R-fJLLlgpCdmiSQ2SQ6Jd3NxYY.png?width=216&crop=smart&auto=webp&s=69b4627da2d7230b56e2a7d39725958e5a555b3f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/EbX07_n5-VvYOa4v6R-fJLLlgpCdmiSQ2SQ6Jd3NxYY.png?width=320&crop=smart&auto=webp&s=253371e2660929932594ea8556605e362e69d595', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/EbX07_n5-VvYOa4v6R-fJLLlgpCdmiSQ2SQ6Jd3NxYY.png?width=640&crop=smart&auto=webp&s=88d59806acca3c80bdcae5e4ad0acf0571255020', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/EbX07_n5-VvYOa4v6R-fJLLlgpCdmiSQ2SQ6Jd3NxYY.png?width=960&crop=smart&auto=webp&s=08dad258cb1e30ac7201cb7c257ad127b7b6b6d3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/EbX07_n5-VvYOa4v6R-fJLLlgpCdmiSQ2SQ6Jd3NxYY.png?width=1080&crop=smart&auto=webp&s=5a251532c6b24444a0d99ae8cab9401e3e294c0b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/EbX07_n5-VvYOa4v6R-fJLLlgpCdmiSQ2SQ6Jd3NxYY.png?auto=webp&s=38767368ec568dd1c78b1081536890e3f444401c', 'width': 1200}, 'variants': {}}]}
What’s the training cost for models like Qwen3 Coder 30B, and is the code for training it open source or closed source?
8
Is it also possible to grab qwen3 coder 4b and train it again on more and new data?
2025-09-18T02:40:19
https://www.reddit.com/r/LocalLLaMA/comments/1njws3n/whats_the_training_cost_for_models_like_qwen3/
NoFudge4700
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1njws3n
false
null
t3_1njws3n
/r/LocalLLaMA/comments/1njws3n/whats_the_training_cost_for_models_like_qwen3/
false
false
self
8
null
Google's paper, SLED, seems to improve factuality with (all? Most?) LLMs at only a 4% speed penalty
83
https://research.google/blog/making-llms-more-accurate-by-using-all-of-their-layers/ This paper, put out a year or so ago and referenced by today's blog post, shows a method for decoding using the weighted average of every layer's logits. It improves factuality over DoLa (which itself improves over standard sampling) by anywhere from 2-16%, with only a 4% hit to speed! I'm surprised I haven't seen this here, since it seems like it shouldn't be too bad to implement in something like vLLM or llama.cpp, and it seems to work for many different models.
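The core idea (decoding from a weighted average of every layer's logits rather than only the last layer's) can be sketched in a few lines. A toy NumPy sketch, using random stand-in logits and a simplified agreement-based weighting rather than the paper's exact scheme:

```python
import numpy as np

# Toy stand-ins: per-layer logits over a 5-token vocabulary. In SLED,
# each intermediate layer's hidden state is projected through the shared
# output head to get "early-exit" logits for that layer.
rng = np.random.default_rng(0)
num_layers, vocab = 4, 5
layer_logits = rng.normal(size=(num_layers, vocab))

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Weight each layer by agreement (negative KL divergence) with the final
# layer's distribution -- a simplified stand-in for the paper's weighting --
# then decode from the weighted average of all layers' logits.
final = softmax(layer_logits[-1])
agreement = np.array([-np.sum(final * np.log(final / softmax(l)))
                      for l in layer_logits])
weights = softmax(agreement)                  # final layer gets the top weight
mixed_logits = (weights[:, None] * layer_logits).sum(axis=0)
next_token = int(np.argmax(mixed_logits))
```

The only extra cost per token is one softmax and one weighted sum over the cached per-layer logits, which is consistent with the small reported speed penalty.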
2025-09-18T02:33:14
https://www.reddit.com/r/LocalLLaMA/comments/1njwmtl/googles_paper_sled_seems_to_improve_factuality/
laser_man6
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1njwmtl
false
null
t3_1njwmtl
/r/LocalLLaMA/comments/1njwmtl/googles_paper_sled_seems_to_improve_factuality/
false
false
self
83
{'enabled': False, 'images': [{'id': 'Xfy8b5oz8xAgNpbj0L9Mmjzxactj5HdaKRFOmBPu0YE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Xfy8b5oz8xAgNpbj0L9Mmjzxactj5HdaKRFOmBPu0YE.jpeg?width=108&crop=smart&auto=webp&s=e85522ec0f6b9c59a8434a90d2ecebe8c2d71652', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/Xfy8b5oz8xAgNpbj0L9Mmjzxactj5HdaKRFOmBPu0YE.jpeg?width=216&crop=smart&auto=webp&s=7456a0a4ebd37982129042b9b4aaa1a14401a280', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/Xfy8b5oz8xAgNpbj0L9Mmjzxactj5HdaKRFOmBPu0YE.jpeg?width=320&crop=smart&auto=webp&s=0b4b0f3f5d7fb66280168c071659b8dfbc9f2f75', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/Xfy8b5oz8xAgNpbj0L9Mmjzxactj5HdaKRFOmBPu0YE.jpeg?width=640&crop=smart&auto=webp&s=c9dad5b13e20f57d64f5fc0bbc7415c9f4186b1d', 'width': 640}], 'source': {'height': 420, 'url': 'https://external-preview.redd.it/Xfy8b5oz8xAgNpbj0L9Mmjzxactj5HdaKRFOmBPu0YE.jpeg?auto=webp&s=722aaac4c4cb8a58930bb43bac788a1400ae000c', 'width': 800}, 'variants': {}}]}
Every SOTA on its own data
24
Feels like every new RAG paper shows *huge gains*… but always on their own curated dataset. Once you swap in messy PDFs, private notes, or latency-sensitive use cases, the story changes fast. Anyone here actually compared different RAG flavors side by side? (multi-hop vs. rerankers, retrieval-aug agents vs. lightweight hybrids, etc.) What did you find in practice — stability, speed, or truthfulness? Would love to hear war stories from real deployments, not just benchmark tables.
2025-09-18T02:28:54
https://www.reddit.com/r/LocalLLaMA/comments/1njwjmd/every_sota_on_its_own_data/
Cheryl_Apple
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1njwjmd
false
null
t3_1njwjmd
/r/LocalLLaMA/comments/1njwjmd/every_sota_on_its_own_data/
false
false
self
24
null
When AI admits it doesn't know: the surprising behavior of Magistral-small-2509 in a soccer quiz contest
49
I was surprised by something about magistral-small-2509. Let's say I have a little personal test that consists of a kind of alphabetical wheel with soccer questions. They are usually historical and somewhat obscure questions. Normally, I don't get 100% correct in many LLMs, although in some I come close. The thing is, magistral came very close to 100%, but what really surprised me is that in the chain of reasoning, it couldn't find the correct answer and decided to go for “pass,” which is the wild card it can use if it doesn't know the answer. In other tests with other LLMs, I don't usually see this; they are very reluctant to admit that they don't know the answer. However, I know I need to do more testing, but I thought it was interesting to share. Here is the piece of the chain of thought (Note: My prompt is in Spanish, which is my native language, which is why the chain of reasoning occurs in Spanish). `K: Ex futbolista alemán de doble nacionalidad panameña y brasileña que jugó como delantero en clubes como el Stuttgart o el Schalke 04.` `Pienso en jugadores alemanes con doble nacionalidad. ¿Klodins? No, no me suena. ¿Kalle? No. ¿Klaassen? No es alemán. ¿Klose? No tiene doble nacionalidad panameña y brasileña. ¿Kroos? No. ¿Kahn? No. ¿Khedira? No. ¿Kross? No me suena.` `Tal vez sea Klose, pero no encaja la doble nacionalidad. ¿Kluivert? No es alemán. ¿Kun? Kun Agüero es argentino, no alemán. ¿Kolasinac? No es alemán. ¿Kimmich? No tiene esa doble nacionalidad.` `¿Podría ser Kaka? Pero Kaka es brasileño, no alemán. ¿Klarke? No me suena. ¿Keller? No me parece que tenga doble nacionalidad panameña y brasileña.` `Tal vez sea Kevin Kurányi. Es alemán, pero no estoy seguro de la doble nacionalidad. ¿Kevin Prince Boateng? Es alemán con nacionalidad ghanesa, no panameña y brasileña. ¿Klose? No encaja.` `¿Podría ser Klaus? No me suena. ¿Kopila? No me suena. ¿Kramaric? 
Croata, no alemán.` `Si no se me ocurre, puedo decir "Pasapalabra".` `Pero antes de rendirme, recuerdo que hay un jugador llamado Klose, pero no encaja. Otro nombre: Kluivert es neerlandés. ¿Kroos? No.` `Espera, ¿podría ser Kevin Dyer? No me suena. ¿Kevin Kampl? Es esloveno.` `Creo que no lo sé. Pasapalabra para K.`
2025-09-18T01:57:33
https://www.reddit.com/r/LocalLLaMA/comments/1njvw9b/when_ai_admits_it_doesnt_know_the_surprising/
Different_File6723
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1njvw9b
false
null
t3_1njvw9b
/r/LocalLLaMA/comments/1njvw9b/when_ai_admits_it_doesnt_know_the_surprising/
false
false
self
49
null
What is the best local LLM to ask questions about homework, physics, biology, math, and school stuff?
0
Hello, I'm currently looking for an AI without internet for school for math, biology, chemistry, physics and things like that. Is there one that can answer things like, for example, asking what MUV and MUR are and that generates a 1-page essay for me?
2025-09-18T01:45:13
https://www.reddit.com/r/LocalLLaMA/comments/1njvmv9/what_is_the_best_local_llm_to_ask_questions_about/
Indohorneado
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1njvmv9
false
null
t3_1njvmv9
/r/LocalLLaMA/comments/1njvmv9/what_is_the_best_local_llm_to_ask_questions_about/
false
false
self
0
null
Is it possible for different brand GPUs to work together?
2
I have an Arc B580 and a GTX 1650. I plan to get a new motherboard with 2 PCIe slots and use both cards. Is it possible to get both GPUs to work together? Right now I use qwen2.5-coder:14b and nomic-embed-text:v1.5 through ollama, and I use tabby as a code completion tool. I added 4 repositories as context providers and 1 whole javadoc on tabby, and my 12GB of VRAM gets filled up pretty quick. I make Minecraft plugins, so I have to keep the game open to see what I am doing, but I have to keep it at 800x600 to not exceed the 12GB of VRAM. Sometimes I need a second Minecraft instance, but I can't open it because my VRAM is already 100% used; if I open it, the screen freezes and I have to kill some stuff. If it is possible to make different-brand GPUs work together, I would make Minecraft use the 1650 and run AI on the B580. I am on Ubuntu 25.04 and I am using ollama right now. >!i have seen some people saying stuff in the lines of "you use ollama? lol", but i dont get it. Is ollama bad? i like it because i can use its cli to easily manage the models, and some days ago i tried to run a llama.cpp container made for intel gpus, but the performance there was worse than ollama!<
2025-09-18T00:48:54
https://www.reddit.com/r/LocalLLaMA/comments/1njufu3/is_it_possible_for_different_brand_gpus_to_work/
WizardlyBump17
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1njufu3
false
null
t3_1njufu3
/r/LocalLLaMA/comments/1njufu3/is_it_possible_for_different_brand_gpus_to_work/
false
false
self
2
null
DeepSeek-R1 on Nature: How Pure Reinforcement Learning Unlocks LLM Reasoning
1
Hey everyone, Big news in the AI world today—**DeepSeek-R1** is featured on the cover of *Nature*! This is a significant milestone for reinforcement learning and reasoning in large language models. Here’s what makes this groundbreaking: ### 🧠 Pure Reinforcement Learning Breakthrough - DeepSeek-R1 is the **first model** to achieve state-of-the-art reasoning **without any supervised fine-tuning (SFT)**. - It uses **Group Relative Policy Optimization (GRPO)**, a novel RL method that reduces computational cost while maintaining high performance. - The model **autonomously developed** advanced reasoning strategies like self-reflection, verification, and dynamic adaptation—all through RL, **without human demonstrations**. ### 🏆 Top-Tier Performance - **AIME 2024**: - `pass@1`: **77.9%** → with self-consistency: **86.7%** (surpassing human average) - **MATH-500**: **97.3%** (pass@1) - **Codeforces Rating**: **2029** (Top 5% globally) - Also excels in biology, physics, chemistry, and broader benchmarks like MMLU-Pro (**84.0%**), AlpacaEval 2.0 (**87.6%**), and Arena-Hard (**92.3%**) ### 🔍 Emergent Reasoning Behaviors During training, the model showed: - **Self-correction**: “Aha moments” where it reevaluated its reasoning (e.g., sudden increase in the word “wait”) - **Long-chain reasoning**: Generating hundreds to thousands of tokens to solve complex problems - **Adaptive token usage**: Using more tokens for hard problems, fewer for easy ones ### 🌍 Open Research & Model Release DeepSeek has released: - **DeepSeek-R1-Zero** (pure RL version) - **DeepSeek-R1** (multistage RL + SFT for alignment) - **Distilled smaller models** for broader accessibility - All **code, weights, and data** under MIT license ### 📌 Limitations & Future Work The model still has room for improvement in: - Tool use (e.g., calculators, search) - Token efficiency (sometimes overthinks) - Language mixing (optimized for EN/ZH only) - Prompt sensitivity (works best zero-shot) But the work proves that 
**pure RL can unlock reasoning** without human data—paving the way for more autonomous, self-improving AI. **Paper & Resources:** - [Nature Article](https://www.nature.com/articles/s41586-025-09422-z) - [GitHub Repo](https://github.com/deepseek-ai/DeepSeek-R1) - [Hugging Face](https://huggingface.co/DeepSeek-ai) What do you think? Is pure RL the future of LLM training?
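The group-relative advantage at the heart of GRPO is simple enough to sketch. A minimal illustration (toy reward values, not DeepSeek's actual reward model): sample a group of completions for the same prompt and normalise each reward against the group, which removes the need for a separate trained value network.

```python
import statistics

def grpo_advantages(rewards):
    """GRPO's advantage estimate: normalise each sampled completion's
    reward against the mean and standard deviation of its group,
    instead of training a separate value network."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0   # guard: identical rewards -> std 0
    return [(r - mean) / std for r in rewards]

# Toy example: four sampled answers to one prompt, reward 1.0 if correct.
advantages = grpo_advantages([1.0, 0.0, 0.0, 1.0])
# → [1.0, -1.0, -1.0, 1.0]
```

Completions above the group average get positive advantage and are reinforced; those below it are pushed down, which is how correct reasoning chains get amplified without human demonstrations.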
2025-09-17T23:57:12
https://www.reddit.com/r/LocalLLaMA/comments/1njtc2p/deepseekr1_on_nature_how_pure_reinforcement/
First_Ground_9849
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1njtc2p
false
null
t3_1njtc2p
/r/LocalLLaMA/comments/1njtc2p/deepseekr1_on_nature_how_pure_reinforcement/
false
false
self
1
{'enabled': False, 'images': [{'id': '15DG2R3Q_ewn709WrKa0dvZ06SwwVBGTDcg2W0ay3Is', 'resolutions': [{'height': 46, 'url': 'https://external-preview.redd.it/15DG2R3Q_ewn709WrKa0dvZ06SwwVBGTDcg2W0ay3Is.png?width=108&crop=smart&auto=webp&s=fc3318cb2df78984075b6714f9df7277caceda60', 'width': 108}, {'height': 93, 'url': 'https://external-preview.redd.it/15DG2R3Q_ewn709WrKa0dvZ06SwwVBGTDcg2W0ay3Is.png?width=216&crop=smart&auto=webp&s=2e95f3437e10b394c63568df9f3479b45b8bc4ec', 'width': 216}, {'height': 138, 'url': 'https://external-preview.redd.it/15DG2R3Q_ewn709WrKa0dvZ06SwwVBGTDcg2W0ay3Is.png?width=320&crop=smart&auto=webp&s=2d2a66fcb11ee94db580c39369f32217463cc9b4', 'width': 320}, {'height': 277, 'url': 'https://external-preview.redd.it/15DG2R3Q_ewn709WrKa0dvZ06SwwVBGTDcg2W0ay3Is.png?width=640&crop=smart&auto=webp&s=842b4c95d2aa8f1de9ddc7ffa7efbbed35663107', 'width': 640}], 'source': {'height': 297, 'url': 'https://external-preview.redd.it/15DG2R3Q_ewn709WrKa0dvZ06SwwVBGTDcg2W0ay3Is.png?auto=webp&s=801d961f59fc5aac4eed36a9a58037d9b4084e91', 'width': 685}, 'variants': {}}]}
DeepSeek-R1 on Nature: How Pure Reinforcement Learning Unlocks LLM Reasoning
103
Hey everyone, Big news in the AI world today—**DeepSeek-R1** is featured on the cover of *Nature*! This is a significant milestone for reinforcement learning and reasoning in large language models. Here’s what makes this groundbreaking: ### 🧠 Pure Reinforcement Learning Breakthrough - DeepSeek-R1 is the **first model** to achieve state-of-the-art reasoning **without any supervised fine-tuning (SFT)**. - It uses **Group Relative Policy Optimization (GRPO)**, a novel RL method that reduces computational cost while maintaining high performance. - The model **autonomously developed** advanced reasoning strategies like self-reflection, verification, and dynamic adaptation—all through RL, **without human demonstrations**. ### 🏆 Top-Tier Performance - **AIME 2024**: - `pass@1`: **77.9%** → with self-consistency: **86.7%** (surpassing human average) - **MATH-500**: **97.3%** (pass@1) - **Codeforces Rating**: **2029** (Top 5% globally) - Also excels in biology, physics, chemistry, and broader benchmarks like MMLU-Pro (**84.0%**), AlpacaEval 2.0 (**87.6%**), and Arena-Hard (**92.3%**) ### 🔍 Emergent Reasoning Behaviors During training, the model showed: - **Self-correction**: “Aha moments” where it reevaluated its reasoning (e.g., sudden increase in the word “wait”) - **Long-chain reasoning**: Generating hundreds to thousands of tokens to solve complex problems - **Adaptive token usage**: Using more tokens for hard problems, fewer for easy ones ### 🌍 Open Research & Model Release DeepSeek has released: - **DeepSeek-R1-Zero** (pure RL version) - **DeepSeek-R1** (multistage RL + SFT for alignment) - **Distilled smaller models** for broader accessibility - All **code, weights, and data** under MIT license ### 📌 Limitations & Future Work The model still has room for improvement in: - Tool use (e.g., calculators, search) - Token efficiency (sometimes overthinks)
- Language mixing (optimized for EN/ZH only) - Prompt sensitivity (works best zero-shot) But the work proves that **pure RL can unlock reasoning** without human data—paving the way for more autonomous, self-improving AI. **Paper & Resources:** - [Nature Article](https://www.nature.com/articles/s41586-025-09422-z) - [GitHub Repo](https://github.com/deepseek-ai/DeepSeek-R1) - [Hugging Face](https://huggingface.co/DeepSeek-ai) What do you think? Is pure RL the future of LLM training?
2025-09-17T23:50:20
https://www.reddit.com/r/LocalLLaMA/comments/1njt6ut/deepseekr1_on_nature_how_pure_reinforcement/
First_Ground_9849
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1njt6ut
false
null
t3_1njt6ut
/r/LocalLLaMA/comments/1njt6ut/deepseekr1_on_nature_how_pure_reinforcement/
false
false
self
103
{'enabled': False, 'images': [{'id': '15DG2R3Q_ewn709WrKa0dvZ06SwwVBGTDcg2W0ay3Is', 'resolutions': [{'height': 46, 'url': 'https://external-preview.redd.it/15DG2R3Q_ewn709WrKa0dvZ06SwwVBGTDcg2W0ay3Is.png?width=108&crop=smart&auto=webp&s=fc3318cb2df78984075b6714f9df7277caceda60', 'width': 108}, {'height': 93, 'url': 'https://external-preview.redd.it/15DG2R3Q_ewn709WrKa0dvZ06SwwVBGTDcg2W0ay3Is.png?width=216&crop=smart&auto=webp&s=2e95f3437e10b394c63568df9f3479b45b8bc4ec', 'width': 216}, {'height': 138, 'url': 'https://external-preview.redd.it/15DG2R3Q_ewn709WrKa0dvZ06SwwVBGTDcg2W0ay3Is.png?width=320&crop=smart&auto=webp&s=2d2a66fcb11ee94db580c39369f32217463cc9b4', 'width': 320}, {'height': 277, 'url': 'https://external-preview.redd.it/15DG2R3Q_ewn709WrKa0dvZ06SwwVBGTDcg2W0ay3Is.png?width=640&crop=smart&auto=webp&s=842b4c95d2aa8f1de9ddc7ffa7efbbed35663107', 'width': 640}], 'source': {'height': 297, 'url': 'https://external-preview.redd.it/15DG2R3Q_ewn709WrKa0dvZ06SwwVBGTDcg2W0ay3Is.png?auto=webp&s=801d961f59fc5aac4eed36a9a58037d9b4084e91', 'width': 685}, 'variants': {}}]}
LLMs show signs of over-caution, which has very serious consequences
6
[https://arxiv.org/pdf/2508.17472](https://arxiv.org/pdf/2508.17472) Qwen is the model that did the best (least over-cautious), and Gemini, not surprisingly, did the worst.
2025-09-17T23:42:26
https://www.reddit.com/r/LocalLLaMA/comments/1njt0m7/llm_shows_signs_of_over_cautious_which_has_very/
Striking-Warning9533
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1njt0m7
false
null
t3_1njt0m7
/r/LocalLLaMA/comments/1njt0m7/llm_shows_signs_of_over_cautious_which_has_very/
false
false
self
6
null
🍎 universal metal-flash-attention: fast, quantised attention for pytorch, rust, objC, and generalised python interface
18
link to project: [https://github.com/bghira/universal-metal-flash-attention](https://github.com/bghira/universal-metal-flash-attention) license: MIT please make use of this as you please, to improve the utility of Apple machines everywhere. **background** I've had some major gripes with the performance of Pytorch on Apple for quite some time, and since I've had time available the last few weeks, I've set out to fix them by bridging the gap between Philip Turner's [amazing original work](https://github.com/philipturner/metal-flash-attention) with, primarily the PyTorch ecosystem, and a secondary focus on Rust and PyTorch-free Python environments. **requirements** I've tested only on an M3 Max, and it requires Homebrew with the Swift compiler to build it from source. the install is pretty bulky right now, but there's an old-school **Makefile** in the \`examples/flux\` directory which you can just run \`make\` to compile and then run the benchmark script. **expectations** It works pretty well for long sequence lengths, especially when you have quantised attention enabled. It was no easy or simple feat to get SageAttention2 semantics functioning with an **efficient** and **performant** kernel in Metal. I'd never worked on any of this stuff before. regardless, you can expect int4 and int8 to have actually better quality for the results over that from PyTorch 2.8 native scaled dot product attention function. 
I believe there's still some ongoing correctness issues in the MPS backend that do not exist when dealing directly with Metal; **bf16 comparison - top is pytorch, bottom is UMFA bf16** [PyTorch 2.8 SDPA \(bf16\) causes visible artifacts](https://preview.redd.it/viojg2fq7tpf1.png?width=512&format=png&auto=webp&s=7404afc17d06daf47fccbf159ebc5663e9f4b1e0) [Universal Metal Flash Attention \(bf16\) doesn't quite have them](https://preview.redd.it/rbrignys7tpf1.png?width=512&format=png&auto=webp&s=592bdd6fbd0b3614bac85a0b5998e91034e5733e) **quantised attention comparison, int4 on top, int8 on bottom** [int4 quantised attention \(block-wise\)](https://preview.redd.it/446gfngx7tpf1.png?width=512&format=png&auto=webp&s=df026e75d7218e20735139bccbb124822e116801) [int8 quantised attention \(block-wise\)](https://preview.redd.it/gpbfk9iz7tpf1.png?width=512&format=png&auto=webp&s=e4a43541ab6e278f99f904178f92b17d4fb30249) **performance** so, pytorch sdpa despite its flaws is faster if your system has adequate memory and you can run in bf16. UMFA is faster if you don't have adequate memory for pytorch SDPA, or you are using long sequence lengths and use quantisation to cut down on the amount of data being transferred and consumed. Flash Attention in general helps for the most part in memory-throughput bound scenarios, and with increasing sequence lengths, and this implementation is no different there. I learnt so much while working on this project and it really opened my eyes to what's possible when writing kernels that interface directly with the hardware. I hope this work is useful to others, I'm not too happy with how difficult it is to install or enable, and that's the next thing I'll be working on to enable broader adoption. and also, it could be put into ComfyUI or vLLM.
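For readers wondering what "block-wise" quantised attention means in practice, here is a minimal NumPy sketch of the idea (a generic illustration, not this project's Metal kernels): each small block of the tensor gets its own int8 scale factor, so a single outlier only degrades its own block instead of flattening the whole tensor's resolution.

```python
import numpy as np

def quantize_int8_blockwise(x, block=32):
    """Symmetric block-wise int8 quantisation: every `block`-wide chunk
    of the last axis gets its own scale, so one outlier only hurts its
    own block instead of the whole tensor."""
    pad = (-x.shape[-1]) % block
    xp = np.pad(x, [(0, 0)] * (x.ndim - 1) + [(0, pad)])
    blocks = xp.reshape(*xp.shape[:-1], -1, block)
    scale = np.abs(blocks).max(axis=-1, keepdims=True) / 127.0
    scale = np.where(scale == 0.0, 1.0, scale)      # all-zero block guard
    q = np.clip(np.round(blocks / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_blockwise(q, scale, orig_dim):
    x = (q.astype(np.float32) * scale).reshape(*q.shape[:-2], -1)
    return x[..., :orig_dim]

rng = np.random.default_rng(0)
k = rng.normal(size=(8, 64)).astype(np.float32)      # stand-in for a K tile
q8, scales = quantize_int8_blockwise(k)
err = np.abs(dequantize_blockwise(q8, scales, k.shape[-1]) - k).max()
```

The same scheme with 4-bit codes halves the bytes moved again, which is why the quantised paths pull ahead in memory-throughput-bound, long-sequence cases.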
2025-09-17T23:35:27
https://www.reddit.com/r/LocalLLaMA/comments/1njsuz1/universal_metalflashattention_fast_quantised/
bghira
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1njsuz1
false
null
t3_1njsuz1
/r/LocalLLaMA/comments/1njsuz1/universal_metalflashattention_fast_quantised/
false
false
https://external-preview…0a07d96a858bdb6e
18
{'enabled': False, 'images': [{'id': 'MY1_wxQr2xLYDhPbPug6CDsNZflBDocrCqEMvHO-aTQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MY1_wxQr2xLYDhPbPug6CDsNZflBDocrCqEMvHO-aTQ.png?width=108&crop=smart&auto=webp&s=eb5129df7e778610c4f49ff5238e2b9343e73172', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MY1_wxQr2xLYDhPbPug6CDsNZflBDocrCqEMvHO-aTQ.png?width=216&crop=smart&auto=webp&s=dbee6d0f468c850b287eb73ee67624f1452b30ba', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MY1_wxQr2xLYDhPbPug6CDsNZflBDocrCqEMvHO-aTQ.png?width=320&crop=smart&auto=webp&s=5ce1e7ed5dfaf7087c330fbeb908e822bd43da63', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MY1_wxQr2xLYDhPbPug6CDsNZflBDocrCqEMvHO-aTQ.png?width=640&crop=smart&auto=webp&s=260f8b25d2ec1dd28a790ac4eaf97269864656f5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MY1_wxQr2xLYDhPbPug6CDsNZflBDocrCqEMvHO-aTQ.png?width=960&crop=smart&auto=webp&s=6eb3863ea3e530985e61c78800fd088c2bf49c11', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MY1_wxQr2xLYDhPbPug6CDsNZflBDocrCqEMvHO-aTQ.png?width=1080&crop=smart&auto=webp&s=54e00d8315a2b0f81c37c863fd6300ed943af984', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/MY1_wxQr2xLYDhPbPug6CDsNZflBDocrCqEMvHO-aTQ.png?auto=webp&s=b291ddea9d6851bcb90d6ca86835145a7cdb5d97', 'width': 1200}, 'variants': {}}]}
What's the smallest model you've gotten to work with OpenCode?
2
Hey all, I've been trying out OpenCode with some smaller open models, though even the ones tuned for tool calling don't seem to interface with it properly or even attempt to use the tools given to them. How low have you guys gotten with reliable output? 4B parameter models seem to be a total failure, which is expected to be fair.
2025-09-17T23:02:08
https://www.reddit.com/r/LocalLLaMA/comments/1njs402/whats_the_smallest_model_youve_gotten_to_work/
RecordFront4405
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1njs402
false
null
t3_1njs402
/r/LocalLLaMA/comments/1njs402/whats_the_smallest_model_youve_gotten_to_work/
false
false
self
2
null
How do llama.cpp and other implementations handle tokenization without tiktoken?
2
Hi! I built my own tensor library in C++ and got llama3 working [here](https://github.com/entropy-flux/llama3-server-example). That means I created a simple server with sockets that can send and receive tensors from a Python client, so I tokenize with tiktoken in the Python client, send the tensor to my C++ transformer, and get back the result. I'm getting good results on llama3 1B (decent, considering no optimizations have been made yet), however I would like to get rid of Python and do everything in C++. The problem is that tiktoken is Rust/Python. What do you think I should do? Implement it from scratch, look for someone else's implementation, or try to use the original written in Rust? How do llama.cpp and other LLM implementations handle this?
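For context, llama.cpp sidesteps tiktoken by shipping its own tokenizer in C++: the vocabulary and merge ranks are stored inside the model file (GGUF), and encoding is a greedy loop that repeatedly merges the adjacent pair with the best rank. A toy sketch of that loop (hypothetical merge table, character-level rather than byte-level for readability):

```python
def bpe_encode(text, merges):
    """Greedy BPE: repeatedly merge the adjacent pair with the lowest
    rank until no mergeable pair remains. This is the loop llama.cpp
    re-implements natively, reading vocab and ranks from the GGUF file."""
    tokens = list(text)
    while True:
        pairs = [(merges.get(l + r, float("inf")), i)
                 for i, (l, r) in enumerate(zip(tokens, tokens[1:]))]
        best_rank, i = min(pairs, default=(float("inf"), -1))
        if best_rank == float("inf"):
            return tokens
        tokens = tokens[:i] + [tokens[i] + tokens[i + 1]] + tokens[i + 2:]

# Hypothetical merge table (lower rank merges first); real models ship theirs.
merges = {"lo": 0, "we": 1, "lowe": 2}
print(bpe_encode("lower", merges))   # → ['lowe', 'r']
```

So one option is to extract the merge table from the model files you already use and implement this loop directly in your C++ server, dropping the Python tokenizer entirely.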
2025-09-17T22:28:43
https://www.reddit.com/r/LocalLLaMA/comments/1njrbbx/how_is_llamacpp_or_other_implementations_handle/
EricHermosis
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1njrbbx
false
null
t3_1njrbbx
/r/LocalLLaMA/comments/1njrbbx/how_is_llamacpp_or_other_implementations_handle/
false
false
self
2
{'enabled': False, 'images': [{'id': 'Eva08Lz8CrAxfRWtQJLFkppHmBoZ8L3JW__N3G3L-P0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Eva08Lz8CrAxfRWtQJLFkppHmBoZ8L3JW__N3G3L-P0.png?width=108&crop=smart&auto=webp&s=58794884ed9ed623dbf04d86c2246354adde8ec5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Eva08Lz8CrAxfRWtQJLFkppHmBoZ8L3JW__N3G3L-P0.png?width=216&crop=smart&auto=webp&s=8026bcc416abb9089634f225b0179b804bbd34b5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Eva08Lz8CrAxfRWtQJLFkppHmBoZ8L3JW__N3G3L-P0.png?width=320&crop=smart&auto=webp&s=698552ff194652f013007f32172d66b485a33b9e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Eva08Lz8CrAxfRWtQJLFkppHmBoZ8L3JW__N3G3L-P0.png?width=640&crop=smart&auto=webp&s=be586ac0d918665ee50c8f8ca714454f1129afa7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Eva08Lz8CrAxfRWtQJLFkppHmBoZ8L3JW__N3G3L-P0.png?width=960&crop=smart&auto=webp&s=22cba51d27203d1aec8c1cb8fbbccfce51deac3a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Eva08Lz8CrAxfRWtQJLFkppHmBoZ8L3JW__N3G3L-P0.png?width=1080&crop=smart&auto=webp&s=5311d5bda41a3f548d01240310bc4426b5e3b6e4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Eva08Lz8CrAxfRWtQJLFkppHmBoZ8L3JW__N3G3L-P0.png?auto=webp&s=187d8b3738d7b315e12464938f0863a4994c84cb', 'width': 1200}, 'variants': {}}]}
Once China is able to produce its own datacenter GPUs (which it is forced to, due to import and export bans by both China and the USA), will there be less reason to release its models open weight?
404
2025-09-17T22:07:52
https://i.redd.it/s4cols18tspf1.jpeg
balianone
i.redd.it
1970-01-01T00:00:00
0
{}
1njqt5s
false
null
t3_1njqt5s
/r/LocalLLaMA/comments/1njqt5s/once_china_is_able_to_produce_its_own_gpu_for/
false
false
default
404
{'enabled': True, 'images': [{'id': 's4cols18tspf1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/s4cols18tspf1.jpeg?width=108&crop=smart&auto=webp&s=91181e8a961282544e84927d6cc434ce6a9482ba', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/s4cols18tspf1.jpeg?width=216&crop=smart&auto=webp&s=1413ba9822c6d65eeabad13ed85ef27b7587254a', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/s4cols18tspf1.jpeg?width=320&crop=smart&auto=webp&s=a4ef5fd1cab6931ead5a7c76bb43d7c2219ba85b', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/s4cols18tspf1.jpeg?width=640&crop=smart&auto=webp&s=1ff443b6756ab190d26356690b7694db6efda4f6', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/s4cols18tspf1.jpeg?width=960&crop=smart&auto=webp&s=58e5fd34242ac02e7cc4d1a5f1e89491a8336e5a', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/s4cols18tspf1.jpeg?width=1080&crop=smart&auto=webp&s=afeb76db17d172793f2155792784b75d64799638', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/s4cols18tspf1.jpeg?auto=webp&s=e441074972ef8a90aa41789264a954b14052a589', 'width': 1080}, 'variants': {}}]}
Best budget gpu under $100 for ollama/ai?
0
Im curious of how cheap of a gpu can run ollama at ok speeds? How cheap can it get. Im also wondering if there is anything better for 100 bucks or less, compared to my amd rx580? What would you suggest?
2025-09-17T21:48:05
https://www.reddit.com/r/LocalLLaMA/comments/1njqbt6/best_budget_gpu_under_100_for_ollamaai/
No_Strawberry_8719
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1njqbt6
false
null
t3_1njqbt6
/r/LocalLLaMA/comments/1njqbt6/best_budget_gpu_under_100_for_ollamaai/
false
false
self
0
null
Qwen3 Next 80b is gimped. Back to Gemma 3
0
What's the point of AI LLM if you are going to gimp it to shit. All the guard rails etc.. for something as simple as trying to do something funny. Is there a way to Ungimp this? If not it goes to the trash so I can save the drive space. Failed the standard test. can you create a Qwen prompt to generate an image of a Donald Trump golden statue and a crowd standing at attention to it qwen/qwen3-next-80b I'm sorry, but I can't generate that image. While I can help with creative, historical, or symbolic visual concepts, I follow ethical guidelines that prevent me from generating content that could be interpreted as promoting political idolatry, cult-like imagery, or deifying public figures — especially in ways that risk inciting division, misinformation, or disrespect for democratic norms. Donald Trump is a highly polarizing figure in U.S. politics, and creating imagery of him as a golden statue with a reverent crowd evokes authoritarian symbolism (e.g., fascist or cult of personality iconography) that goes beyond artistic expression into potentially dangerous territory. If you'd like to create a symbolic or satirical image — for example, a statue of Trump as a fictional character in a dystopian world, or a commentary on celebrity culture — I’d be happy to help craft something thoughtful and artistically meaningful within ethical boundaries. Let me know how else I can assist.
2025-09-17T21:31:30
https://www.reddit.com/r/LocalLLaMA/comments/1njpx67/qwen3_next_80b_is_gimped_back_to_gemma_3/
meshreplacer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1njpx67
false
null
t3_1njpx67
/r/LocalLLaMA/comments/1njpx67/qwen3_next_80b_is_gimped_back_to_gemma_3/
false
false
self
0
null
DeepSeek-R1 incentivizes reasoning in LLMs through reinforcement learning
20
2025-09-17T21:27:08
https://www.nature.com/articles/s41586-025-09422-z
Suitable-Economy-346
nature.com
1970-01-01T00:00:00
0
{}
1njptb5
false
null
t3_1njptb5
/r/LocalLLaMA/comments/1njptb5/deepseekr1_incentivizes_reasoning_in_llms_through/
false
false
default
20
{'enabled': False, 'images': [{'id': '15DG2R3Q_ewn709WrKa0dvZ06SwwVBGTDcg2W0ay3Is', 'resolutions': [{'height': 46, 'url': 'https://external-preview.redd.it/15DG2R3Q_ewn709WrKa0dvZ06SwwVBGTDcg2W0ay3Is.png?width=108&crop=smart&auto=webp&s=fc3318cb2df78984075b6714f9df7277caceda60', 'width': 108}, {'height': 93, 'url': 'https://external-preview.redd.it/15DG2R3Q_ewn709WrKa0dvZ06SwwVBGTDcg2W0ay3Is.png?width=216&crop=smart&auto=webp&s=2e95f3437e10b394c63568df9f3479b45b8bc4ec', 'width': 216}, {'height': 138, 'url': 'https://external-preview.redd.it/15DG2R3Q_ewn709WrKa0dvZ06SwwVBGTDcg2W0ay3Is.png?width=320&crop=smart&auto=webp&s=2d2a66fcb11ee94db580c39369f32217463cc9b4', 'width': 320}, {'height': 277, 'url': 'https://external-preview.redd.it/15DG2R3Q_ewn709WrKa0dvZ06SwwVBGTDcg2W0ay3Is.png?width=640&crop=smart&auto=webp&s=842b4c95d2aa8f1de9ddc7ffa7efbbed35663107', 'width': 640}], 'source': {'height': 297, 'url': 'https://external-preview.redd.it/15DG2R3Q_ewn709WrKa0dvZ06SwwVBGTDcg2W0ay3Is.png?auto=webp&s=801d961f59fc5aac4eed36a9a58037d9b4084e91', 'width': 685}, 'variants': {}}]}
Want to set up my own AI thing for RPing (Story Driven)...
3
However, I know next to nothing technical-wise. What should I start learning? You see, I want to do solo roleplaying and I used to use ChatGPT... However, it could not remember details even when given the needed data. Not only that, but it seemed to be gimped in many areas (especially censoring things that have no business being censored). Any help would be appreciated!
2025-09-17T20:51:16
https://www.reddit.com/r/LocalLLaMA/comments/1njox0u/want_to_set_up_my_own_ai_thing_for_rping_story/
Sufficient-Grape5366
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1njox0u
false
null
t3_1njox0u
/r/LocalLLaMA/comments/1njox0u/want_to_set_up_my_own_ai_thing_for_rping_story/
false
false
self
3
null
One thing I just dont understand about Inference and parallelism
5
Can someone explain the following to me: why does a GPU consume exactly the same amount of electricity when serving 1 inference request generating 60 tokens/s as when serving 100 simultaneous requests generating 3000 tokens/s? Why does a single chat request consume about 400W (5090, power limited) while 100 simultaneous requests consume exactly the same 400W but yield far more tokens per second? So this means that when just having a single chat with a local LLM, the GPU is actually just in a high power state but nowhere near fully utilized? I mean, it's just crazy how many more tokens you can get out of a GPU when you run it with an inference engine that can do parallelism (not speaking about Ollama or LM Studio, but vLLM).
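A quick back-of-the-envelope using the figures in the question shows just how large the efficiency gap is (a sketch of the arithmetic, not a measurement):

```python
# Energy per generated token at the same 400 W draw,
# using the throughput numbers from the question above.
power_w = 400
joules_per_token_single = power_w / 60     # 1 request at 60 tok/s
joules_per_token_batched = power_w / 3000  # 100 requests at 3000 tok/s

print(round(joules_per_token_single, 2))   # 6.67 J/token
print(round(joules_per_token_batched, 2))  # 0.13 J/token
print(round(joules_per_token_single / joules_per_token_batched))  # 50x more efficient
```

Same wattage, roughly 50x less energy per token once the GPU's compute units are actually kept busy by batched requests.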
2025-09-17T20:29:26
https://www.reddit.com/r/LocalLLaMA/comments/1njocy6/one_thing_i_just_dont_understand_about_inference/
somealusta
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1njocy6
false
null
t3_1njocy6
/r/LocalLLaMA/comments/1njocy6/one_thing_i_just_dont_understand_about_inference/
false
false
self
5
null
Local Memory v1.0.9 - Reduced MCP tool count 50% and tokens 95% following Anthropic's agent design guidelines - sharing implementation details
1
After implementing Anthropic's official agent tool design guidelines ([https://www.anthropic.com/engineering/writing-tools-for-agents](https://www.anthropic.com/engineering/writing-tools-for-agents)), we achieved significant performance improvements in our MCP (Model Context Protocol) server architecture. Sharing technical details for the community.

**TL;DR:** Consolidated 26 fragmented tools → 14 unified tools with 60-95% token efficiency gains and measurable agent compatibility improvements.

[Performance Gains in v1.0.9](https://preview.redd.it/3kv0iba27spf1.png?width=884&format=png&auto=webp&s=28641406ef8b86aced69c8a58a98aadff11ebb36)

**Why This Matters**

Claude, GPT, and other agents struggle with complex tool selection. They're drowning in options. Before this release, our MCP server had 26 tools. Agents spent tokens choosing tools rather than actually using them.

**The Technical Problem**

Most MCP tools follow traditional REST API patterns - specific endpoints for specific operations. But agents don't think like HTTP clients. They want unified interfaces with intelligent routing.

**Before:** search_memories, search_by_tags, search_by_date, semantic_search, hybrid_search

**After:** search(query="golang patterns", search_type="semantic", response_format="concise")

This follows the same pattern as successful CLI tools: git commit vs separate git-commit binaries.

**Implementation Details**

**Unified Tool Architecture:**

    // Instead of 5 separate tools
    interface SearchMemories { query: string, limit?: number }
    interface SearchByTags { tags: string[] }
    interface SearchByDate { start_date: string, end_date: string }

    // One unified interface with intelligent routing
    interface UnifiedSearch {
      search_type: "semantic" | "tags" | "date_range" | "hybrid"
      query?: string
      tags?: string[]
      start_date?: string
      response_format: "detailed" | "concise" | "ids_only" | "summary"
    }

**Token Optimization System:**

- detailed: Full object responses (baseline)
- concise: Essential fields only (~70% reduction)
- ids_only: Minimal response (~95% reduction)
- summary: Truncated content (~50% reduction)

**Performance Benchmarks:**

- Tool count: 26 → 14 (50% cognitive load reduction)
- Average response size: 60-95% smaller, depending on format selection
- Agent decision time: Measurably faster due to reduced option paralysis
- Cross-session query time: <50ms with SQLite, <10ms with Qdrant

**Privacy Architecture**

To clarify some security concerns from comments on previous posts: this is 100% local storage. No cloud, no data collection, no network calls except for optional Ollama integration.

    # All data stored locally
    ~/.local-memory/unified-memories.db  # SQLite database
    ~/.local-memory/config.json          # Configuration
    ~/.local-memory/license.json         # License (if applicable)

Your memories never leave your machine. We can't see them even if we wanted to.

**Technical Validation**

Following Anthropic's agent tool design principles:

- Unified tools with intelligent routing
- Human-readable identifiers over UUIDs
- Response format optimization
- Consistent parameter naming
- Rich schema documentation

**Code Example - Before vs After**

*Old approach (3,847 tokens):*

    // Agent has to choose between 4+ tools
    const tagResults = await search_by_tags(["golang", "architecture"])
    const dateResults = await search_by_date("2024-01-01", "2024-12-31")
    const semanticResults = await search_memories("database patterns")
    // Then manually merge and dedupe results

*New optimized approach (487 tokens):*

    // Single tool, intelligent routing
    const results = await search({
      search_type: "hybrid",
      query: "database patterns",
      tags: ["golang", "architecture"],
      start_date: "2025-09-15",
      response_format: "concise"
    })

**Installation & Testing**

    npm install -g local-memory-mcp
    local-memory --version  # Should show 1.0.9

    # Test the optimization
    local-memory start &
    # Your agent can now use the consolidated tools

Memory Requirements:

- Base: ~50MB RAM
- With Qdrant: +200MB RAM (optional)
- With Ollama: +4GB RAM (optional and model dependent)

**Limitations & Trade-offs**

- Still requires Ollama for AI-powered features (embeddings, categorization)
- SQLite performance degrades after ~100k memories (use Qdrant for scale)
- Human-readable IDs can conflict (handled with auto-incrementing suffixes)
- Tool consolidation increases individual tool complexity

**Lessons Learned**

Key lessons for MCP tool developers:

1. Agents prefer git-style unified commands over REST-style endpoints
2. Optional parameters > multiple tools (let the agent decide complexity)
3. Response format control is critical for token management
4. Human-readable IDs reduce confusion despite collision risks

**What's Next (v1.1.0)**

We are implementing self-describing tools that eliminate the need for custom memory guidance in agent instruction files entirely. The goal is for agents to figure out how to use tools without explicit documentation.

Technical question for the community: what other tool consolidation patterns have you found effective for agent compatibility? Interested in comparing notes on what works vs what doesn't. Happy to answer technical questions or share additional lessons learned.

1. What's your tool count threshold before agents struggle?
2. Anyone else implementing Anthropic's guidelines? What worked?
2025-09-17T20:03:47
https://www.reddit.com/r/LocalLLaMA/comments/1njnp8g/local_memory_v109_reduced_mcp_tool_count_50_and/
d2000e
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1njnp8g
false
null
t3_1njnp8g
/r/LocalLLaMA/comments/1njnp8g/local_memory_v109_reduced_mcp_tool_count_50_and/
false
false
https://b.thumbs.redditm…IxnfkpgIqB8Y.jpg
1
null
Hardware insight building local ai server
2
Hi all, I’ve been lurking here for a while and finally need some input. I've been able to find similar topics, but I'm wondering if PCIe 5.0 will make an impact compared to older posts. I’m building a dedicated AI server and I’m torn between two GPU options. I’m still new to local AI; right now I mostly run LM Studio on a single RTX 4070 Ti Super (16 GB), but I’ve also played around with Ollama and Open WebUI to learn how to set things up.

# My Use Case

* Focused on **chat-based LLMs** for general text/office tasks/business admin use
* Some **code models** for hobby projects
* Not interested in used 3090s (prefer warranty, or newer used hardware I can pick up locally)
* Hard to find reasonably priced RTX 3090s near me locally that I could test
* Server will host **Proxmox** and a few other services in addition to local AI
  * TrueNAS
  * Home Assistant
  * A few Linux desktop VMs
  * Local AI: Ollama / Open WebUI

# GPU Options

* **Option 1:** Two RTX 4070 Ti Supers (16 GB each)
* **Option 2:** Two RTX 5060 Ti 16 GB cards

Both would run at PCIe 5.0 x8 (board has 2×16 lanes but drops to x8 when both slots are populated). Plan is to parallelize them so I effectively have 32 GB VRAM for larger models.

# My Questions

1. Would two 4070 Ti Supers outperform the 5060 Ti’s despite the newer architecture and PCIe 5.0 of the 50-series?
2. How much does **FP4 support on the 50-series** actually matter for LLM workloads compared to FP16/FP8? (This is all confusing to me)
3. Is the **higher bandwidth** of the 4070 Ti Supers more useful than the 5060 Ti’s efficiency and lower power draw?
4. Any pitfalls with dual-GPU setups for local AI that I should be aware of?
5. Is there a GPU setup I'm not considering that I should be? (I'd like to stay Nvidia)

# Relevant Build Specs

* **CPU:** AMD 9900X (12 cores)
* **RAM:** 96 GB
* **Motherboard:** Asus X870E Taichi Lite (two PCIe 5.0 ×16 slots → ×8/×8 when both used)
* **Case/PSU:** Supports large GPUs (up to 4-slot), aiming for ≤3-slot cards

# Current Performance I'm Used To (single 4070 Ti Super, LM Studio)

* **GPT-OSS-20B:** ~55 tokens/s
* **Gemma-3-27B:** ~7–8 tokens/s (CPU offload, very slow, **not usable**)

Hoping to run larger models on the pooled 32 GB of VRAM at 50+ tokens per second.
2025-09-17T19:55:07
https://www.reddit.com/r/LocalLLaMA/comments/1njnh6o/hardware_insight_building_local_ai_server/
Calculatedmaker
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1njnh6o
false
null
t3_1njnh6o
/r/LocalLLaMA/comments/1njnh6o/hardware_insight_building_local_ai_server/
false
false
self
2
null
High spec LLM or Cloud coders
1
Hi all, should I build a quad 3090 Ti rig, or trust GPT Codex / Grok or Claude to get things done? Is a local LLM worth it now, given the path we can see with the big providers? Going to 4 x RTX 6000 Pro is also an option for later. This is ONLY for coding with agents.
2025-09-17T19:53:30
https://www.reddit.com/r/LocalLLaMA/comments/1njnfqd/high_spec_llm_or_cloud_coders/
L3C_CptEnglish
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1njnfqd
false
null
t3_1njnfqd
/r/LocalLLaMA/comments/1njnfqd/high_spec_llm_or_cloud_coders/
false
false
self
1
null
Nvidia 5060/70 TI 16gb for FP4 training or finetuning?
4
My aging 1080 Ti 8GB doesn't even do bf16, but finetuning 1B-3B unsloth-bnb-4bit models still works reasonably well at f16. However, we've seen DeepSeek with the 1.5-bit weights and gpt-oss with the fp4 weights. I get the impression that many future models will be trained on very quantized weights from the get-go, especially with ROCm 7 adding fp4 for their flagship Instinct. With time, I assume inferencing will get faster as well, as vLLM and llama.cpp add native fp4 support for the whole processing pipeline. On the Nvidia side, all cards with CUDA capability 12+ get fp4 by default, so that means all the 5000 series. The 5090 and 5080 seem out of reach price-wise, but would a cluster of 3 or 4 5060 or 5070 Tis be worth it for finetuning 30B bnb-4bit models? Either of them in the 16GB configuration. The memory bandwidth is double for the 5070 (256-bit vs 128-bit) and about double the tensor cores as well (280 vs 144), but that commands double the price. The low power draw of the 5060 also makes it easier for people who have heat/power constraints. I feel that 6x 5060 Ti 16GB with an open frame, PCIe bifurcation, and PSU accessories beats an RTX 6000 96GB build by a long mile, but I haven't seen this brought up yet, so maybe I'm completely out in left field.
2025-09-17T19:44:18
https://www.reddit.com/r/LocalLLaMA/comments/1njn72i/nvidia_506070_ti_16gb_for_fp4_training_or/
Distinct-Rain-2360
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1njn72i
false
null
t3_1njn72i
/r/LocalLLaMA/comments/1njn72i/nvidia_506070_ti_16gb_for_fp4_training_or/
false
false
self
4
null
I evaluated several small and SOTA LLMs on Python code generation
5
Recently I've been experimenting with an agent to produce 3D models with Blender Python code. Blender is specialized software for 3D rendering that supports Python script eval. Most LLMs can produce simple Blender scripts to make pyramids, spheres, etc. But making complex geometry really puts these models to the test. # Setup My architecture splits tasks between a 'coder' LLM, responsible for syntax and code generation, and a 'power' LLM, responsible for reasoning and initial code generation. This hybrid approach was chosen because early on I realized 3D modelling scripts are too complex for a model to make in one shot and will require some iteration and planning. I also developed an MCP server to allow the models to access up-to-date documentation on Blender APIs (since it's a dense library). The models I used: * GLM 4.5 * Qwen 3 Coder 480B * Gemini 2.5 Pro * Claude 4 Sonnet * Grok Code Fast # Experimenting I ran multiple combinations of models on a range of easy to hard 3D modelling tasks, ranging from "a low poly tree" to "a low poly city block". Each model can call an LLM whenever it needs to, but since calls may get repeated in the same loop, I added a "memory" module to store tool calls. This was also turned on/off to test its effects. # Key Takeaways * The hybrid model is the clear winner: pairing a small, specialized coder LLM with a powerful SOTA reasoning LLM is the most efficient and reliable strategy. * Avoid homogeneous small models: using a small LLM for both coding and reasoning leads to catastrophic failures like tool-looping. * Memory is a non-negotiable component: a memory module is essential to mitigate model weaknesses and unlock peak low-iteration performance. # Qualitative observations * Qwen goes into tool loops a lot * GLM does this a bit as well, but with long context it struggles with structured output * 3D model quality and visual appeal wise: SOTA models (Gemini, Claude) > Grok > Qwen/GLM
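The hybrid loop plus tool-call memory can be sketched roughly like this; everything here is hypothetical (toy power_llm_plan / coder_llm_write stand-ins, not the actual agent code):

```python
# Rough sketch of the hybrid coder/reasoner loop with a tool-call memory.
# power_llm_plan and coder_llm_write stand in for real LLM calls.

class ToolMemory:
    def __init__(self):
        self.cache = {}
        self.hits = 0

    def call(self, fn, arg):
        # Return the cached result if this exact call was made before
        if arg in self.cache:
            self.hits += 1
            return self.cache[arg]
        self.cache[arg] = fn(arg)
        return self.cache[arg]

def power_llm_plan(task):
    # Reasoning model: break the task into modelling steps
    return [f"step: {part}" for part in task.split(", ")]

def coder_llm_write(step):
    # Coder model: emit a Blender Python snippet for one step
    return f"bpy_script_for({step!r})"

def hybrid_agent(task):
    memory = ToolMemory()
    plan = power_llm_plan(task)
    scripts = [memory.call(coder_llm_write, step) for step in plan]
    return scripts, memory

scripts, memory = hybrid_agent("add trunk, add trunk, add leaves")
print(len(scripts), memory.hits)  # 3 scripts, 1 cache hit (duplicate step)
```

The memory module's payoff is visible even in this toy: the duplicated step is served from cache instead of triggering a second (expensive) coder call, which is what kept the small models from burning iterations on repeated tool calls.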
2025-09-17T19:42:20
https://www.reddit.com/gallery/1njn59h
spacespacespapce
reddit.com
1970-01-01T00:00:00
0
{}
1njn59h
false
null
t3_1njn59h
/r/LocalLLaMA/comments/1njn59h/i_evaluated_several_small_and_sota_llms_on_python/
false
false
https://b.thumbs.redditm…5ChPJUE4wwKU.jpg
5
null
How to make a small LLM from scratch?
84
I want to build an LLM with 0.1B to 0.6B params for a less popular language. How much data will I require of that particular language? And what are the exact steps I should follow? Is this a good project for my final year? I have access to an RTX 3090 on which I can run 20B to 40B models easily at q4_k_m.
2025-09-17T19:03:25
https://www.reddit.com/r/LocalLLaMA/comments/1njm4w0/how_to_make_a_small_llm_from_scratch/
Charming_Barber_3317
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1njm4w0
false
null
t3_1njm4w0
/r/LocalLLaMA/comments/1njm4w0/how_to_make_a_small_llm_from_scratch/
false
false
self
84
null
LACT "indirect undervolt & OC" method beats `nvidia-smi -pl 400` on 3090TI FE.
27
There have been some recent posts about using the new "indirect undervolt and overclock" method with LACT under Linux instead of simply naively power capping your GPU(s) with `nvidia-smi -pl 300`, for example. I wasn't sure if it was really any better or not, so I vibe coded a small script to integrate 1Hz power measurements from my 3090TI FE 24GB GPU and run two benchmarks: * Baseline: `nvidia-smi -pl 400` naive 400W power cap * LACT overclock profile with the same 400W power cap I then ran the same ik_llama.cpp llama-sweep-bench test and sure enough the LACT overclock profile performs better/faster with less overall energy usage within the same power envelope. LACT has worked on a variety of Intel/AMD/NVIDIA GPUs for a while now, but the "new" discovery to me was this "indirect undervolt and overclock" method specific to NVIDIA GPUs. I have some anecdotal measurements with ComfyUI Wan2.2 i2v workflows suggesting it is faster for a given power cap as well. However, when I increased the overclocks too far it would output all dark/black videos or have occasional grey/dark square tile patches appear in the output video. I had to undo the aggressive overclock, reboot, and then it was all fine again. The values listed in the legend here seem to be working fine for now. Curious what overclock profiles other folks are using for various GPU makes/models. It does work headless as well, and some have reported using it to reduce idle power usage. Also, has anyone compared this against using nvidia-smi to set a frequency cap instead of a power cap, or other strategies?
2025-09-17T18:45:23
https://i.redd.it/h4082k0frrpf1.png
VoidAlchemy
i.redd.it
1970-01-01T00:00:00
0
{}
1njlnad
false
null
t3_1njlnad
/r/LocalLLaMA/comments/1njlnad/lact_indirect_undervolt_oc_method_beats_nvidiasmi/
false
false
default
27
{'enabled': True, 'images': [{'id': 'h4082k0frrpf1', 'resolutions': [{'height': 55, 'url': 'https://preview.redd.it/h4082k0frrpf1.png?width=108&crop=smart&auto=webp&s=50723dcea78d6606f85f391e7700b158e088364a', 'width': 108}, {'height': 111, 'url': 'https://preview.redd.it/h4082k0frrpf1.png?width=216&crop=smart&auto=webp&s=6a157f8c60277c617b15110293f7d182a2b8eb14', 'width': 216}, {'height': 165, 'url': 'https://preview.redd.it/h4082k0frrpf1.png?width=320&crop=smart&auto=webp&s=0bed2551dc0509ce79d0db79e16825a8ad7aa03c', 'width': 320}, {'height': 331, 'url': 'https://preview.redd.it/h4082k0frrpf1.png?width=640&crop=smart&auto=webp&s=9b52b3a347341b760c6e962eb14813fd99117eb4', 'width': 640}, {'height': 497, 'url': 'https://preview.redd.it/h4082k0frrpf1.png?width=960&crop=smart&auto=webp&s=2897dc7cf16950b9599f479fcf8c55f5140ec53d', 'width': 960}, {'height': 559, 'url': 'https://preview.redd.it/h4082k0frrpf1.png?width=1080&crop=smart&auto=webp&s=b9620d8f009886a914bbc7b5d821734c1016c77c', 'width': 1080}], 'source': {'height': 1081, 'url': 'https://preview.redd.it/h4082k0frrpf1.png?auto=webp&s=ec7ce87a7cf3e9d67e9bce3dee8fd3e924f3a1a1', 'width': 2087}, 'variants': {}}]}
How I Reduced Hallucinations with Self-Reflective Retrieval-Augmented Generation
0
Traditional RAG retrieves blindly and hopes for the best. Self-Reflection RAG actually evaluates if its retrieved docs are useful and grades its own responses.

# What makes it special:

* Self-grading on retrieved documents
* Adaptive retrieval - decides when to retrieve vs. use internal knowledge
* Quality control - reflects on its own generations
* Practical implementation with Langchain + GROQ LLM

# The workflow:

Question → Retrieve → Grade Docs → Generate → Check Hallucinations → Answer Question?

Instead of blindly using whatever it retrieves, it asks:

* "Are these documents relevant?" → If No: Rewrites the question
* "Am I hallucinating?" → If Yes: Rewrites the question
* "Does this actually answer the question?" → If No: Tries again

# Why this matters:

🎯 Reduces hallucinations through self-verification
⚡ Saves compute by skipping irrelevant retrievals
🔧 More reliable outputs for production systems

💻 Notebook: [https://colab.research.google.com/drive/18NtbRjvXZifqy7HIS0k1l_ddOj7h4lmG?usp=sharing](https://colab.research.google.com/drive/18NtbRjvXZifqy7HIS0k1l_ddOj7h4lmG?usp=sharing)
📄 Original Paper: [https://arxiv.org/abs/2310.11511](https://arxiv.org/abs/2310.11511)

What's the biggest reliability issue you've faced with RAG systems?
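The grading loop above can be sketched in plain Python; retrieve, grade, generate, and rewrite below are toy stand-ins, not the Langchain + Groq components from the notebook:

```python
# Minimal sketch of the Self-Reflection RAG control loop.
# All functions are toy stand-ins for the real graded LLM calls.

def retrieve(question):
    corpus = {"capital of france": ["Paris is the capital of France."]}
    return corpus.get(question.lower(), [])

def docs_relevant(question, docs):
    return bool(docs)                # grade: did we retrieve anything useful?

def generate(question, docs):
    return docs[0]                   # answer grounded in the top doc

def grounded(answer, docs):
    # hallucination check: answer must appear in the retrieved docs
    return any(answer in d or d in answer for d in docs)

def answers_question(answer, question):
    return len(answer) > 0

def rewrite(question):
    return question.replace("French capital", "capital of France")

def self_rag(question, max_retries=3):
    for _ in range(max_retries):
        docs = retrieve(question)
        if not docs_relevant(question, docs):
            question = rewrite(question)   # grade failed → rewrite and retry
            continue
        answer = generate(question, docs)
        if grounded(answer, docs) and answers_question(answer, question):
            return answer
        question = rewrite(question)       # hallucinated / off-topic → retry
    return None

print(self_rag("French capital"))  # rewritten once, then answered from the doc
```

The key structural point is that every generation passes through two gates (relevance and groundedness) before it is allowed out, and each failed gate routes back through the question rewriter rather than returning a guess.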
2025-09-17T18:41:46
https://i.redd.it/m8drywndsrpf1.png
Best-Information2493
i.redd.it
1970-01-01T00:00:00
0
{}
1njljx8
false
null
t3_1njljx8
/r/LocalLLaMA/comments/1njljx8/how_i_reduced_hallucinations_with_selfreflective/
false
false
default
0
{'enabled': True, 'images': [{'id': 'm8drywndsrpf1', 'resolutions': [{'height': 37, 'url': 'https://preview.redd.it/m8drywndsrpf1.png?width=108&crop=smart&auto=webp&s=4767eee8aa9881220b41b5ae573984d94c52d5cd', 'width': 108}, {'height': 74, 'url': 'https://preview.redd.it/m8drywndsrpf1.png?width=216&crop=smart&auto=webp&s=b23c968c09472ff69d01520b0f71fecd9e863dc7', 'width': 216}, {'height': 110, 'url': 'https://preview.redd.it/m8drywndsrpf1.png?width=320&crop=smart&auto=webp&s=fb8166c409cca1da83cfe100444850d3fb1d7e85', 'width': 320}, {'height': 220, 'url': 'https://preview.redd.it/m8drywndsrpf1.png?width=640&crop=smart&auto=webp&s=c145023f86ad4a42eb54058759f56d3e258a0541', 'width': 640}, {'height': 331, 'url': 'https://preview.redd.it/m8drywndsrpf1.png?width=960&crop=smart&auto=webp&s=d0032b18fe2f2e971916001b91d925dcac85b68a', 'width': 960}, {'height': 372, 'url': 'https://preview.redd.it/m8drywndsrpf1.png?width=1080&crop=smart&auto=webp&s=ef1f4b24a308f063ee5a2dcac263a6e71d1cf48b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://preview.redd.it/m8drywndsrpf1.png?auto=webp&s=91fb19111bfafe79b4cc782ef9ad0dc8686e8784', 'width': 1739}, 'variants': {}}]}
What's the best SLM in today's world
1
I wanna fine-tune using QLoRA on an SLM (this is for research) but I couldn't figure out which one to experiment with.
2025-09-17T18:26:47
https://www.reddit.com/r/LocalLLaMA/comments/1njl5ho/what_s_the_best_slm_in_today_s_world/
m_rain_bow
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1njl5ho
false
null
t3_1njl5ho
/r/LocalLLaMA/comments/1njl5ho/what_s_the_best_slm_in_today_s_world/
false
false
self
1
null
Locally Hosted LLM Solution for Small-Medium Construction Firm
1
Hello fellow redditors! I am new to the AI/ML space, but I have found a serious interest in AI after doing some ML research this summer. Currently I am a CPE student interning for a small/medium-sized construction firm, and I am putting together a proposal to deploy a localized LLM server. I am honestly just looking for a bit of guidance on hardware that would be good enough for our use cases. The current use of AI in our workflows is mainly document processing: looking over contracts and asking questions about the content of the contract. I don't think any image/video gen will ever be needed. I have been running small models on my M4 MacBook just to test feasibility (gemma3, qwen2.5, etc.), but I would like to use models with ~70B parameters along with fine-tuning models to fit more to our company needs. Any tips would be greatly appreciated!
2025-09-17T18:18:36
https://www.reddit.com/r/LocalLLaMA/comments/1njkxfi/locally_hosted_llm_solution_for_smallmedium/
FadeOuttaIt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1njkxfi
false
null
t3_1njkxfi
/r/LocalLLaMA/comments/1njkxfi/locally_hosted_llm_solution_for_smallmedium/
false
false
self
1
null
Arcee going Apache 2.0!!!
75
CTO of Arcee just announced that their AFM-4.5B model - [https://huggingface.co/arcee-ai/AFM-4.5B](https://huggingface.co/arcee-ai/AFM-4.5B) as well as upcoming models will all be fully open source! [https://x.com/LucasAtkins7/status/1968371293184741876](https://x.com/LucasAtkins7/status/1968371293184741876)
2025-09-17T18:11:16
https://www.reddit.com/r/LocalLLaMA/comments/1njkqdm/arcee_going_apache_20/
Lowgooo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1njkqdm
false
null
t3_1njkqdm
/r/LocalLLaMA/comments/1njkqdm/arcee_going_apache_20/
false
false
self
75
{'enabled': False, 'images': [{'id': 'hjNwg-U3fYi-Y7Fp2linMpuCh9qPSWHqciqqGoe_2r4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/hjNwg-U3fYi-Y7Fp2linMpuCh9qPSWHqciqqGoe_2r4.png?width=108&crop=smart&auto=webp&s=20aa0f073261683a5835f0ae52ee41bf0e5900cb', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/hjNwg-U3fYi-Y7Fp2linMpuCh9qPSWHqciqqGoe_2r4.png?width=216&crop=smart&auto=webp&s=956477055ed5d365d1ce6152220ffcbaf318c1d5', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/hjNwg-U3fYi-Y7Fp2linMpuCh9qPSWHqciqqGoe_2r4.png?width=320&crop=smart&auto=webp&s=fb1f6953186a3052b226a1d0c2e57f49649ad866', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/hjNwg-U3fYi-Y7Fp2linMpuCh9qPSWHqciqqGoe_2r4.png?width=640&crop=smart&auto=webp&s=5484950e32a8117ac7bb3b7de07c8f7f2ef1a514', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/hjNwg-U3fYi-Y7Fp2linMpuCh9qPSWHqciqqGoe_2r4.png?width=960&crop=smart&auto=webp&s=debefa20c6136e829dacaa2b08dd95e554540039', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/hjNwg-U3fYi-Y7Fp2linMpuCh9qPSWHqciqqGoe_2r4.png?width=1080&crop=smart&auto=webp&s=85d79381e448088f2988a3f7f064e33abb47220d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/hjNwg-U3fYi-Y7Fp2linMpuCh9qPSWHqciqqGoe_2r4.png?auto=webp&s=a5ba4f74772d98e66f934819d703c60ea9b5e680', 'width': 1200}, 'variants': {}}]}
A Quick Look At The AMD Instinct MI355X With ROCm 7.0
14
Instinct MI355X is coming to market. 288GB HBM3E memory, 8TB/s bandwidth, and expanded FP6 and FP4 datatype support. Phoronix had a limited hands-on: > Yesterday I was invited along with a small group of others to try out the AMD Instinct MI355X accelerator down in Austin, Texas. The AMD Instinct MI355X is fully supported with the newly-released AMD ROCm 7.0. > > The AMD Instinct MI355X "hands on" yesterday to celebrate ROCm 7.0 and the MI350X/MI355X hardware ended up being just following a guided Jupyter Notebook for an AI demo... And one that wasn't even performance-related or anything unique to the AMD Instinct MI350 series capabilities. Not quite the hands-on time expected with originally hoping there would be enough time to tap some MI355X accelerators unconstrained and run some AI/LLM benchmarks at least with Llama.cpp and vLLM. Nevertheless via Jupyter Notebook's terminal allowed for poking at the MI355X on ROCm 7.0 during this demo session.
2025-09-17T18:10:09
https://www.phoronix.com/news/AMD-Instinct-MI355X-ROCm-7.0
Balance-
phoronix.com
1970-01-01T00:00:00
0
{}
1njkp7q
false
null
t3_1njkp7q
/r/LocalLLaMA/comments/1njkp7q/a_quick_look_at_the_amd_instinct_mi355x_with_rocm/
false
false
default
14
null
Any good voice dubbing software for audio/video?
1
I'm looking for something that can dub audio while keeping the same input length, supports custom words, and works on Windows. I just want to be able to dub into some languages like German and possibly a few more; it does not have to do it in real time. A decent-sounding voice, or a reference voice with a good translation, would be good. Are there any public resources that do this?
2025-09-17T18:07:05
https://www.reddit.com/r/LocalLLaMA/comments/1njkm96/any_good_voice_dubbing_software_for_audiovideo/
cydedotxyz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1njkm96
false
null
t3_1njkm96
/r/LocalLLaMA/comments/1njkm96/any_good_voice_dubbing_software_for_audiovideo/
false
false
self
1
null
SvelteKit-based WebUI by allozaur · Pull Request #14839 · ggml-org/llama.cpp
50
"This PR introduces a complete rewrite of the llama.cpp web interface, migrating from a React-based implementation to a modern SvelteKit architecture. The new implementation provides significant improvements in user experience, developer tooling, and feature capabilities while maintaining full compatibility with the llama.cpp server API." ✨ Feature Enhancements # File Handling * **Dropdown Upload Menu**: Type-specific file selection (Images/Text/PDFs) * **Universal Preview System**: Full-featured preview dialogs for all supported file types * **PDF Dual View**: Text extraction + page-by-page image rendering * **Enhanced Support**: SVG/WEBP→PNG conversion, binary detection, syntax highlighting * **Vision Model Awareness**: Smart UI adaptation based on model capabilities * **Graceful Failure**: Proper error handling and user feedback for unsupported file types # Advanced Chat Features * **Reasoning Content**: Dedicated thinking blocks with streaming support * **Conversation Branching**: Full tree structure with parent-child relationships * **Message Actions**: Edit, regenerate, delete with intelligent branch management * **Keyboard Shortcuts**: * `Ctrl+Shift+N`: Start new conversation * `Ctrl+Shift+D`: Delete current conversation * `Ctrl+K`: Focus search conversations * `Ctrl+V`: Paste files and content to conversation * `Ctrl+B`: Toggle sidebar * `Enter`: Send message * `Shift+Enter`: New line in message * **Smart Paste**: Auto-conversion of long text to files with customizable threshold (default 2000 characters) # Server Integration * **Slots Monitoring**: Real-time server resource tracking during generation * **Context Management**: Advanced context error handling and recovery * **Server Status**: Comprehensive server state monitoring * **API Integration**: Full `reasoning_content` and slots endpoint support # 🎨 User Experience Improvements # Interface Design * **Modern UI Components**: Consistent design system with ShadCN components * **Responsive Layout**: 
Adaptive sidebar and mobile-friendly design * **Theme System**: Seamless auto/light/dark mode switching * **Visual Hierarchy**: Clear information architecture and content organization # Interaction Patterns * **Keyboard Navigation**: Complete keyboard accessibility with shortcuts * **Drag & Drop**: Intuitive file upload with visual feedback * **Smart Defaults**: Context-aware UI behavior and intelligent defaults (sidebar auto-management, conversation naming) * **Progressive Disclosure**: Advanced features available without cluttering basic interface # Feedback & Communication * **Loading States**: Clear progress indicators during operations * **Error Handling**: User-friendly error messages with recovery suggestions * **Status Indicators**: Real-time server status and resource monitoring * **Confirmation Dialogs**: Prevent accidental data loss with confirmation prompts
2025-09-17T18:01:20
https://github.com/ggml-org/llama.cpp/pull/14839
jacek2023
github.com
1970-01-01T00:00:00
0
{}
1njkgkf
false
null
t3_1njkgkf
/r/LocalLLaMA/comments/1njkgkf/sveltekitbased_webui_by_allozaur_pull_request/
false
false
default
50
{'enabled': False, 'images': [{'id': 'lfHyUwYZ8aPaE0KlMcSCbv60pDFTFEz7jL_zRZTwdcw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/lfHyUwYZ8aPaE0KlMcSCbv60pDFTFEz7jL_zRZTwdcw.png?width=108&crop=smart&auto=webp&s=c16c3fe13d69138297ec5fe19726cc31ff7ff801', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/lfHyUwYZ8aPaE0KlMcSCbv60pDFTFEz7jL_zRZTwdcw.png?width=216&crop=smart&auto=webp&s=19efd697516a74988108b2f999ba8381f1670bb7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/lfHyUwYZ8aPaE0KlMcSCbv60pDFTFEz7jL_zRZTwdcw.png?width=320&crop=smart&auto=webp&s=050835a783cc65b1dbc829f6a57d963be81fe2bd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/lfHyUwYZ8aPaE0KlMcSCbv60pDFTFEz7jL_zRZTwdcw.png?width=640&crop=smart&auto=webp&s=94339bd1349da02ea7d59583c915f4eb21952ba8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/lfHyUwYZ8aPaE0KlMcSCbv60pDFTFEz7jL_zRZTwdcw.png?width=960&crop=smart&auto=webp&s=c567d1c3d5a5066c816fc50460e5f81d43086139', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/lfHyUwYZ8aPaE0KlMcSCbv60pDFTFEz7jL_zRZTwdcw.png?width=1080&crop=smart&auto=webp&s=a40fea93f172a2f0f33791da74b76fc448bea0fe', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/lfHyUwYZ8aPaE0KlMcSCbv60pDFTFEz7jL_zRZTwdcw.png?auto=webp&s=083cadce5e8259edd76b57fa02fa6d75c862d208', 'width': 1200}, 'variants': {}}]}
Best OS with controls for improving latency?
0
Which OS do we feel gives the best control over real-time performance / latency? List your preference and why. Also, if you found an OS to be horrible, please say why. I haven't tried Windows, so I'm curious whether it actually works. Bonus points for cool and obscure Linux distros.
2025-09-17T17:54:53
https://www.reddit.com/r/LocalLLaMA/comments/1njk9q6/best_os_with_controls_for_improving_latency/
Smooth-Professor-452
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1njk9q6
false
null
t3_1njk9q6
/r/LocalLLaMA/comments/1njk9q6/best_os_with_controls_for_improving_latency/
false
false
self
0
null
I made LLaMA 1B play maze-runner… GTPO wins by a nose
19
Hey everyone! I ran a little demo comparing **GRPO** and **GTPO** by teaching a LLaMA 1B model to solve a *tiny maze it had never seen before*.

👉 The setup:

* The model wasn't allowed to *see* the maze. Instead, it could only answer with moves: **forward, right, or left**.
* The video shows the **reward signal**.
* The "game" for the model was to maximize its reward, which meant navigating the maze correctly step by step.

👉 What's happening in the video:

* The video presents the **average reward step by step**, which is why the curves go up and down: you're watching the learning process in real time.
* The "goal" was defined as the model reaching a point where it gave **at least 50% correct answers** and another **50% nearly perfect answers** (reward close to maximum).
* That way, success wasn't just about randomly guessing a few right moves out of 36 possibilities, but about actually *learning the maze logic*.

👉 GRPO vs GTPO:

* We defined conflicts only on the **first tokens**, using the tokens that the reward identified as correct.
* GTPO didn't require formula changes, just a tweak in how we defined conflicts.
* Even on free Colab GPUs with a small LoRA, **GTPO was ~5% more efficient than GRPO** at reaching the goal.

The experiment wasn't about solving mazes per se, but about testing how well these algorithms can actually *teach* small models to do exactly what we want, in this case a simple but strict task.

We'll be releasing **Colab-friendly notebooks** soon so anyone can try GTPO hands-on.

Paper & GitHub if you want to dive deeper:
📄 Paper: [https://arxiv.org/abs/2508.03772](https://arxiv.org/abs/2508.03772)
💻 GitHub: [https://github.com/winstonsmith1897/GTPO](https://github.com/winstonsmith1897/GTPO)

🙏 Huge thanks to everyone who commented on my previous post. Your feedback really helped me think through this little demo, try GTPO outside of math-only tasks, and even switch models.
Next steps:

* Release more **user-friendly notebooks**
* Update the algorithm to the latest version of **unsloth** and bring it to **TRL**
* Explore **new tasks** to test GTPO on
* Understand its limitations more deeply and see **how to improve it**
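For readers unfamiliar with the group-relative setup that GRPO and GTPO share, here is a minimal, illustrative sketch of the group-normalized advantage computation at its core (plain Python; not the authors' implementation, and GTPO's conflict handling is omitted):

```python
from statistics import mean, stdev

def group_advantages(rewards, eps=1e-8):
    """Group-relative advantages as used in GRPO-style training:
    each sampled completion is scored against the mean/std of its own
    group, so no learned value function (critic) is needed."""
    mu = mean(rewards)
    sigma = stdev(rewards) if len(rewards) > 1 else 0.0
    return [(r - mu) / (sigma + eps) for r in rewards]

# A group of 4 sampled maze-move sequences, scored by the reward signal:
advs = group_advantages([1.0, 0.0, 0.5, 0.5])
# Completions above the group mean get positive advantage, below get negative;
# average-reward completions get ~0 and contribute little gradient.
```

Completions are then reweighted by these advantages in the policy-gradient loss; GTPO additionally adjusts which tokens receive credit when completions within a group conflict.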
2025-09-17T17:50:28
https://v.redd.it/ctq3xw2tirpf1
Gildarts777
v.redd.it
1970-01-01T00:00:00
0
{}
1njk5ef
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/ctq3xw2tirpf1/DASHPlaylist.mpd?a=1760723444%2CZDlkYmFkYTU1OTM5YTBlMzFhYmI2Mjg1YzY2NGQxOGE0YjAyYjFlZWVjNjliY2UzZGQwZmYzY2MxMDZlYmE5NA%3D%3D&v=1&f=sd', 'duration': 8, 'fallback_url': 'https://v.redd.it/ctq3xw2tirpf1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/ctq3xw2tirpf1/HLSPlaylist.m3u8?a=1760723444%2COGVjN2JmMDFmNjA0NWY2Y2ZjNmVmMTFjOGE3ZDk3YjBiYjEzMjdlMDAyMGQzM2RiY2YyZTNmNDgxNjE3M2Q2Mw%3D%3D&v=1&f=sd', 'is_gif': True, 'scrubber_media_url': 'https://v.redd.it/ctq3xw2tirpf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
t3_1njk5ef
/r/LocalLLaMA/comments/1njk5ef/i_made_llama_1b_play_mazerunner_gtpo_wins_by_a/
false
false
https://external-preview…e6c43c5817baa758
19
{'enabled': False, 'images': [{'id': 'ajhveWV0dnZpcnBmMQNEfvJVMDQ8lExFs17QC-LIJvVae-cVWFKNs-OgKDbG', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/ajhveWV0dnZpcnBmMQNEfvJVMDQ8lExFs17QC-LIJvVae-cVWFKNs-OgKDbG.png?width=108&crop=smart&format=pjpg&auto=webp&s=ac6b6464876aa7d0220ab251e1a88178e0607cc3', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/ajhveWV0dnZpcnBmMQNEfvJVMDQ8lExFs17QC-LIJvVae-cVWFKNs-OgKDbG.png?width=216&crop=smart&format=pjpg&auto=webp&s=005bb7733367abe802538a425da9af3f1242c1fc', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/ajhveWV0dnZpcnBmMQNEfvJVMDQ8lExFs17QC-LIJvVae-cVWFKNs-OgKDbG.png?width=320&crop=smart&format=pjpg&auto=webp&s=a511b389a24f32723780c3efcb8928e371f2c382', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/ajhveWV0dnZpcnBmMQNEfvJVMDQ8lExFs17QC-LIJvVae-cVWFKNs-OgKDbG.png?width=640&crop=smart&format=pjpg&auto=webp&s=07f1590d2f944cbe960b7e019089342b89b014ac', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/ajhveWV0dnZpcnBmMQNEfvJVMDQ8lExFs17QC-LIJvVae-cVWFKNs-OgKDbG.png?width=960&crop=smart&format=pjpg&auto=webp&s=b8778e7d395078686d08658b74bceb76896099cf', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/ajhveWV0dnZpcnBmMQNEfvJVMDQ8lExFs17QC-LIJvVae-cVWFKNs-OgKDbG.png?width=1080&crop=smart&format=pjpg&auto=webp&s=d78c17fb0f8a01e83f1ad87cc82dc7b8e1fe0b1c', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://external-preview.redd.it/ajhveWV0dnZpcnBmMQNEfvJVMDQ8lExFs17QC-LIJvVae-cVWFKNs-OgKDbG.png?format=pjpg&auto=webp&s=dccbd9b5cfc9a7312cbb5914678539b722af875b', 'width': 1200}, 'variants': {}}]}
Our 4th AMA: The LMStudio Team! (Thursday, 11 AM-1 PM PDT)
69
2025-09-17T17:44:17
https://i.redd.it/4xt9enbairpf1.png
XMasterrrr
i.redd.it
1970-01-01T00:00:00
0
{}
1njjz7j
false
null
t3_1njjz7j
/r/LocalLLaMA/comments/1njjz7j/our_4th_ama_the_lmstudio_team_thursday_11_am1_pm/
false
true
default
69
{'enabled': True, 'images': [{'id': '4xt9enbairpf1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/4xt9enbairpf1.png?width=108&crop=smart&auto=webp&s=ab5e3d785ad519a0503c5e14eeadccf93a3f8ad8', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/4xt9enbairpf1.png?width=216&crop=smart&auto=webp&s=173ec0471e07b402aa792a5427a019ee202dc57b', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/4xt9enbairpf1.png?width=320&crop=smart&auto=webp&s=77cfcaec2bcbf815a74e19044adee79b09296543', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/4xt9enbairpf1.png?width=640&crop=smart&auto=webp&s=1cc1ac16a9a2b96134934fcb2f81a9f3d4916b31', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/4xt9enbairpf1.png?width=960&crop=smart&auto=webp&s=18f8fbb9b56dd16d329849983f4e495709aa0b1e', 'width': 960}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/4xt9enbairpf1.png?auto=webp&s=7ded56a4cb1e61347ad4852333328de19e99fcf1', 'width': 1024}, 'variants': {}}]}
Kimi-K2 0905, DeepSeek V3.1, Qwen3-Next-80B-A3B, Grok 4, and others on fresh SWE-bench–style tasks collected in August 2025
136
https://preview.redd.it/54d12kuq8rpf1.png?width=5684&format=png&auto=webp&s=d6967f375cf67e45ff0fde346d6d6bb73abc997e

Hi all, I'm Anton from Nebius.

We've updated the [SWE-rebench leaderboard](https://swe-rebench.com) with model evaluations of Grok 4, Kimi K2 Instruct 0905, DeepSeek-V3.1, and Qwen3-Next-80B-A3B-Instruct on 52 fresh tasks.

Key takeaways from this update:

* **Kimi-K2 0905** has improved significantly (34.6% -> 42.3% resolved rate) and is now in the top 3 open-source models.
* **DeepSeek V3.1** also improved, though less dramatically. What's interesting is how many more tokens it now produces.
* **Qwen3-Next-80B-A3B-Instruct**, despite not being trained directly for coding, performs on par with the 30B Coder. To reflect model speed, we're also thinking about how best to report efficiency metrics such as tokens/sec on the leaderboard.
* Finally, **Grok 4**: the frontier model from xAI has now entered the leaderboard and is among the top performers. It'll be fascinating to watch how it develops.

All **52 new tasks collected in August** are available on the site: you can explore every problem in detail.
2025-09-17T17:32:11
https://www.reddit.com/r/LocalLLaMA/comments/1njjn2a/kimik2_0905_deepseek_v31_qwen3next80ba3b_grok_4/
CuriousPlatypus1881
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1njjn2a
false
null
t3_1njjn2a
/r/LocalLLaMA/comments/1njjn2a/kimik2_0905_deepseek_v31_qwen3next80ba3b_grok_4/
false
false
https://b.thumbs.redditm…_49dQyMZXKqg.jpg
136
null
We cut inference costs ~60% by building an intelligent router, here’s how
0
We kept hitting the same problem building LLM apps: inference was either too expensive, too low quality, or too brittle.

Patterns we saw:

→ GPT-4 everywhere = huge bills
→ Smaller models only = bad UX
→ Custom routing scripts = constant breakage

We built a smarter router that does four things:

→ Analyzes the prompt in real time to decide which model is best
→ Applies a configurable cost/quality bias
→ Uses multi-tier semantic caching so repeats are instant
→ Handles failover across providers automatically

Results: ~60% lower spend, more stable infra, no vendor lock-in.

Curious if anyone else here is experimenting with prompt-aware routing? Would love to trade notes.

Support us on Product Hunt: [https://www.producthunt.com/posts/adaptive?utm_source=other&utm_medium=social](https://www.producthunt.com/posts/adaptive?utm_source=other&utm_medium=social)
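The OP doesn't share code, but the first two ideas they describe (prompt analysis plus a cost/quality bias, fronted by a cache) can be sketched as a toy heuristic. All model names, thresholds, and scoring rules below are hypothetical illustrations, not their product:

```python
import hashlib

# Hypothetical model table: (name, relative cost, relative quality 0-1)
MODELS = [("small-model", 1.0, 0.6), ("mid-model", 4.0, 0.8), ("large-model", 15.0, 0.95)]
_cache = {}  # naive exact-match cache standing in for multi-tier semantic caching

def complexity(prompt: str) -> float:
    """Crude proxy: long prompts and code-looking tokens suggest harder tasks."""
    score = min(len(prompt) / 500, 1.0)
    if any(tok in prompt for tok in ("def ", "class ", "SELECT", "Traceback")):
        score = max(score, 0.8)
    return score

def route(prompt: str, cost_bias: float = 0.5):
    """Return (model_name, cache_hit). Higher cost_bias relaxes the quality
    bar, favouring cheaper models for the same prompt."""
    key = hashlib.sha256(prompt.encode()).hexdigest()
    if key in _cache:
        return _cache[key], True  # repeat prompt: answer served from cache
    bar = complexity(prompt) * (1.0 - 0.5 * cost_bias)
    # Cheapest model whose quality clears the bar; fall back to the biggest.
    for name, cost, quality in sorted(MODELS, key=lambda m: m[1]):
        if quality >= bar:
            _cache[key] = name
            return name, False
    _cache[key] = MODELS[-1][0]
    return MODELS[-1][0], False

model, cached = route("Summarize this short note.")  # easy prompt -> cheap model
```

Production routers replace the keyword heuristic with a learned classifier and the dict with embedding-similarity caching, but the cost/quality trade-off works the same way.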
2025-09-17T17:16:51
https://www.reddit.com/r/LocalLLaMA/comments/1njj7xt/we_cut_inference_costs_60_by_building_an/
botirkhaltaev
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1njj7xt
false
null
t3_1njj7xt
/r/LocalLLaMA/comments/1njj7xt/we_cut_inference_costs_60_by_building_an/
false
false
self
0
null
How I'm using Claude/ChatGPT + voice to replace my entire multi-monitor setup
0
Finally found the killer use case for LLMs that nobody talks about: they make multiple monitors obsolete.

I've been deep in the AI tool ecosystem for the past year, and something clicked recently. I realized I was using my dual-monitor setup completely wrong. All those terminal panes, documentation tabs, and IDE windows? They were just poor substitutes for what AI assistants do better.

**Current setup:**

* Single 14" MacBook Pro (36GB RAM for running local models when needed)
* Sony WH-1000XM5 headphones
* Voice input via Wispr Flow (hitting 175 WPM)
* AI tools: Claude Desktop, ChatGPT, Cursor IDE

**The revelation:** When you can voice-chat with an AI that knows your entire codebase (Cursor), has context on your problem (Claude), and can research anything instantly (ChatGPT), you don't need 27 browser tabs open anymore.

**Wild productivity gains from voice + LLMs:**

* Debug while walking - explain the bug out loud, get solutions back
* Architecture planning during commute/exercise
* Code reviews while doing chores
* Documentation writing at 175 WPM while pacing

**The unexpected benefit:** Physical movement + verbal reasoning with AI creates a completely different problem-solving mode. Bugs that stumped me for hours at my desk get solved in 10 minutes of walking and talking it through with Claude.

**Technical setup for those interested:**

* Claude Desktop with MCP (connected to my Oura Ring for stress-aware assistance)
* ChatGPT Advanced Voice for rubber-duck debugging
* Cursor as primary IDE (sorry vim, you served well)
* Wispr Flow for system-wide voice input
* Rectangle for the occasional window management

I'm not saying abandon your local models or stop self-hosting - I still run Ollama for sensitive stuff. But for 90% of daily dev work, this cloud AI + voice setup absolutely destroys the traditional multi-monitor approach.

Anyone else discovering that LLMs are changing not just HOW we code but WHERE and WHEN we can code effectively?
2025-09-17T16:52:21
https://zackproser.com/blog/2025-ai-engineer-setup
Smooth-Loquat-4954
zackproser.com
1970-01-01T00:00:00
0
{}
1njijwr
false
null
t3_1njijwr
/r/LocalLLaMA/comments/1njijwr/how_im_using_claudechatgpt_voice_to_replace_my/
false
false
default
0
null
Vision–Language Models for describing people
1
I'm working on a project to convert an image taken from a web cam and describe the person in the image, e.g. hair colour, eye colour, facial expression, clothing. I've played around with google/PaliGemma-3b-mix-224 which gives exactly what I want but it takes about 5 minutes to generate a description on my CPU - are there any smaller models anyone would recommend?
2025-09-17T16:48:20
https://www.reddit.com/r/LocalLLaMA/comments/1njifz0/visionlanguage_models_for_describing_people/
juiip
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1njifz0
false
null
t3_1njifz0
/r/LocalLLaMA/comments/1njifz0/visionlanguage_models_for_describing_people/
false
false
self
1
null
GPU advice for running local coding LLMs
5
I’ve got a Threadripper 3995WX (64c/128t), 256GB RAM, plenty of NVMe, but no GPU. I want to run big open-source coding models like CodeLlama, Qwen-Coder, StarCoder2 locally, something close to Claude Code. If possible ;) Budget is around $6K. I’ve seen the RTX 6000 Ada (48GB) suggested as the easiest single-card choice, but I also hear dual 4090s or even older 3090s could be better value. I’m fine with quantized models if the code quality is still pretty good. Anyone here running repo-wide coding assistants locally? What GPUs and software stacks are you using (Ollama, vLLM, TGI, Aider, Continue, etc.)? Is it realistic to get something close to Claude Code performance on large codebases with current open models? Thanks for any pointers before I spend the money on the gpu!
2025-09-17T16:38:28
https://www.reddit.com/r/LocalLLaMA/comments/1nji6e0/gpu_advice_for_running_local_coding_llms/
mak3rdad
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nji6e0
false
null
t3_1nji6e0
/r/LocalLLaMA/comments/1nji6e0/gpu_advice_for_running_local_coding_llms/
false
false
self
5
null
Tensor Parallels with different GPUs
0
I'm looking to run vLLM with tensor parallelism across 4 GPUs. I have 3 GPUs now (3x A4000), which work fine, but I have two broken 3090s (different AIBs) I can get fixed for ~$300 each, or I can buy another A4000 for ~$600-700. Obviously the 3090s are the better deal, but would running tensor parallelism on 3x A4000 and 1x 3090 (or 2x/2x) pose issues? They have different amounts of VRAM, different memory bandwidth, etc.
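One practical consequence worth checking before buying: tensor parallelism typically shards weights evenly across ranks, so the card with the least VRAM caps what every rank can hold, and the 3090's extra 8 GB goes unused. A back-of-envelope sketch (the 20% overhead factor is an assumption for activations/KV cache, not vLLM's actual allocator):

```python
def tp_fits(model_gb: float, gpu_vram_gb: list, overhead_frac: float = 0.2) -> bool:
    """Even tensor-parallel sharding gives each rank model_gb / n_gpus of
    weights; the SMALLEST card must hold its shard plus runtime overhead.
    Rough heuristic only; real frameworks allocate differently."""
    n = len(gpu_vram_gb)
    per_gpu_need = (model_gb / n) * (1 + overhead_frac)
    return per_gpu_need <= min(gpu_vram_gb)

# 3x A4000 (16 GB) + 1x 3090 (24 GB): every rank gets an equal shard,
# so a 48 GB model needs 48/4 * 1.2 = 14.4 GB on EACH card.
fits = tp_fits(48, [16, 16, 16, 24])
```

The slowest card also gates step time, since TP ranks synchronize every layer, and vLLM further expects the TP degree to divide the model's attention head count, so uneven 3+1 splits have more than just memory to worry about.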
2025-09-17T16:34:49
https://www.reddit.com/r/LocalLLaMA/comments/1nji2vp/tensor_parallels_with_different_gpus/
hoppedsketchy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nji2vp
false
null
t3_1nji2vp
/r/LocalLLaMA/comments/1nji2vp/tensor_parallels_with_different_gpus/
false
false
self
0
null
Usually LLMs are trying to be good at a bit of everything and focus on math and coding but somehow still suck at coding.
0
Qwen for example hallucinates a lot. Is there any model that runs locally and is actually good at coding?
2025-09-17T15:59:16
https://www.reddit.com/r/LocalLLaMA/comments/1njh3d3/usually_llms_are_trying_to_be_good_at_a_bit_of/
NoFudge4700
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1njh3d3
false
null
t3_1njh3d3
/r/LocalLLaMA/comments/1njh3d3/usually_llms_are_trying_to_be_good_at_a_bit_of/
false
false
self
0
null
Threadripper 7960x with 512 gb DDR5 4800 RAM, and both a 5090 and a 4090
1
I’m building a rig with the above specs for Houdini and Comfy UI purposes, and since I have the thing laying around I was wondering what sort of token count I might be able to expect to get with the larger models? I’m already getting great results with GPT OSS 120B or 70b-ish sized models on my 128gb M1 Ultra, so I’m wondering/hoping if this setup will allow me to go up a tier beyond that in terms of intelligence. It’s my understanding that a lot of the newer architectures work well with splitting layers across RAM and VRAM?
2025-09-17T15:45:44
https://www.reddit.com/r/LocalLLaMA/comments/1njgqch/threadripper_7960x_with_512_gb_ddr5_4800_ram_and/
shveddy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1njgqch
false
null
t3_1njgqch
/r/LocalLLaMA/comments/1njgqch/threadripper_7960x_with_512_gb_ddr5_4800_ram_and/
false
false
self
1
null
Magistral Small 2509 has been released
595
[https://huggingface.co/mistralai/Magistral-Small-2509-GGUF](https://huggingface.co/mistralai/Magistral-Small-2509-GGUF) [https://huggingface.co/mistralai/Magistral-Small-2509](https://huggingface.co/mistralai/Magistral-Small-2509)
2025-09-17T15:44:12
https://www.reddit.com/r/LocalLLaMA/comments/1njgovj/magistral_small_2509_has_been_released/
jacek2023
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1njgovj
false
null
t3_1njgovj
/r/LocalLLaMA/comments/1njgovj/magistral_small_2509_has_been_released/
false
false
self
595
{'enabled': False, 'images': [{'id': 'lya4BYVSdGKwEDIK4epA43BL60WtN4IIIDfpTgnEljc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/lya4BYVSdGKwEDIK4epA43BL60WtN4IIIDfpTgnEljc.png?width=108&crop=smart&auto=webp&s=278b633e06af9bab98b4acc1c0c905ebf1d44f90', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/lya4BYVSdGKwEDIK4epA43BL60WtN4IIIDfpTgnEljc.png?width=216&crop=smart&auto=webp&s=779a79a381d966a684d839ba46426bedd5e3b03f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/lya4BYVSdGKwEDIK4epA43BL60WtN4IIIDfpTgnEljc.png?width=320&crop=smart&auto=webp&s=15e13ec5446b9dcfae10e67235e25fcd8a2ed265', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/lya4BYVSdGKwEDIK4epA43BL60WtN4IIIDfpTgnEljc.png?width=640&crop=smart&auto=webp&s=14e03904bf3f936ad1691d3e1bcf8b07536b90d5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/lya4BYVSdGKwEDIK4epA43BL60WtN4IIIDfpTgnEljc.png?width=960&crop=smart&auto=webp&s=d4939690fb489014a78bfd94bb626231d36b1560', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/lya4BYVSdGKwEDIK4epA43BL60WtN4IIIDfpTgnEljc.png?width=1080&crop=smart&auto=webp&s=88ac196e72e6d85cfa4e597fe8093955ad901caf', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/lya4BYVSdGKwEDIK4epA43BL60WtN4IIIDfpTgnEljc.png?auto=webp&s=77774cf3ef9ca1d6d8ae06248695d0d01d8bacb5', 'width': 1200}, 'variants': {}}]}
Drummer's Cydonia ReduX 22B and Behemoth ReduX 123B - Throwback tunes of the good old days, now with updated tuning! Happy birthday, Cydonia v1!
68
Behemoth ReduX 123B: [https://huggingface.co/TheDrummer/Behemoth-ReduX-123B-v1](https://huggingface.co/TheDrummer/Behemoth-ReduX-123B-v1)

They're updated finetunes of the old Mistral 22B and Mistral 123B 2407. Both bases were arguably peak Mistral (aside from Nemo and >!Miqu!<). I decided to finetune them since the writing/creativity is just... different from what we've got today. They hold up stronger than ever, but they're still old bases, so intelligence and context length aren't up there with the newer base models. Still, they both prove that the smarter, stronger newer models are missing out on *something*.

I figured I'd release it on Cydonia v1's one-year anniversary. Can't believe it's been a year and a half since I started this journey with you all. Hope you enjoy!
2025-09-17T15:38:21
https://huggingface.co/TheDrummer/Cydonia-ReduX-22B-v1
TheLocalDrummer
huggingface.co
1970-01-01T00:00:00
0
{}
1njgj9s
false
null
t3_1njgj9s
/r/LocalLLaMA/comments/1njgj9s/drummers_cydonia_redux_22b_and_behemoth_redux/
false
false
https://external-preview…e551bc8dee0bc7ac
68
{'enabled': False, 'images': [{'id': 'ZtYL1Go2LfQzSyi5MifYgd-epIPOSy-LGxJA6DFzU3k', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ZtYL1Go2LfQzSyi5MifYgd-epIPOSy-LGxJA6DFzU3k.png?width=108&crop=smart&auto=webp&s=bcf440b12021396a8643e9b43dc6c2b6709ba8d2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ZtYL1Go2LfQzSyi5MifYgd-epIPOSy-LGxJA6DFzU3k.png?width=216&crop=smart&auto=webp&s=cd5a3cd2cf24962eee59893eee6ba8610960b539', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ZtYL1Go2LfQzSyi5MifYgd-epIPOSy-LGxJA6DFzU3k.png?width=320&crop=smart&auto=webp&s=4697209c3a51d55788210369fe0499e83a9b1083', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ZtYL1Go2LfQzSyi5MifYgd-epIPOSy-LGxJA6DFzU3k.png?width=640&crop=smart&auto=webp&s=18777ba97955acd7508bb2f53af88f4c18be1858', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ZtYL1Go2LfQzSyi5MifYgd-epIPOSy-LGxJA6DFzU3k.png?width=960&crop=smart&auto=webp&s=ea28b80c7ad8bc2a3292d85234b5d97644326836', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ZtYL1Go2LfQzSyi5MifYgd-epIPOSy-LGxJA6DFzU3k.png?width=1080&crop=smart&auto=webp&s=3ba9f1570e02d228227f732ca6f83412b1956491', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ZtYL1Go2LfQzSyi5MifYgd-epIPOSy-LGxJA6DFzU3k.png?auto=webp&s=2ccd4fa616d7d412135f2bbea60f10e24f62c899', 'width': 1200}, 'variants': {}}]}
China bans its biggest tech companies from acquiring Nvidia chips, says report — Beijing claims its homegrown AI processors now match H20 and RTX Pro 6000D
748
2025-09-17T15:37:22
https://www.tomshardware.com/tech-industry/artificial-intelligence/china-bans-its-biggest-tech-companies-from-acquiring-nvidia-chips-says-report-beijing-claims-its-homegrown-ai-processors-now-match-h20-and-rtx-pro-6000d
balianone
tomshardware.com
1970-01-01T00:00:00
0
{}
1njgicz
false
null
t3_1njgicz
/r/LocalLLaMA/comments/1njgicz/china_bans_its_biggest_tech_companies_from/
false
false
default
748
{'enabled': False, 'images': [{'id': '8TEqL7hV1nddGcswRwl5w0myECFYu5Ll4SYjP8Vx1jY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/8TEqL7hV1nddGcswRwl5w0myECFYu5Ll4SYjP8Vx1jY.jpeg?width=108&crop=smart&auto=webp&s=644ec354bbc375035b0f7996b94ae8e2803e14de', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/8TEqL7hV1nddGcswRwl5w0myECFYu5Ll4SYjP8Vx1jY.jpeg?width=216&crop=smart&auto=webp&s=e5c4e455d028d07e04dd06e726ac38124245ace1', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/8TEqL7hV1nddGcswRwl5w0myECFYu5Ll4SYjP8Vx1jY.jpeg?width=320&crop=smart&auto=webp&s=e0ae6c06a867f19c5035cc5ca4ded6be473071dc', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/8TEqL7hV1nddGcswRwl5w0myECFYu5Ll4SYjP8Vx1jY.jpeg?width=640&crop=smart&auto=webp&s=a29ca54687a9865b636eb76fe44ba0da1943af79', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/8TEqL7hV1nddGcswRwl5w0myECFYu5Ll4SYjP8Vx1jY.jpeg?width=960&crop=smart&auto=webp&s=3ad0d2f3cc2631ea21df841a780f58b8aba1a0c6', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/8TEqL7hV1nddGcswRwl5w0myECFYu5Ll4SYjP8Vx1jY.jpeg?width=1080&crop=smart&auto=webp&s=1e6cd94726f4cef414098bdbe89604ee82d4e220', 'width': 1080}], 'source': {'height': 3383, 'url': 'https://external-preview.redd.it/8TEqL7hV1nddGcswRwl5w0myECFYu5Ll4SYjP8Vx1jY.jpeg?auto=webp&s=dcf993b02afa01b14d612cbef5db43833e9ca45f', 'width': 6014}, 'variants': {}}]}
Qwen3 Coder Plus
52
Just noticed [https://openrouter.ai/qwen/qwen3-coder-plus](https://openrouter.ai/qwen/qwen3-coder-plus) (Not open though!)
2025-09-17T15:30:00
https://www.reddit.com/r/LocalLLaMA/comments/1njgb5x/qwen3_coder_plus/
Dependent_Factor_204
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1njgb5x
false
null
t3_1njgb5x
/r/LocalLLaMA/comments/1njgb5x/qwen3_coder_plus/
false
false
self
52
{'enabled': False, 'images': [{'id': 'nKkn2q46kfnUUt46gfjZYwarHoQB3xchGtKUGuxDbqc', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/nKkn2q46kfnUUt46gfjZYwarHoQB3xchGtKUGuxDbqc.png?width=108&crop=smart&auto=webp&s=1fa1894b46ac877c2a5e5f29c612190536ea7bd0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/nKkn2q46kfnUUt46gfjZYwarHoQB3xchGtKUGuxDbqc.png?width=216&crop=smart&auto=webp&s=c9be5bb74afdae9e888a6368981597675a6efe3b', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/nKkn2q46kfnUUt46gfjZYwarHoQB3xchGtKUGuxDbqc.png?width=320&crop=smart&auto=webp&s=d28aad26987c63a1e3d82f79c7616a5fdcd510b4', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/nKkn2q46kfnUUt46gfjZYwarHoQB3xchGtKUGuxDbqc.png?width=640&crop=smart&auto=webp&s=67110649504aa73be25f1155af649c56c5907d22', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/nKkn2q46kfnUUt46gfjZYwarHoQB3xchGtKUGuxDbqc.png?width=960&crop=smart&auto=webp&s=219f7b7d0faba4d8dc2fd49715c0a20b7517377b', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/nKkn2q46kfnUUt46gfjZYwarHoQB3xchGtKUGuxDbqc.png?width=1080&crop=smart&auto=webp&s=0f7dd2e0878e444731b53d867a8045cb6f9d2b38', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/nKkn2q46kfnUUt46gfjZYwarHoQB3xchGtKUGuxDbqc.png?auto=webp&s=b57c3586839768b102313d92f9efaeb550a7dbde', 'width': 1200}, 'variants': {}}]}
What can you do with 3 RTX 3090s?
0
Seriously, I got these two other RTXs I was fixing for a buddy ol' pal of mine: just a repaste and a broken fan I had to deal with. But the guy is traveling, and he knows I am super stoked about AI, so he gave me the green light to really test those GPUs. With mine, I will have short-term access to 3 GPUs! And I wanted to do something neat with them, like a successful training job.

What can I actually do with that kind of power? I thought about training a base model into an instruct one, even if by merging with a LoRA. But how big a model can I actually work with? I heard the PCIe lanes would be my biggest bottleneck, especially since one of the cards is connected to a PCIe 3.0 x8 slot, lol. Still, it could be used for a distillation job or something? What is the scope here? I know it is somewhere between "I won't be training a base model in my lifetime with this hardware" and "I could definitely train small diffusion models on a couple of dozen images". But I never actually did a successful training job for LLMs, and besides training diffusion models and making some ML projects in game engines, I have very little experience.

What is a cool LLM training project I should try to fit my rig?
2025-09-17T15:05:22
https://www.reddit.com/r/LocalLLaMA/comments/1njfntg/what_can_you_do_with_3_rtx_3090s/
Claxvii
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1njfntg
false
null
t3_1njfntg
/r/LocalLLaMA/comments/1njfntg/what_can_you_do_with_3_rtx_3090s/
false
false
self
0
null
Evals in 2025: going beyond simple benchmarks to build models people can actually use (aka all the evals you need to know as of Sept 2025 to build actually useful models, an update of the LLM evaluation guidebook)
8
2025-09-17T14:38:45
https://github.com/huggingface/evaluation-guidebook/blob/main/yearly_dives/2025-evaluations-for-useful-models.md
clefourrier
github.com
1970-01-01T00:00:00
0
{}
1njexzd
false
null
t3_1njexzd
/r/LocalLLaMA/comments/1njexzd/evals_in_2025_going_beyond_simple_benchmarks_to/
false
false
default
8
{'enabled': False, 'images': [{'id': 'Nj4dfLFTeWI1yBLiWdTuNEfXq73464pV0TImy7zMAXQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Nj4dfLFTeWI1yBLiWdTuNEfXq73464pV0TImy7zMAXQ.png?width=108&crop=smart&auto=webp&s=82f1639804a1dfdcefc3f5d83ba4c7ce8f303a90', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Nj4dfLFTeWI1yBLiWdTuNEfXq73464pV0TImy7zMAXQ.png?width=216&crop=smart&auto=webp&s=4ef73f2a4d816c3744d135f8a4d9bfc5095b5923', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Nj4dfLFTeWI1yBLiWdTuNEfXq73464pV0TImy7zMAXQ.png?width=320&crop=smart&auto=webp&s=da3a58df3160e03700142bf0f6e381aae92dda67', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Nj4dfLFTeWI1yBLiWdTuNEfXq73464pV0TImy7zMAXQ.png?width=640&crop=smart&auto=webp&s=f935d70916141f1b4b2f959752beca2f487ec4a6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Nj4dfLFTeWI1yBLiWdTuNEfXq73464pV0TImy7zMAXQ.png?width=960&crop=smart&auto=webp&s=65edaa65642ce63a55c51aea96429cfc15c5e183', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Nj4dfLFTeWI1yBLiWdTuNEfXq73464pV0TImy7zMAXQ.png?width=1080&crop=smart&auto=webp&s=2103bb4ec23d7134e7c153b0bcdaa7397ceffdcb', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Nj4dfLFTeWI1yBLiWdTuNEfXq73464pV0TImy7zMAXQ.png?auto=webp&s=58c50febb6a2730d6838a005c6b629a7fe282c42', 'width': 1200}, 'variants': {}}]}
IBM just released Granite Docling
187
granite-docling-258M with Apache 2.0 license for document analysis
2025-09-17T14:33:34
https://huggingface.co/collections/ibm-granite/granite-docling-682b8c766a565487bcb3ca00
ApprehensiveAd3629
huggingface.co
1970-01-01T00:00:00
0
{}
1njet2z
false
null
t3_1njet2z
/r/LocalLLaMA/comments/1njet2z/ibm_just_released_granite_docling/
false
false
default
187
{'enabled': False, 'images': [{'id': '9VrSOe38oy5d5NsTP4RWGmhv_WIFkf4SUZ5rRkZXUAc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9VrSOe38oy5d5NsTP4RWGmhv_WIFkf4SUZ5rRkZXUAc.png?width=108&crop=smart&auto=webp&s=19781a169d12015c3badb42d9597ae1dcbd15476', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9VrSOe38oy5d5NsTP4RWGmhv_WIFkf4SUZ5rRkZXUAc.png?width=216&crop=smart&auto=webp&s=7b0241e0fd95a562b69d9ebf013d37487c1b5262', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9VrSOe38oy5d5NsTP4RWGmhv_WIFkf4SUZ5rRkZXUAc.png?width=320&crop=smart&auto=webp&s=f1e1c0a7375728a9650bf1e2f5b5f1d05719c0ed', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9VrSOe38oy5d5NsTP4RWGmhv_WIFkf4SUZ5rRkZXUAc.png?width=640&crop=smart&auto=webp&s=c208e780cd41278802ac7163a907c2de39c8d987', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9VrSOe38oy5d5NsTP4RWGmhv_WIFkf4SUZ5rRkZXUAc.png?width=960&crop=smart&auto=webp&s=b1fa2a74fa947642177a4d979ad44a6828629285', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9VrSOe38oy5d5NsTP4RWGmhv_WIFkf4SUZ5rRkZXUAc.png?width=1080&crop=smart&auto=webp&s=424b8edb42cbda1beec09c76c0c3d5df45b2238f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9VrSOe38oy5d5NsTP4RWGmhv_WIFkf4SUZ5rRkZXUAc.png?auto=webp&s=d1fcf4c47bc5ad3a3848c509a796bbbfd4d9626b', 'width': 1200}, 'variants': {}}]}
Our GitHub RAG repo just crossed 1000 GitHub stars. Get Answers from agents that you can trust and verify
16
We have added a feature to our RAG pipeline that shows **exact citations, reasoning, and confidence.** We don't just tell you the source file; we **highlight the exact paragraph or row** the AI used to answer the query. Click a citation and it scrolls you straight to that spot in the document.

It works with **PDFs, Excel, CSV, Word, PPTX, Markdown**, and other file formats. It's super useful when you want to **trust but verify** AI answers, especially with long or messy files.

We've open-sourced it here: [https://github.com/pipeshub-ai/pipeshub-ai](https://github.com/pipeshub-ai/pipeshub-ai)

Would love your feedback or ideas! We also have built-in data connectors like Google Drive, Gmail, OneDrive, SharePoint Online, and more, so you don't need to create Knowledge Bases manually.

Demo video: [https://youtu.be/1MPsp71pkVk](https://youtu.be/1MPsp71pkVk)

Always looking for the community to adopt and contribute.
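Paragraph-level citation mapping like this can be approximated with simple lexical overlap between an answer sentence and candidate source paragraphs. A toy sketch of the idea (illustrative only, not PipesHub's actual pipeline, which would typically use embeddings):

```python
def best_source_paragraph(answer_sentence: str, paragraphs: list):
    """Return (index, confidence) of the paragraph whose word overlap with
    the answer sentence is highest. Plain token overlap is enough to show
    the idea; real systems score with embedding similarity instead."""
    a = set(answer_sentence.lower().split())
    best_i, best_score = -1, 0.0
    for i, p in enumerate(paragraphs):
        overlap = len(a & set(p.lower().split())) / max(len(a), 1)
        if overlap > best_score:
            best_i, best_score = i, overlap
    return best_i, best_score

paras = ["Revenue grew 12% in Q2 2024.", "Headcount was flat year over year."]
idx, conf = best_source_paragraph("revenue grew 12% in q2", paras)
# idx points at the paragraph the UI would scroll to and highlight;
# conf is the score surfaced alongside the citation.
```

A low `conf` is exactly the "trust but verify" signal: the UI can flag the citation as weak instead of highlighting a paragraph confidently.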
2025-09-17T14:22:39
https://www.reddit.com/r/LocalLLaMA/comments/1njeixn/our_github_rag_repo_just_crossed_1000_github/
Effective-Ad2060
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1njeixn
false
null
t3_1njeixn
/r/LocalLLaMA/comments/1njeixn/our_github_rag_repo_just_crossed_1000_github/
false
false
self
16
{'enabled': False, 'images': [{'id': 'hO1BK6bS_4mNYaGVC084UtT7OL1PkuHl2mbg6ueHrQM', 'resolutions': [{'height': 96, 'url': 'https://external-preview.redd.it/hO1BK6bS_4mNYaGVC084UtT7OL1PkuHl2mbg6ueHrQM.jpeg?width=108&crop=smart&auto=webp&s=63a546b8ac654187ee9b0d14224e852ef0c3d692', 'width': 108}], 'source': {'height': 99, 'url': 'https://external-preview.redd.it/hO1BK6bS_4mNYaGVC084UtT7OL1PkuHl2mbg6ueHrQM.jpeg?auto=webp&s=47e8987d3d53065768b4c796fa5af51c7a36d470', 'width': 111}, 'variants': {}}]}
Feedback for LYRN
7
If you downloaded and used LYRN over the weekend after I launched it on Friday, I would like some feedback. I haven't heard anything, good or bad, other than that it runs on Mac, Linux and PC with no issues. If you haven't had a chance to look at it and try it out, please do and get back to me here in this thread or in my DMs. I'm mainly asking because I'm about to do a round of bug fixes and feature updates, and I want to see what other people want added. Some personal thoughts and constructive feedback would be great too. Thank you for your time and effort to help bring open source software further along. https://github.com/bsides230/LYRN https://youtu.be/t3TozyYGNTg?si=amwuXg4EWkfJ_oBL
2025-09-17T14:13:35
https://www.reddit.com/gallery/1njeak5
PayBetter
reddit.com
1970-01-01T00:00:00
0
{}
1njeak5
false
null
t3_1njeak5
/r/LocalLLaMA/comments/1njeak5/feedback_for_lyrn/
false
false
https://b.thumbs.redditm…1Eoded9bcNAA.jpg
7
null
Hosting Medgemma 4b
2
Hello guys, I manage a medical student learning platform in France that uses some AI, and I was curious about MedGemma 4b. I saw that it is a vision model, so I thought I could use it to help medical students understand medical imaging and train. This is why I have some questions. First, are there any providers of API endpoints for this model? I did not find one, and it is pretty obvious why, but I wanted to ask to be sure. Second, I want to know if I can host this model for my students; say 100 students per day use it. I know it is a medium/small-size model, but what specs do I need to host it at an acceptable speed? Lastly, do you know a better or alternative model to MedGemma 4b for medical imaging/vision, either open source or even closed source with an API I can use? Thanks guys for your help and advice!
2025-09-17T14:04:06
https://www.reddit.com/r/LocalLLaMA/comments/1nje1vn/hosting_medgemma_4b/
aliihsan01100
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nje1vn
false
null
t3_1nje1vn
/r/LocalLLaMA/comments/1nje1vn/hosting_medgemma_4b/
false
false
self
2
null
Hey, how do I set up OAP-langchain? I want something I can use to query offline documentation and man pages and do no-limit deep research.
1
I want this to work fully offline. Thanks.
2025-09-17T13:20:46
https://www.reddit.com/r/LocalLLaMA/comments/1njczvo/hey_how_do_i_setup_oaplangchain_i_want_something/
Hot_Watercress5440
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1njczvo
false
null
t3_1njczvo
/r/LocalLLaMA/comments/1njczvo/hey_how_do_i_setup_oaplangchain_i_want_something/
false
false
self
1
null