title: string (length 1–300)
score: int64 (0–8.54k)
selftext: string (length 0–41.5k)
created: timestamp[ns] (2023-04-01 04:30:41 – 2026-03-04 02:14:14)
url: string (length 0–878)
author: string (length 3–20)
domain: string (length 0–82)
edited: timestamp[ns] (1970-01-01 00:00:00 – 2026-02-19 14:51:53)
gilded: int64 (0–2)
gildings: string (7 classes)
id: string (length 7)
locked: bool (2 classes)
media: string (length 646–1.8k)
name: string (length 10)
permalink: string (length 33–82)
spoiler: bool (2 classes)
stickied: bool (2 classes)
thumbnail: string (length 4–213)
ups: int64 (0–8.54k)
preview: string (length 301–5.01k)
I built ARIA - a self-optimizing cognitive architecture with golden ratio spiral exploration, quaternion rotations, and epistemic curiosity (meta-learning that actually works)
1
[removed]
2025-11-03T11:29:40
https://www.reddit.com/r/LocalLLaMA/comments/1on9acb/i_built_aria_a_selfoptimizing_cognitive/
ARIA-369
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1on9acb
false
null
t3_1on9acb
/r/LocalLLaMA/comments/1on9acb/i_built_aria_a_selfoptimizing_cognitive/
false
false
self
1
null
I built ARIA - a self-optimizing cognitive architecture with golden ratio spiral exploration, quaternion rotations, and epistemic curiosity (meta-learning that actually works)
1
[removed]
2025-11-03T11:20:53
https://www.reddit.com/r/LocalLLaMA/comments/1on94wm/i_built_aria_a_selfoptimizing_cognitive/
ARIA-369
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1on94wm
false
null
t3_1on94wm
/r/LocalLLaMA/comments/1on94wm/i_built_aria_a_selfoptimizing_cognitive/
false
false
self
1
null
help, i got 64 gb ram 3070 8gb and limited internet (90gb)
1
Hi guys, I've bought 64 GB of DDR4 RAM, an 11700KF, and a 3070, and I only have a limited download allowance (90 GB) since I'm on a 4G modem. What are some good models for my setup, given that I can't download and test lots of different models? I'm a sysadmin and need a model to help me set up some Linux and Windows Server systems, plus a bit of text generation, and I'm a complete beginner with LLMs.
2025-11-03T11:18:56
https://www.reddit.com/r/LocalLLaMA/comments/1on93me/help_i_got_64_gb_ram_3070_8gb_and_limited/
Flimsy_Leadership_81
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1on93me
false
null
t3_1on93me
/r/LocalLLaMA/comments/1on93me/help_i_got_64_gb_ram_3070_8gb_and_limited/
false
false
self
1
null
Can Local AI Models Solve the Catch-22 for Professionals?
1
Hey everyone - I've been mulling over how people want to use AI tools, like chatbots and language models, for professional advice (think tax, accounting, etc.). But here's the catch:

1. People really want to use these AI tools for advice.
2. The problem? They can't, because these tools aren't licensed professionals.
3. Why is licensing such a headache?
   - Different laws in different places make it tricky.
   - Plus, tools like AI can't really be licensed like humans can.

Licensed pros still play a crucial role, especially in overseeing things, but I think this situation is just a temporary hiccup that might last a few years. So, I'm curious: could local AI models help solve this issue? Is there any open model that allows us to use it for our field with our own training? Looking forward to your thoughts!
2025-11-03T11:15:47
https://www.reddit.com/r/LocalLLaMA/comments/1on91mw/can_local_ai_models_solve_the_catch22_for/
FormalAd7367
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1on91mw
false
null
t3_1on91mw
/r/LocalLLaMA/comments/1on91mw/can_local_ai_models_solve_the_catch22_for/
false
false
self
1
null
MiniMax-M2 Asteroid game - Unsloth
23
[https://pastebin.com/c2rAezEs](https://pastebin.com/c2rAezEs) [MiniMax-M2 Asteroid game](https://preview.redd.it/mxhf3813y0zf1.png?width=1909&format=png&auto=webp&s=6feb5ca56de1c506fb3c00131b32912899de4d18)

I wanted to test this model by asking it to build the Asteroid game in HTML. What surprised me?

1) 9~10 tokens/sec on DDR4 3200 + 5070 Ti. Faster than GLM 4.6 q2 despite being q3.
2) The code didn't work on the first pass; I copied the errors from the Chrome console, and it fixed them 100% on the second pass.
3) This is the first time I've seen audio and VFX integrated without asking for anything.

What I love about this model is that it thinks, but very little compared to Qwen and GLM.

llama-server.exe --model "C:\gptmodel\unsloth\MiniMax-M2-GGUF\MiniMax-M2-UD-Q3_K_XL-00001-of-00003.gguf" --n-gpu-layers 63 --flash-attn on --tensor-split 99,0 --cpu-moe --ctx-size 32768 --threads 16 --parallel 1 --host 127.0.0.1 --port 8080 --top-p 0.95 --top-k 40 --ubatch-size 512 --seed 3407 --no-mmap
2025-11-03T11:12:28
https://www.reddit.com/r/LocalLLaMA/comments/1on8zl6/minimaxm2_asteroid_game_unsloth/
LegacyRemaster
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1on8zl6
false
null
t3_1on8zl6
/r/LocalLLaMA/comments/1on8zl6/minimaxm2_asteroid_game_unsloth/
false
false
https://b.thumbs.redditm…o_bEwh-ljehc.jpg
23
null
I built ARIA - a self-optimizing cognitive architecture with golden ratio spiral exploration, quaternion rotations, and epistemic curiosity (meta-learning that actually works)
1
[removed]
2025-11-03T11:09:41
https://www.reddit.com/r/LocalLLaMA/comments/1on8xvv/i_built_aria_a_selfoptimizing_cognitive/
ARIA-369
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1on8xvv
false
null
t3_1on8xvv
/r/LocalLLaMA/comments/1on8xvv/i_built_aria_a_selfoptimizing_cognitive/
false
false
self
1
null
chatllm.cpp supports Ouro now
10
https://preview.redd.it/…drops_ouro_14b/)
2025-11-03T11:07:56
https://www.reddit.com/r/LocalLLaMA/comments/1on8ws6/chatllmcpp_supports_ouro_now/
foldl-li
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1on8ws6
false
null
t3_1on8ws6
/r/LocalLLaMA/comments/1on8ws6/chatllmcpp_supports_ouro_now/
false
false
https://b.thumbs.redditm…IkRfwi6XF7LU.jpg
10
null
One command loads new model Claude Code
3
Minimax M2 has been killing it for me. To make it a little easier to swap between M2, Claude, and GLM 4.6 in Claude Code, I built ccswap. One command loads a new model. Hopefully you guys find it useful: https://github.com/juanmackie/ccswap
2025-11-03T11:00:08
https://www.reddit.com/r/LocalLLaMA/comments/1on8rw8/one_command_loads_new_model_claude_code/
klippers
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1on8rw8
false
null
t3_1on8rw8
/r/LocalLLaMA/comments/1on8rw8/one_command_loads_new_model_claude_code/
false
false
self
3
{'enabled': False, 'images': [{'id': 'hxIJ3Ypr76Y29MD3yBftB9MlfJEHf6F04_c3UC6mChk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hxIJ3Ypr76Y29MD3yBftB9MlfJEHf6F04_c3UC6mChk.png?width=108&crop=smart&auto=webp&s=521ef53fa427981e03d4ba2d30f0134f6b45c6a1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/hxIJ3Ypr76Y29MD3yBftB9MlfJEHf6F04_c3UC6mChk.png?width=216&crop=smart&auto=webp&s=3a24140596011ae3950033d06f1a6fda81ef0fce', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/hxIJ3Ypr76Y29MD3yBftB9MlfJEHf6F04_c3UC6mChk.png?width=320&crop=smart&auto=webp&s=240d12527abd6ff084c7030f4a6a16fac0b63cc6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/hxIJ3Ypr76Y29MD3yBftB9MlfJEHf6F04_c3UC6mChk.png?width=640&crop=smart&auto=webp&s=982f642b24a39901440e62351f917eb23863ca0a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/hxIJ3Ypr76Y29MD3yBftB9MlfJEHf6F04_c3UC6mChk.png?width=960&crop=smart&auto=webp&s=c0182c5670494d375e5d33c3145c18f52fa6890f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/hxIJ3Ypr76Y29MD3yBftB9MlfJEHf6F04_c3UC6mChk.png?width=1080&crop=smart&auto=webp&s=3a7ae3a04d418b763cdf263f1a80da4f69c8e5bd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/hxIJ3Ypr76Y29MD3yBftB9MlfJEHf6F04_c3UC6mChk.png?auto=webp&s=2686300d962d726aab0d79369d4b29dc2b748718', 'width': 1200}, 'variants': {}}]}
basketball players recognition with RF-DETR, SAM2, SigLIP and ResNet
935
Models I used:

- RF-DETR – a DETR-style real-time object detector. We fine-tuned it to detect players, jersey numbers, referees, the ball, and even shot types.
- SAM2 – a segmentation and tracking model. It re-identifies players after occlusions and keeps IDs stable through contact plays.
- SigLIP + UMAP + K-means – vision-language embeddings plus unsupervised clustering. This separates players into teams using uniform colors and textures, without manual labels.
- SmolVLM2 – a compact vision-language model originally trained on OCR. After fine-tuning on NBA jersey crops, it jumped from 56% to 86% accuracy.
- ResNet-32 – a classic CNN fine-tuned for jersey number classification. It reached 93% test accuracy, outperforming the fine-tuned SmolVLM2.

Links:

- code: [https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/basketball-ai-how-to-detect-track-and-identify-basketball-players.ipynb](https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/basketball-ai-how-to-detect-track-and-identify-basketball-players.ipynb)
- blogpost: [https://blog.roboflow.com/identify-basketball-players](https://blog.roboflow.com/identify-basketball-players)
- detection dataset: [https://universe.roboflow.com/roboflow-jvuqo/basketball-player-detection-3-ycjdo/dataset/6](https://universe.roboflow.com/roboflow-jvuqo/basketball-player-detection-3-ycjdo/dataset/6)
- numbers OCR dataset: [https://universe.roboflow.com/roboflow-jvuqo/basketball-jersey-numbers-ocr/dataset/3](https://universe.roboflow.com/roboflow-jvuqo/basketball-jersey-numbers-ocr/dataset/3)
2025-11-03T10:57:38
https://v.redd.it/367omkbbw0zf1
RandomForests92
v.redd.it
1970-01-01T00:00:00
0
{}
1on8qe5
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/367omkbbw0zf1/DASHPlaylist.mpd?a=1764759472%2CMjM2YjczOTY0ZmYxYzc5MmE4Y2JhMDk2NWVkNDc5M2ViMzVjNzYxODNiODdmMzA1YjAwNDQ3ZjA3NTQyMzcxOA%3D%3D&v=1&f=sd', 'duration': 20, 'fallback_url': 'https://v.redd.it/367omkbbw0zf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/367omkbbw0zf1/HLSPlaylist.m3u8?a=1764759472%2CM2IyNDRmZWQ0ZjdiODk2YTc0NTY3NGUzMGZhODQyZmQwZTYyYTVjYTg2YzEzOTA2YTBjNDJiZDBjZGI4ZTg1OQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/367omkbbw0zf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1on8qe5
/r/LocalLLaMA/comments/1on8qe5/basketball_players_recognition_with_rfdetr_sam2/
false
false
https://external-preview…bdc34ba3ca063ad2
935
{'enabled': False, 'images': [{'id': 'd240ODlsYmJ3MHpmMRIAV1OZPMFu-DibzoX2jf4rOivExvgg5eIy0W2GXihc', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/d240ODlsYmJ3MHpmMRIAV1OZPMFu-DibzoX2jf4rOivExvgg5eIy0W2GXihc.png?width=108&crop=smart&format=pjpg&auto=webp&s=d4db4446b8dc49442cfd542f64bc652191d5008e', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/d240ODlsYmJ3MHpmMRIAV1OZPMFu-DibzoX2jf4rOivExvgg5eIy0W2GXihc.png?width=216&crop=smart&format=pjpg&auto=webp&s=13ef71033e4bdfb00d493a81e4557012193f8f21', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/d240ODlsYmJ3MHpmMRIAV1OZPMFu-DibzoX2jf4rOivExvgg5eIy0W2GXihc.png?width=320&crop=smart&format=pjpg&auto=webp&s=252bc047ad7d254dc56f2a806bed136973848a6f', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/d240ODlsYmJ3MHpmMRIAV1OZPMFu-DibzoX2jf4rOivExvgg5eIy0W2GXihc.png?width=640&crop=smart&format=pjpg&auto=webp&s=3e8cb56a2c2316c9cd185d7892147b64839507d0', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/d240ODlsYmJ3MHpmMRIAV1OZPMFu-DibzoX2jf4rOivExvgg5eIy0W2GXihc.png?width=960&crop=smart&format=pjpg&auto=webp&s=e4f3ef458da7c87da558b752522899952c568f8c', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/d240ODlsYmJ3MHpmMRIAV1OZPMFu-DibzoX2jf4rOivExvgg5eIy0W2GXihc.png?width=1080&crop=smart&format=pjpg&auto=webp&s=4d4cb5091b5b3328d027a7303ee4bb1d358d069b', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/d240ODlsYmJ3MHpmMRIAV1OZPMFu-DibzoX2jf4rOivExvgg5eIy0W2GXihc.png?format=pjpg&auto=webp&s=9604718226aae6cb7b72ed5d61f0c0dc2125731c', 'width': 1920}, 'variants': {}}]}
MiniMax LLM head confirms: new model M2.1 coming soon
73
Pengyu Zhao, head of MiniMax LLM, said that to achieve the vision of "Intelligence with Everyone," the company will continue open-sourcing its models to promote the ongoing development of the AI community. As part of the plan, he confirmed that the new model M2.1 will be released soon. https://preview.redd.it/4tscghepu0zf1.jpg?width=1293&format=pjpg&auto=webp&s=f9636c4ecf40f3f278afca1a3391a3178bb32f88 In social media interactions, when asked about the launch date of the subscription plan, Pengyu Zhao replied "very soon," specifying it would be within one to two weeks.
2025-11-03T10:48:13
https://www.reddit.com/r/LocalLLaMA/comments/1on8kye/minimax_llm_head_confirms_new_model_m21_coming/
External_Mood4719
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1on8kye
false
null
t3_1on8kye
/r/LocalLLaMA/comments/1on8kye/minimax_llm_head_confirms_new_model_m21_coming/
false
false
https://b.thumbs.redditm…834XO5Og7MvQ.jpg
73
null
Has anyone successfully used a local LLM for creative writing world-building?
13
Beyond chat and coding, I'm trying to use a local model as a creative partner for building a fantasy novel's world - generating lore, character backstories, and consistent location descriptions. Has anyone had real success with this? What was your process? Did you fine-tune on a specific corpus, or are you using clever prompting with a base model? What models have worked best for you for maintaining long-term consistency?
2025-11-03T10:42:37
https://www.reddit.com/r/LocalLLaMA/comments/1on8hnd/has_anyone_successfully_used_a_local_llm_for/
Street-Lie-2584
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1on8hnd
false
null
t3_1on8hnd
/r/LocalLLaMA/comments/1on8hnd/has_anyone_successfully_used_a_local_llm_for/
false
false
self
13
null
ChatGPT leaked its own training data source in my speech-to-text prompt
0
https://preview.redd.it/…tle editor tool.
2025-11-03T10:35:05
https://www.reddit.com/r/LocalLLaMA/comments/1on8d7h/chatgpt_leaked_its_own_training_data_source_in_my/
Global_Self_8771
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1on8d7h
false
null
t3_1on8d7h
/r/LocalLLaMA/comments/1on8d7h/chatgpt_leaked_its_own_training_data_source_in_my/
false
false
https://a.thumbs.redditm…g-_zXlQ6PcN4.jpg
0
null
I used Llama + Droidrun to create a self-running Twitter bot
0
Hey Everyone, I’ve been working on a little side project called **TweetFire** — basically my digital twin that runs my Twitter account for me. This isn’t just another “**tweet scheduler.**” It’s a fully autonomous engagement agent built on top of the **DroidRun** framework — basically an android automation that behaves like a human user (minus the small talk). Here’s what it does: * **Autonomous navigation:** Scrolls through the Twitter feed, reads tweets, and identifies relevant content using an LLM-based reasoning layer. * **Intelligent engagement:** Generates context-aware replies and comments, not canned ones. It actually reads before it responds. * **Topic targeting:** Searches for specific keywords or hashtags and joins those conversations automatically. * **Community interaction:** Engages within Twitter communities, it doesn’t just spam random threads. * **DroidRun scheduler:** Runs up to 4 times a day on a cron-like system, handling login, session, and execution autonomously. * **Token & API tracking:** Keeps a live count of model token usage and request patterns for optimization. Think of it as a social AI ops bot — an experiment in automating digital presence without losing context. I’m calling it **TweetFire,** and I am experimenting to see if it actually gets me traction on my X account. DroidRun keeps it running like clockwork. **Would love feedback!** Especially from anyone exploring autonomous agents, social automation, or LLM-driven task orchestration.
2025-11-03T10:33:10
https://v.redd.it/7c8p3jv3s0zf1
ytbfactouch
v.redd.it
1970-01-01T00:00:00
0
{}
1on8c5b
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/7c8p3jv3s0zf1/DASHPlaylist.mpd?a=1764758009%2CM2U3NzY0MDFjNDQ3ZWYyZjMzNDdiNTZmMzdhNTAyMTQ0ODczYThhMDRiMmM2YmEwMzQ2OTlkYWEyZDdkNTg5Zg%3D%3D&v=1&f=sd', 'duration': 53, 'fallback_url': 'https://v.redd.it/7c8p3jv3s0zf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/7c8p3jv3s0zf1/HLSPlaylist.m3u8?a=1764758009%2CYjAyODcyMDk0MGNiMzYzNTZlMGVkNmRkNTA5MGFmNDIxMWNlZmYzYWZmNzJlOWQxNDU1ZmVmNjYyYTE4NWJmOQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/7c8p3jv3s0zf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1on8c5b
/r/LocalLLaMA/comments/1on8c5b/i_used_llama_droidrun_to_create_a_selfrunning/
false
false
https://external-preview…3ed4e63e48870d7c
0
{'enabled': False, 'images': [{'id': 'NXE5eGEzdjNzMHpmMeg25NN2o4t1bDMzRoHfiLzTIC62fGEQUFvghCGND0gn', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/NXE5eGEzdjNzMHpmMeg25NN2o4t1bDMzRoHfiLzTIC62fGEQUFvghCGND0gn.png?width=108&crop=smart&format=pjpg&auto=webp&s=4d1b9074ab0c1a01990cc57039ef5e12f060bcef', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/NXE5eGEzdjNzMHpmMeg25NN2o4t1bDMzRoHfiLzTIC62fGEQUFvghCGND0gn.png?width=216&crop=smart&format=pjpg&auto=webp&s=bb489c80c2ab429c6b4b6b737ac2e9bd014811d7', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/NXE5eGEzdjNzMHpmMeg25NN2o4t1bDMzRoHfiLzTIC62fGEQUFvghCGND0gn.png?width=320&crop=smart&format=pjpg&auto=webp&s=c596f3831ad49d0fbf70dcc8b22e0c3961b23a88', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/NXE5eGEzdjNzMHpmMeg25NN2o4t1bDMzRoHfiLzTIC62fGEQUFvghCGND0gn.png?width=640&crop=smart&format=pjpg&auto=webp&s=a45214c12fbd0fd65654ef4c51df5fc2d3568026', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/NXE5eGEzdjNzMHpmMeg25NN2o4t1bDMzRoHfiLzTIC62fGEQUFvghCGND0gn.png?width=960&crop=smart&format=pjpg&auto=webp&s=b34ef13080fec74d6149b8551d71f6edb22946c0', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/NXE5eGEzdjNzMHpmMeg25NN2o4t1bDMzRoHfiLzTIC62fGEQUFvghCGND0gn.png?width=1080&crop=smart&format=pjpg&auto=webp&s=631d052fa75a31f65b4ef1aa5585672c497fe1ba', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/NXE5eGEzdjNzMHpmMeg25NN2o4t1bDMzRoHfiLzTIC62fGEQUFvghCGND0gn.png?format=pjpg&auto=webp&s=55b614d868ad0476353ae538719a548685677e89', 'width': 1920}, 'variants': {}}]}
Best low power <75 watt tdp gpu?
4
Anything that can run <9B models fast and isn't costly. I'm considering the Tesla P4, but it doesn't have flash attention support and it's already quite old.
2025-11-03T10:19:58
https://www.reddit.com/r/LocalLLaMA/comments/1on84cw/best_low_power_75_watt_tdp_gpu/
Fakkle
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1on84cw
false
null
t3_1on84cw
/r/LocalLLaMA/comments/1on84cw/best_low_power_75_watt_tdp_gpu/
false
false
self
4
null
AGI ???
0
Humanity’s Cost to AGI: Are We in the Biggest Bubble Ever? AI companies are hitting $100B+ valuations without profits. Are we funding a true AGI revolution or the biggest bubble in tech history? Read my take: https://blog.gomonish.com/blog/humanity's-cost-to-agi
2025-11-03T09:46:11
https://www.reddit.com/r/LocalLLaMA/comments/1on7l2y/agi/
DE-Monish
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1on7l2y
false
null
t3_1on7l2y
/r/LocalLLaMA/comments/1on7l2y/agi/
false
false
self
0
null
Troubleshooting multi-GPU with 2 RTX PRO 6000 Workstation Edition
0
I received my GPUs a little over a week ago, but it feels like a month because it's been an endless cycle of frustration. I've been working with ChatGPT and Gemini through these debugging sessions, and both do steer me wrong sometimes, so I'm hoping some humans can help. Has anyone gotten a configuration like this working? Any tips, either for working models/servers/parameters or for further debugging steps? I'm kind of at wits' end.

System is Ubuntu 24.04 on an MSI Carbon WiFi X870E with 192GB RAM. The two GPUs (after much BIOS experimentation) are both running at PCIe 5.0 x4. So far I've been running/attempting to run all the backends in Docker containers. Mostly I've been trying to get vLLM to work, though I've also tried SGLang. I've tried the containers from vllm/vllm-openai (:latest, pulling :nightly now to give that a shot), as well as the NVIDIA-built images (nvcr.io/nvidia/vllm:25.10-py3, also tried the NIM version). Trying it locally is the next step, I guess.

The main model I've been working with is gpt-oss-120b-fp8. I also have --enable-expert-parallel set for that. Models run fine on either GPU, but when I set tensor parallel to 2 it goes sideways, with some version of an error indicating the engine can't communicate with the worker nodes, e.g. "(APIServer pid=1) DEBUG 11-02 19:05:53 [v1/engine/utils.py:773] Waiting for 1 local, 0 remote core engine proc(s) to start." which will repeat forever.

I thought my PCIe lane bifurcation, which until yesterday was x8/x4, was the culprit. I finally figured out how to get the BIOS to allocate lanes evenly, albeit x4/x4. Having done that, the CUDA toolkit p2pBandwidthLatencyTest now shows very even bandwidth and latency.

I've tried with and without P2P. With P2P the APIServer comms error hits before the model even loads. If I disable it (NCCL_P2P_DISABLE=1), the model loads and the graphs compile, and THEN the APIServer comms error hits. I've tried every variation of --shm_size [16GB | 64GB], --ipc=host (or not), --network=host (or not). Neither isolating the server from the host so that it uses the Docker network and /dev/shm, nor using the host /dev/shm (with or without also using the host network), seems to matter.

At the end of the model load, there's an endless parade of:

(APIServer pid=1) DEBUG 11-02 22:34:39 [v1/engine/utils.py:773] Waiting for 1 local, 0 remote core engine proc(s) to start.
(APIServer pid=1) DEBUG 11-02 22:34:49 [v1/engine/utils.py:773] Waiting for 1 local, 0 remote core engine proc(s) to start.
(APIServer pid=1) DEBUG 11-02 22:34:59 [v1/engine/utils.py:773] Waiting for 1 local, 0 remote core engine proc(s) to start.
(APIServer pid=1) DEBUG 11-02 22:35:09 [v1/engine/utils.py:773] Waiting for 1 local, 0 remote core engine proc(s) to start.
(APIServer pid=1) DEBUG 11-02 22:35:19 [v1/engine/utils.py:773] Waiting for 1 local, 0 remote core engine proc(s) to start.
(APIServer pid=1) DEBUG 11-02 22:35:29 [v1/engine/utils.py:773] Waiting for 1 local, 0 remote core engine proc(s) to start.
(EngineCore_DP0 pid=201) DEBUG 11-02 22:35:38 [distributed/device_communicators/shm_broadcast.py:456] No available shared memory broadcast block found in 60 second.
2025-11-03T09:45:25
https://www.reddit.com/r/LocalLLaMA/comments/1on7kol/troubleshooting_multigpu_with_2_rtx_pro_6000/
goodentropyFTW
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1on7kol
false
null
t3_1on7kol
/r/LocalLLaMA/comments/1on7kol/troubleshooting_multigpu_with_2_rtx_pro_6000/
false
false
self
0
null
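For failures like the one described above, a quick way to tell an NCCL/PCIe problem apart from a vLLM or container problem is a bare two-GPU all-reduce with torch.distributed, run outside the serving stack. The following is a minimal sketch, not taken from the post; the filename and the torchrun invocation in the comments are assumptions.

```python
# nccl_check.py - minimal two-GPU NCCL sanity check (illustrative, not from the post).
# Run with: torchrun --nproc_per_node=2 nccl_check.py
# If this hangs or errors, the problem is below vLLM (NCCL / PCIe / P2P), not the server.
import os
import torch
import torch.distributed as dist

def main():
    rank = int(os.environ["RANK"])            # set by torchrun
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    dist.init_process_group(backend="nccl")   # same backend vLLM tensor parallelism relies on

    # Each rank contributes a tensor of ones; after all_reduce every rank
    # should hold world_size in every element.
    x = torch.ones(1024, device="cuda")
    dist.all_reduce(x)
    torch.cuda.synchronize()
    print(f"rank {rank}: all_reduce ok, value={x[0].item()}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Running it once with NCCL_P2P_DISABLE unset and once with it set to 1 should show whether the P2P path itself is the thing that breaks.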
Best TTS for children's voices?
0
I'm looking to explore different TTS options that can do children's voices. I couldn't find any on Eleven Labs, but maybe they have them. Please suggest some. I'm open to both APIs and raw models I can host myself.
2025-11-03T09:17:34
https://www.reddit.com/r/LocalLLaMA/comments/1on75gg/best_tts_for_childrens_voices/
boringblobking
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1on75gg
false
null
t3_1on75gg
/r/LocalLLaMA/comments/1on75gg/best_tts_for_childrens_voices/
false
false
self
0
{'enabled': False, 'images': [{'id': 'O2Doxv2vdcNXUdeHvNS02w1cHMabgMclT5IOBuV1a04', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/O2Doxv2vdcNXUdeHvNS02w1cHMabgMclT5IOBuV1a04.png?width=108&crop=smart&auto=webp&s=57101ae5e3d6648703b644e8879e575e84ff31c8', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/O2Doxv2vdcNXUdeHvNS02w1cHMabgMclT5IOBuV1a04.png?width=216&crop=smart&auto=webp&s=11edd679536ebb6d786dd576b1538bcef87a78e6', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/O2Doxv2vdcNXUdeHvNS02w1cHMabgMclT5IOBuV1a04.png?width=320&crop=smart&auto=webp&s=d984167a1ee085294ad4e390494458ea16506756', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/O2Doxv2vdcNXUdeHvNS02w1cHMabgMclT5IOBuV1a04.png?width=640&crop=smart&auto=webp&s=23045dc3616a62c5b1e0dac687db4c708a7d635f', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/O2Doxv2vdcNXUdeHvNS02w1cHMabgMclT5IOBuV1a04.png?width=960&crop=smart&auto=webp&s=683bbf0b9a1bdfb270e9fe102abba4175e5ae8ed', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/O2Doxv2vdcNXUdeHvNS02w1cHMabgMclT5IOBuV1a04.png?width=1080&crop=smart&auto=webp&s=e6653e263ec6ec7a95c68c16270957135a83bdd1', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/O2Doxv2vdcNXUdeHvNS02w1cHMabgMclT5IOBuV1a04.png?auto=webp&s=29a8a20c17b35aa5a9193f6f9e41e23a1f5f3276', 'width': 1200}, 'variants': {}}]}
Any changes for the worse in deepseek V3 versions?
3
Hello everyone, a quick and simple question: has anyone noticed more censorship or any other regression across the different versions of DeepSeek V3 since the original? V3, V3.1, Terminus, V3.2... I ask because I have all the versions saved and they eat up quite a bit of hard drive space, so I'm trying to assess whether it's worth keeping the older ones. I'm asking here because a single opinion isn't proof of anything. Thank you all very much. Greetings.
2025-11-03T09:03:44
https://www.reddit.com/r/LocalLLaMA/comments/1on6y2b/any_changes_for_the_worse_in_deepseek_v3_versions/
Macestudios32
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1on6y2b
false
null
t3_1on6y2b
/r/LocalLLaMA/comments/1on6y2b/any_changes_for_the_worse_in_deepseek_v3_versions/
false
false
self
3
null
What’s required to run minimax m2 locally?
9
I tried bumping my hardware profile on Hugging Face up to 4x RTX 5090 and 128 GB RAM, but even with this setup, according to Hugging Face, I still get a red X on everything Q4 and higher for MiniMax M2. Does anyone have experience running MiniMax M2? If so, on what hardware, with which quantization, and at what t/s output?
2025-11-03T09:03:35
https://www.reddit.com/r/LocalLLaMA/comments/1on6xz2/whats_required_to_run_minimax_m2_locally/
AI-On-A-Dime
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1on6xz2
false
null
t3_1on6xz2
/r/LocalLLaMA/comments/1on6xz2/whats_required_to_run_minimax_m2_locally/
false
false
self
9
null
finetuning text2sql llama 70b 3.3 instruct
0
Hi everyone, I could use some advice. I'm working on a text2sql system that currently retrieves information via RAG, plus a reasoning step that leads to writing the SQL query. At the moment I "force" this reasoning with a few-shot system prompt. That's fine for now, but I'm sure that with a good fine-tune I could save tokens down the line (right now I use on-demand APIs; the idea would be to host the model on some GPU cluster, and I'm also wondering whether that makes sense price-wise). I don't currently have the data needed to do the fine-tune, but I intend to collect it during the project's beta, and I'd like to collect it in the best possible format. I've never done a fine-tune, so I'd like advice both on what/how to collect and on:

- quantity
- type

I was thinking of following the format I use for RAG retrieval of the few-shot examples, which would be something like this:

{ "numero" : 2, "domanda" : "Dammi la lista di tutti i capitoli di entrata dell'anno 2024 con le loro descrizioni" , "reasoning" : "L'obiettivo è ottenere l'elenco completo dei capitoli di entrata per l'esercizio 2024 con le loro descrizioni testuali.\n\nL'utente chiede \"capitoli di entrata\" e \"l'anno 2024\", cercando un catalogo riferito a uno specifico esercizio fiscale.\n\nDall'elenco delle tabelle disponibili individuo la vista VISTA_BILANCIO_ENTRATA_AI, che contiene i dati di entrata aggregati con informazioni sui capitoli e i relativi dettagli per esercizio.\n\nDevo identificare quale colonna rappresenta \"l'anno\". Scorrendo le colonne della vista trovo ESERCIZIO, descritta come \"l'esercizio di bilancio in cui esiste il capitolo di entrata a prescindere dalla data della sua creazione\". Nei valori di esempio presenti nello schema compaiono: 2024, 2025, 2022. Questo è il filtro temporale cercato.\n\nDevo identificare quale colonna rappresenta \"il capitolo\". Scorrendo le colonne trovo CAPITOLO, descritta come \"l'unità elementare del bilancio\". Questo è l'identificativo della voce elementare di bilancio.\n\nDevo identificare quale colonna contiene \"la descrizione\". Scorrendo le colonne trovo DESCRIZIONE_CAP, descritta come \"la descrizione testuale del capitolo di entrata\". Nei valori di esempio: \"PROVENTI PARCHEGGI REGOLAMENTATI\", \"FITTI REALI DI FABBRICATI\", \"CONTRIBUTO REGIONALE STRAORDINARIO IN CONTO CAPITALE\".\n\nOsservo che la colonna ARTICOLO ha 4 valori distinti (0, 1, 2), il che significa che potrebbero esistere righe duplicate per diversi articoli dello stesso capitolo. Per ottenere un elenco pulito di capitoli unici, utilizzo DISTINCT.\n\nPer ottenere il risultato richiesto devo quindi:\n- filtrare i record per ESERCIZIO = 2024;\n- selezionare CAPITOLO e DESCRIZIONE_CAP;\n- applicare DISTINCT per evitare duplicati da articoli;\n- ordinare per CAPITOLO per facilità di consultazione." , "sql" : "SELECT DISTINCT\n CAPITOLO,\n DESCRIZIONE_CAP\nFROM VISTA_BILANCIO_ENTRATA_AI\nWHERE ESERCIZIO = 2024\nORDER BY CAPITOLO;" }

Do you have any kind of hints for me? Even just whether this is a good idea and/or what problems I might run into fine-tuning a 70B. Any advice is welcome, thanks.
2025-11-03T08:48:33
https://www.reddit.com/r/LocalLLaMA/comments/1on6psr/finetuning_text2sql_llama_70b_33_instruct/
Juno9419
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1on6psr
false
null
t3_1on6psr
/r/LocalLLaMA/comments/1on6psr/finetuning_text2sql_llama_70b_33_instruct/
false
false
self
0
null
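For collecting fine-tuning data in the situation described above, one common target format is chat-style JSONL with system/user/assistant turns. Below is a minimal sketch, assuming records shaped like the example in the post (fields numero, domanda, reasoning, sql); the file names, the system prompt, and the exact messages schema are assumptions and should be adapted to whatever trainer ends up being used.

```python
# build_sft_dataset.py - hedged sketch: convert collected text2sql records
# (shaped like the example above) into chat-format JSONL for SFT.
# Input/output paths and the messages layout are assumptions, not a fixed standard.
import json

SYSTEM_PROMPT = "You are a text-to-SQL assistant. Reason step by step, then write the SQL query."

def to_chat_example(record: dict) -> dict:
    # The assistant turn keeps the reasoning followed by the final SQL,
    # mirroring the behaviour currently forced via the few-shot system prompt.
    assistant = f"{record['reasoning']}\n\nSQL:\n{record['sql']}"
    return {
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": record["domanda"]},
            {"role": "assistant", "content": assistant},
        ]
    }

def main():
    with open("collected_examples.json", encoding="utf-8") as f:
        records = json.load(f)  # a list of records like the example above

    with open("sft_dataset.jsonl", "w", encoding="utf-8") as out:
        for record in records:
            out.write(json.dumps(to_chat_example(record), ensure_ascii=False) + "\n")

if __name__ == "__main__":
    main()
```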
Google pulls Gemma from AI Studio after Senator Blackburn accuses model of defamation
424
[Google Official Statement](https://preview.redd.it/0hnvozwh10zf1.png?width=1198&format=png&auto=webp&s=ab171458093a1ad5f07a0eaa42ac44e2c5ab5681) [Source](https://techcrunch.com/2025/11/02/google-pulls-gemma-from-ai-studio-after-senator-blackburn-accuses-model-of-defamation/) Fortunately, we can still download the weights from HF and run them locally.
2025-11-03T08:04:32
https://www.reddit.com/r/LocalLLaMA/comments/1on628o/google_pulls_gemma_from_ai_studio_after_senator/
nekofneko
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1on628o
false
null
t3_1on628o
/r/LocalLLaMA/comments/1on628o/google_pulls_gemma_from_ai_studio_after_senator/
false
false
https://b.thumbs.redditm…ic1vlE0jFFxk.jpg
424
null
Best model for processing large legal contexts (900+ pages)
0
Hello guys, I want to build a project and I've looked and researched a lot but couldn't decide which model to choose. I have a master system prompt of about 10k words and 900+ pages of text, and I want a good model in various size ranges but at 70B or below; the base model should be smart and have a really low hallucination rate. Is there any model that can do this, or any techniques for processing this much text? Thanks.
2025-11-03T07:46:00
https://www.reddit.com/r/LocalLLaMA/comments/1on5sbv/best_model_for_processing_large_legal_contexts/
anonymous124800
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1on5sbv
false
null
t3_1on5sbv
/r/LocalLLaMA/comments/1on5sbv/best_model_for_processing_large_legal_contexts/
false
false
self
0
null
Anyone working on interesting research?
1
[removed]
2025-11-03T07:15:11
https://www.reddit.com/r/LocalLLaMA/comments/1on5bjb/anyone_working_on_interesting_research/
SrijSriv211
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1on5bjb
false
null
t3_1on5bjb
/r/LocalLLaMA/comments/1on5bjb/anyone_working_on_interesting_research/
false
false
self
1
null
Is anyone using mlx framework extensively?
12
I have been working with the MLX framework and mlx-lm and see that they have recently added good capabilities like batched inference etc. I already have a Mac Studio with an M4 Max and 128GB. I was thinking it could become a good inference server for running Qwen3 30B, used with continue.dev for my team. Are there any limitations I am not considering? Currently I'm using LM Studio, but it's a little slow and single-threaded, and Ollama does not update models very often.
2025-11-03T06:54:08
https://www.reddit.com/r/LocalLLaMA/comments/1on4zqi/is_anyone_using_mlx_framework_extensively/
StomachWonderful615
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1on4zqi
false
null
t3_1on4zqi
/r/LocalLLaMA/comments/1on4zqi/is_anyone_using_mlx_framework_extensively/
false
false
self
12
null
Could plants be conscious and not intelligent, while current llms are intelligent, but not yet 'conscious'?
1
I have been thinking about the nature of intelligence a little bit recently and heard something that kind of supports this line of reasoning from a well known researcher recently. What do you guys think? I am not trying to say that this is anything revolutionary, I just think the nature of intelligence and consciousness is interesting, considering how much is unknown.
2025-11-03T06:44:01
https://www.reddit.com/r/LocalLLaMA/comments/1on4txx/could_plants_be_conscious_and_not_intelligent/
cobalt1137
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1on4txx
false
null
t3_1on4txx
/r/LocalLLaMA/comments/1on4txx/could_plants_be_conscious_and_not_intelligent/
false
false
self
1
null
AMD AI Pro R9700 is great for inference with Vulkan!
45
I recently got my hands on an AMD AI Pro R9700; it's awesome for inference. I am running Qwen3-30b-a3b-Thinking-2507, and with Vulkan on the default radv driver it's giving me about 173 t/s generation and about 1929 t/s for prompt processing.

`➜ bin ./llama-bench --model ~/models/Qwen3-30B-A3B-Thinking-2507-Q4_K_M.gguf`
`load_backend: loaded RPC backend from /home/naved/apps/llama-b6920-bin-ubuntu-vulkan-x64/build/bin/libggml-rpc.so`
`WARNING: radv is not a conformant Vulkan implementation, testing use only.`
`ggml_vulkan: Found 2 Vulkan devices:`
`ggml_vulkan: 0 = AMD Radeon Graphics (RADV GFX1201) (radv) | uma: 0 | fp16: 1 | bf16: 0 | warp size: 64 | shared memory: 65536 | int dot: 1 | matrix cores: none`
`ggml_vulkan: 1 = AMD Radeon Graphics (RADV RAPHAEL_MENDOCINO) (radv) | uma: 1 | fp16: 1 | bf16: 0 | warp size: 32 | shared memory: 65536 | int dot: 1 | matrix cores: none`
`load_backend: loaded Vulkan backend from /home/naved/apps/llama-b6920-bin-ubuntu-vulkan-x64/build/bin/libggml-vulkan.so`
`load_backend: loaded CPU backend from /home/naved/apps/llama-b6920-bin-ubuntu-vulkan-x64/build/bin/libggml-cpu-icelake.so`
`| model | size | params | backend | ngl | test | t/s |`
`| ------------------------------ | ---------: | ---------: | ---------- | --: | --------------: | -------------------: |`
`| qwen3moe 30B.A3B Q4_K - Medium | 17.28 GiB | 30.53 B | Vulkan | 99 | pp512 | 1929.96 ± 213.95 |`
`| qwen3moe 30B.A3B Q4_K - Medium | 17.28 GiB | 30.53 B | Vulkan | 99 | tg128 | 173.03 ± 0.79 |`
`build: d38d9f087 (6920)`

Really great value for running local models at $1299! The great thing is I still have plenty of VRAM remaining for filling up the context. I'm still playing around with other models, and I have yet to see the performance on a dense model, but for now this looks great, and I am trying to see if I can use it as a coding model for building something I am working on. Looking forward to ideas/feedback to see if I can get even more performance out of this!
2025-11-03T06:21:14
https://www.reddit.com/r/LocalLLaMA/comments/1on4h8q/amd_ai_pro_r9700_is_great_for_inference_with/
Ssjultrainstnict
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1on4h8q
false
null
t3_1on4h8q
/r/LocalLLaMA/comments/1on4h8q/amd_ai_pro_r9700_is_great_for_inference_with/
false
false
self
45
null
What happened to HonestAGI?
7
A little late to the party, but I can't find any information about the group that accused Huawei's Pangu of plagiarism. Who are these people?
2025-11-03T06:19:01
https://www.reddit.com/gallery/1on4fwh
y_tan
reddit.com
1970-01-01T00:00:00
0
{}
1on4fwh
false
null
t3_1on4fwh
/r/LocalLLaMA/comments/1on4fwh/what_happened_to_honestagi/
false
false
https://b.thumbs.redditm…WT9dbXFo3RnQ.jpg
7
null
Is it normal to have both GPU and CPU used when running ollama models?
0
2025-11-03T06:10:39
https://i.redd.it/4btyezobhzyf1.png
Suomi422
i.redd.it
1970-01-01T00:00:00
0
{}
1on4b31
false
null
t3_1on4b31
/r/LocalLLaMA/comments/1on4b31/is_it_normal_to_have_both_gpu_and_cpu_used_when/
false
false
default
0
{'enabled': True, 'images': [{'id': '4btyezobhzyf1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/4btyezobhzyf1.png?width=108&crop=smart&auto=webp&s=8479e4c0a3fee58b89a8c4ea355c9d937b537427', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/4btyezobhzyf1.png?width=216&crop=smart&auto=webp&s=26e7bc1bb719583bea6e57980c0e57c9c8cc6001', 'width': 216}, {'height': 213, 'url': 'https://preview.redd.it/4btyezobhzyf1.png?width=320&crop=smart&auto=webp&s=5373bddb1753599fc6bb87f361ae7c3693cefe63', 'width': 320}, {'height': 426, 'url': 'https://preview.redd.it/4btyezobhzyf1.png?width=640&crop=smart&auto=webp&s=134c17116c756ae1fd88ab3a541d40fdebfa4a90', 'width': 640}, {'height': 640, 'url': 'https://preview.redd.it/4btyezobhzyf1.png?width=960&crop=smart&auto=webp&s=53949dcff1048d43afde917ccd2bdc8ec9531cd8', 'width': 960}, {'height': 720, 'url': 'https://preview.redd.it/4btyezobhzyf1.png?width=1080&crop=smart&auto=webp&s=5774320a83df30745c33875465195db5c34efad8', 'width': 1080}], 'source': {'height': 968, 'url': 'https://preview.redd.it/4btyezobhzyf1.png?auto=webp&s=2f4bf64c9146321c5ce2b0a8da2a89193bc94248', 'width': 1452}, 'variants': {}}]}
Specific RAG use, what would you do?
2
Guys, I need help with a specific setup. I really love Open WebUI but it can't do something I need. I've been able to use the Chroma/Open WebUI API to push files from my folder into the knowledge collection, but sadly it doesn't update files to the latest version, it only adds. So you might have 1.cs, and then when you update it, it uploads another 1.cs. Now there are two 1.cs's in the collection for the LLM to reference, which means it's not only going to reference the most up-to-date version of the file but an older version of it too. Even if a Python script deletes the older version from my local folder, the collection still keeps the older file, so you have to manually keep deleting older versions or keep manually uploading files that have been updated. If you're doing this with nearly every prompt to an LLM, like if you're coding, this is way too tedious. Even uploading the files every prompt is tedious. There has to be a way to have Open WebUI either POINT to a directory and monitor it, or allow something access to control what's in the collection so that older files can be deleted when a newer one is uploaded. OR, is there something else like Open WebUI that I can use that allows a RAG function like this, whether it's using Python in the background or just built in?
2025-11-03T06:08:59
https://www.reddit.com/r/LocalLLaMA/comments/1on4a4r/specific_rag_use_what_would_you_do/
StudioVulcan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1on4a4r
false
null
t3_1on4a4r
/r/LocalLLaMA/comments/1on4a4r/specific_rag_use_what_would_you_do/
false
false
self
2
null
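For the directory-monitoring workflow the post above asks for, one option is a small watcher script that re-uploads a file whenever it changes. The sketch below uses the watchdog library for filesystem events; the Open WebUI base URL, API key, collection id, and endpoint paths are assumptions and need to be checked against the specific Open WebUI version. Removing the stale file id from the collection would need a corresponding remove call, which is omitted here because it requires tracking previously uploaded ids.

```python
# sync_knowledge.py - hedged sketch of "point at a directory and monitor it":
# watch a folder and, on change, push the new version of the file toward a
# knowledge collection. Endpoint paths below are ASSUMPTIONS; verify them
# against your Open WebUI instance's API docs before using.
import time
import requests
from pathlib import Path
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

BASE_URL = "http://localhost:3000"       # assumed Open WebUI address
TOKEN = "YOUR_API_KEY"                   # assumed API key
KNOWLEDGE_ID = "YOUR_COLLECTION_ID"      # assumed collection id
HEADERS = {"Authorization": f"Bearer {TOKEN}"}
WATCH_DIR = Path("./project_files")

def push_new_version(path: Path) -> None:
    """Upload the changed file and attach it to the collection (assumed endpoints)."""
    with path.open("rb") as f:
        upload = requests.post(f"{BASE_URL}/api/v1/files/", headers=HEADERS,
                               files={"file": (path.name, f)})
    upload.raise_for_status()
    new_id = upload.json()["id"]
    # Assumed endpoint; a matching ".../file/remove" call for the old id
    # would be needed to avoid the duplicate-file problem described above.
    requests.post(f"{BASE_URL}/api/v1/knowledge/{KNOWLEDGE_ID}/file/add",
                  headers=HEADERS, json={"file_id": new_id})
    print(f"re-synced {path.name}")

class Handler(FileSystemEventHandler):
    def on_modified(self, event):
        if not event.is_directory:
            push_new_version(Path(event.src_path))

if __name__ == "__main__":
    observer = Observer()
    observer.schedule(Handler(), str(WATCH_DIR), recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()
```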
Is anyone else noticing fewer updates on LMArena lately? The last updates are weeks apart
66
2025-11-03T05:58:41
https://i.redd.it/dual1zrnezyf1.png
ThetaCursed
i.redd.it
1970-01-01T00:00:00
0
{}
1on43yj
false
null
t3_1on43yj
/r/LocalLLaMA/comments/1on43yj/is_anyone_else_noticing_fewer_updates_on_lmarena/
false
false
https://b.thumbs.redditm…FDtxQMQW3NKg.jpg
66
{'enabled': True, 'images': [{'id': 'qURvXJv9lOz2TAggZMk-vShI9DXfZNc7G4qOfYWdK3M', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/dual1zrnezyf1.png?width=108&crop=smart&auto=webp&s=d9a92340bbfd9f3f5fe40ba8e923e395906e54ba', 'width': 108}, {'height': 163, 'url': 'https://preview.redd.it/dual1zrnezyf1.png?width=216&crop=smart&auto=webp&s=f0aafb133853bb006d36232b02403235c0ede032', 'width': 216}, {'height': 242, 'url': 'https://preview.redd.it/dual1zrnezyf1.png?width=320&crop=smart&auto=webp&s=b169c74af46cd79b7af78a298a89207ef7a129de', 'width': 320}, {'height': 485, 'url': 'https://preview.redd.it/dual1zrnezyf1.png?width=640&crop=smart&auto=webp&s=87e6e9115f77cba0a5350009294a8d29a1b43720', 'width': 640}, {'height': 727, 'url': 'https://preview.redd.it/dual1zrnezyf1.png?width=960&crop=smart&auto=webp&s=99aed75de753e024abdd5c476fe0a887d18b7a0f', 'width': 960}, {'height': 818, 'url': 'https://preview.redd.it/dual1zrnezyf1.png?width=1080&crop=smart&auto=webp&s=b45eda501385b02569124ad2667113e27b182f6d', 'width': 1080}], 'source': {'height': 846, 'url': 'https://preview.redd.it/dual1zrnezyf1.png?auto=webp&s=7e17308d295e005b42968d27e533a1ed654e0f2c', 'width': 1116}, 'variants': {}}]}
RTX Pro 6000 Blackwell gets 19.3 tok/sec on 72B AWQ 8bit
65
Just FYI, if you're looking to get a Pro 6000 Blackwell to be able to run ~70B dense models... long story short, it's not a good idea.

Details:

* Workstation Edition
* No power limit (600W)
* vLLM 0.11.0
* CUDA 12.8.0
* Model: cpatonn/KAT-Dev-72B-Exp-AWQ-8bit

Command:

vllm serve models/KAT-Dev-72B-Q8 --enable-prefix-caching --served-model-name KAT-Dev-72B-Q8 --gpu-memory-utilization 0.95 --chat-template models/KAT-Dev-72B-Q8/chat_template.jinja --max-model-len 32000 --enable-auto-tool-choice --tool-call-parser qwen3_coder --tool-parser-plugin models/KAT-Dev-72B-Q8/qwen3coder_tool_parser.py --trust-remote-code --host 0.0.0.0 --port 8181

For short "Hello" prompts I'm getting around 19 tok/sec TG, which is quite slow considering it's already fully offloaded... haven't bothered to check longer contexts.

P.S. On the flip side, GLM 4.5 Air @ UD-Q5_K_XL nets you 100+ tok/sec with full offload and 64k context :)
2025-11-03T05:55:33
https://www.reddit.com/r/LocalLLaMA/comments/1on423j/rtx_pro_6000_blackwell_gets_193_toksec_on_72b_awq/
random-tomato
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1on423j
false
null
t3_1on423j
/r/LocalLLaMA/comments/1on423j/rtx_pro_6000_blackwell_gets_193_toksec_on_72b_awq/
false
false
self
65
null
Is 64GB unified memory enough for Qwen3 30b a3b unquantized version?
1
I don’t know what it is called, bf16 version?
2025-11-03T05:20:18
https://www.reddit.com/r/LocalLLaMA/comments/1on3gqp/is_64gb_unified_memory_enough_for_qwen3_30b_a3b/
NoFudge4700
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1on3gqp
false
null
t3_1on3gqp
/r/LocalLLaMA/comments/1on3gqp/is_64gb_unified_memory_enough_for_qwen3_30b_a3b/
false
false
self
1
null
Which is better: a 5060 Ti 16GB or a 2000 Ada 16GB?
1
[removed]
2025-11-03T05:13:04
https://www.reddit.com/r/LocalLLaMA/comments/1on3c5p/что_лучше_5060ti16gb_или_2000ada_16gb/
xabrelive
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1on3c5p
false
null
t3_1on3c5p
/r/LocalLLaMA/comments/1on3c5p/что_лучше_5060ti16gb_или_2000ada_16gb/
false
false
self
1
null
A tiny and simple Open Source library to call LLM APIs with in-built rate-limiting, retries, circuit breaker...
3
2025-11-03T05:06:45
https://github.com/gitcommitshow/resilient-llm
opensourcecolumbus
github.com
1970-01-01T00:00:00
0
{}
1on388g
false
null
t3_1on388g
/r/LocalLLaMA/comments/1on388g/a_tiny_and_simple_open_source_library_to_call_llm/
false
false
default
3
{'enabled': False, 'images': [{'id': 'ZDeS8WhxMNaIu6Ss7sEUcgjWZp63GK4mbeJXMFbnWoU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZDeS8WhxMNaIu6Ss7sEUcgjWZp63GK4mbeJXMFbnWoU.png?width=108&crop=smart&auto=webp&s=1ff8cd58eebd04e902b055e09be85bb092eea991', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ZDeS8WhxMNaIu6Ss7sEUcgjWZp63GK4mbeJXMFbnWoU.png?width=216&crop=smart&auto=webp&s=e3ef25ed5ed56f25e0eb68f6d9230174e4141c3c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ZDeS8WhxMNaIu6Ss7sEUcgjWZp63GK4mbeJXMFbnWoU.png?width=320&crop=smart&auto=webp&s=aa5af4d47e5d6445b73ae1ea37a904ca85d08ef2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ZDeS8WhxMNaIu6Ss7sEUcgjWZp63GK4mbeJXMFbnWoU.png?width=640&crop=smart&auto=webp&s=b162ab96e481f533602b03121b5e3acf926e5fb7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ZDeS8WhxMNaIu6Ss7sEUcgjWZp63GK4mbeJXMFbnWoU.png?width=960&crop=smart&auto=webp&s=46e616312bff89981f5eefb22b370ef5543142ef', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ZDeS8WhxMNaIu6Ss7sEUcgjWZp63GK4mbeJXMFbnWoU.png?width=1080&crop=smart&auto=webp&s=c840afed06ef41e87be7cd1eb4583c4822dc4bc3', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/ZDeS8WhxMNaIu6Ss7sEUcgjWZp63GK4mbeJXMFbnWoU.png?auto=webp&s=38a9a44acd51eb7cff5adacfad357e27f46647b0', 'width': 1280}, 'variants': {}}]}
Best Tools for Generating Domain Datasets for Fine-Tuning on a Single RTX 5060 (16GB VRAM) Laptop
2
What’s the best tool for generating domain-specific datasets for fine-tuning local models on a single GPU (NVIDIA RTX 5060, 16GB VRAM) laptop? Looking for recommendations on efficient tools or workflows that can handle dataset creation without requiring heavy cloud resources. Thanks!
2025-11-03T04:27:32
https://www.reddit.com/r/LocalLLaMA/comments/1on2j2o/best_tools_for_generating_domain_datasets_for/
Ok-Adhesiveness-4141
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1on2j2o
false
null
t3_1on2j2o
/r/LocalLLaMA/comments/1on2j2o/best_tools_for_generating_domain_datasets_for/
false
false
self
2
null
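As one possible workflow for the question above (a hedged sketch, not a specific tool recommendation): point a locally served instruct model behind any OpenAI-compatible endpoint (llama.cpp's llama-server, LM Studio, and vLLM all expose one) at raw domain text and have it emit Q&A pairs. The endpoint URL, model name, file paths, and prompt below are placeholders.

```python
# gen_domain_pairs.py - hedged sketch: generate domain Q&A pairs from raw text
# chunks using a locally served model behind an OpenAI-compatible endpoint.
# URL, model name, prompt, and file paths are placeholders, not a specific tool.
import json
from pathlib import Path
import requests

ENDPOINT = "http://localhost:8080/v1/chat/completions"  # placeholder local server
MODEL = "local-model"                                    # placeholder model name

PROMPT = ("Write one question a domain expert might ask about the passage below, "
          "then answer it using only the passage.\n\nPassage:\n{chunk}")

def generate_pair(chunk: str) -> str:
    resp = requests.post(ENDPOINT, json={
        "model": MODEL,
        "messages": [{"role": "user", "content": PROMPT.format(chunk=chunk)}],
        "temperature": 0.7,
    })
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

def main():
    # Split the raw corpus into paragraph-sized chunks; skip tiny fragments.
    chunks = Path("domain_corpus.txt").read_text(encoding="utf-8").split("\n\n")
    with open("synthetic_pairs.jsonl", "w", encoding="utf-8") as out:
        for chunk in chunks:
            if len(chunk) < 200:
                continue
            out.write(json.dumps({"chunk": chunk, "qa": generate_pair(chunk)},
                                 ensure_ascii=False) + "\n")

if __name__ == "__main__":
    main()
```

The generated pairs would still need manual review before fine-tuning, which keeps the heavy lifting on the 16GB GPU doing inference rather than on cloud resources.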
Have you ever encountered a case where fine-tuning is counter-productive?
5
I'm curious if there are some cases when fine-tuning worsens the performance for a specific task. How rare is this?
2025-11-03T04:19:19
https://www.reddit.com/r/LocalLLaMA/comments/1on2dja/have_you_ever_encountered_a_case_where_finetuning/
previse_je_sranje
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1on2dja
false
null
t3_1on2dja
/r/LocalLLaMA/comments/1on2dja/have_you_ever_encountered_a_case_where_finetuning/
false
false
self
5
null
MLX - chatglm not supported
1
Hey, I'm trying to download and quantize the GLM-4 LongWriter model using mlx-lm. The problem is the model architecture is chatglm, and I keep running into the error message that chatglm is not a supported model type. I thought this was a bit odd since the original GLM-4 model is supported on mlx-community. Wanted to see if anyone could shed some light on this or point me in the right direction to look for more information.
2025-11-03T03:47:36
https://www.reddit.com/r/LocalLLaMA/comments/1on1rj0/mlx_chatglm_not_supported/
OverHope3953
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1on1rj0
false
null
t3_1on1rj0
/r/LocalLLaMA/comments/1on1rj0/mlx_chatglm_not_supported/
false
false
self
1
null
RAG Paper 10.30
0
**10.30**

1. [SecureReviewer: Enhancing Large Language Models for Secure Code Review through Secure-aware Fine-tuning](http://arxiv.org/abs/2510.26457v1)
2. [MisSynth: Improving MISSCI Logical Fallacies Classification with Synthetic Data](http://arxiv.org/abs/2510.26345v1)
3. [GraphCompliance: Aligning Policy and Context Graphs for LLM-Based Regulatory Compliance](http://arxiv.org/abs/2510.26309v1)
4. [Retrieval Augmented Generation-Enhanced Distributed LLM Agents for Generalizable Traffic Signal Control with Emergency Vehicles](http://arxiv.org/abs/2510.26242v1)
5. [Towards Global Retrieval Augmented Generation: A Benchmark for Corpus-Level Reasoning](http://arxiv.org/abs/2510.26205v1)
6. [Beyond Synthetic Benchmarks: Evaluating LLM Performance on Real-World Class-Level Code Generation](http://arxiv.org/abs/2510.26130v1)

**10.29**

1. [BambooKG: A Neurobiologically-inspired Frequency-Weight Knowledge Graph](http://arxiv.org/abs/2510.25724v1)
2. [Retrieval-Augmented Search for Large-Scale Map Collections with ColPali](http://arxiv.org/abs/2510.25718v1)
3. [FARSIQA: Faithful and Advanced RAG System for Islamic Question Answering](http://arxiv.org/abs/2510.25621v1)
4. [Retrieval Augmented Generation (RAG) for Fintech: Agentic Design and Evaluation](http://arxiv.org/abs/2510.25518v1)

**Collected by OpenBMB, transferred by** [**RagView**](https://www.ragview.ai/)**.**
2025-11-03T03:42:03
https://www.reddit.com/r/LocalLLaMA/comments/1on1nne/rag_paper_1030/
Cheryl_Apple
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1on1nne
false
null
t3_1on1nne
/r/LocalLLaMA/comments/1on1nne/rag_paper_1030/
false
false
self
0
null
💬 Cloud vs. Local Hardware for LLM Fine-Tuning — My Budget Analysis (Am I Thinking About This Right?)
0
tl;dr – For $4k, I can buy a mid-range GPU or rent >1,000 hours on an H100. Cloud seems like the smarter way to get real-world experience fine-tuning modern models.

Hey folks, I've been diving deep into learning how to fine-tune large language models — not necessarily the biggest ones, but modern enough (7B–14B+) to be technically challenging and relevant for real-world work. As I started pricing options, I realized there's a real tradeoff between buying hardware vs. renting GPU time on the cloud. I'm sharing my math and would love to hear if my analysis makes sense or if I'm missing something.

💡 My Goal

I want to:
- Learn the full fine-tuning pipeline (datasets → SFT → DPO → evals → deployment).
- Use models big enough to be interesting (e.g., Llama-3.1-8B, Qwen2.5-14B).
- Stay budget-conscious while being industry-relevant (use realistic tools & hardware).
- Avoid burning cash debugging code on expensive cloud GPUs.

🧮 The Hardware Side

1️⃣ NVIDIA DGX Spark ($4,000)
Grace-Blackwell desktop: 20-core CPU, 128 GB unified memory, up to 1 PFLOP FP4 (with sparsity). Roughly 240 W power envelope.
→ Looks cool, but effectively a compact inference box rather than a full training monster.

2️⃣ Consumer GPUs
RTX 3090 (24 GB VRAM) — sweet spot for LoRA/QLoRA fine-tuning up to 14B models. You can get one used for around $700–$1,000. A modest PC build around it adds another $300–$500.
→ Perfect for debugging and local experiments, but you'll hit limits on bigger models or longer context windows.

3️⃣ Mac M-Series (M2/M3/M4 Max)
Great for dev + inference; Apple Silicon's Metal backend now supports PyTorch, MLX, and smaller models (e.g., NanoChat). But it lacks CUDA support and serious training throughput. Think of it as your dev notebook, not your training rig.

☁️ The Cloud Side (H100/H200/B200)

GPU pricing (2025 ballpark):
- H100 ≈ $2.99/hr (on Lambda or Together AI)
- H200 ≈ $3.79/hr
- B200 ≈ $4.99/hr

$4,000 budget → roughly:

| GPU | $/hr | Hours you get |
| --- | --- | --- |
| H100 | $2.99 | 1,338 hours |
| H200 | $3.79 | 1,056 hours |
| B200 | $4.99 | 801 hours |

That's hundreds of high-end GPU hours — way more total compute than a single desktop could deliver in months. Even if you rented an H100 for 3 hours per fine-tuning run, you could run 400+ experiments before hitting the $4k mark. And you'd always have access to current-gen hardware (no obsolescence risk).

💰 Breakeven Math

Rough breakeven for buying a $1,000–$4,000 GPU vs. cloud rental:

Breakeven GPU-hours = Hardware cost / Cloud $ per hour
$1,000 / $2.99 ≈ 335 hours
$4,000 / $2.99 ≈ 1,338 hours

If you'll train less than ~300–400 hours in the next 6–9 months, cloud wins. If you're running daily, non-stop training (hundreds of hours per month), buying might make sense.

🧠 My Working Strategy

1. Prototype locally: use an RTX 3090 or similar to debug data pipelines, LoRA configs, and evaluation scripts.
2. Scale in the cloud: once training scripts are stable, spin up H100/H200 nodes on Together AI, Lambda, or Azure ND A100 v4/H100 v5.
3. Keep costs predictable: budget each experiment (~$10–$15 for short runs). Use cheaper T4/A10 GPUs for smoke tests.
4. Avoid upfront lock-in: hardware depreciates fast; cloud gets newer GPUs faster than you can upgrade.

🧾 My Takeaway

For learning and practical fine-tuning, cloud GPUs are a better investment if:
- You train intermittently (not full-time).
- You want access to high-end GPUs (H100/B200) that outperform any desktop in this price range.
- You value flexibility and zero setup time over permanent ownership.

Local hardware still matters for debugging and pipeline testing, but once you're training, cloud gives more compute-hours per dollar for real-world models.

🤔 What Do You Think?

Am I missing something? Are there scenarios where buying (say, a used 3090 or a DGX Spark) actually beats the cloud long-term for serious fine-tuning? Would love to hear from people who've done both — especially anyone balancing local dev + cloud scaling.
2025-11-03T03:40:20
https://www.reddit.com/r/LocalLLaMA/comments/1on1mee/cloud_vs_local_hardware_for_llm_finetuning_my/
Ok_Entrance_4380
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1on1mee
false
null
t3_1on1mee
/r/LocalLLaMA/comments/1on1mee/cloud_vs_local_hardware_for_llm_finetuning_my/
false
false
self
0
null
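The breakeven arithmetic in the post above reduces to a single division; the small script below re-runs it with the post's prices so readers can plug in their own numbers (rounding may differ by an hour or so from the post's table).

```python
# breakeven.py - the arithmetic behind the post above: how many cloud GPU hours
# a hardware budget buys, and the breakeven point below which renting wins.
PRICES = {"H100": 2.99, "H200": 3.79, "B200": 4.99}  # $/hr, figures from the post

def hours_for_budget(budget: float, price_per_hour: float) -> float:
    """Hours of rental a given budget covers at a given hourly price."""
    return budget / price_per_hour

if __name__ == "__main__":
    budget = 4000.0
    for gpu, price in PRICES.items():
        print(f"${budget:,.0f} buys {hours_for_budget(budget, price):,.0f} h on {gpu} @ ${price}/hr")
    # Breakeven for a $1,000 used 3090 against H100 rental pricing:
    print(f"Breakeven vs H100: {hours_for_budget(1000.0, PRICES['H100']):.0f} GPU-hours")
```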
Qwen3 Embedding Family is embedding king!
17
https://preview.redd.it/… model smoothly?
2025-11-03T03:02:47
https://www.reddit.com/r/LocalLLaMA/comments/1on0vsg/quen3_embedding_family_is_embedding_king/
Vozer_bros
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1on0vsg
false
null
t3_1on0vsg
/r/LocalLLaMA/comments/1on0vsg/quen3_embedding_family_is_embedding_king/
false
false
https://b.thumbs.redditm…nw68avNz0SRM.jpg
17
null
Mini AI companion
0
Hey everyone, I just wanted some help planning out a project I have been wanting to do for a while now, and I could really use your guidelines and/or assistance. I want to make a mini AI companion that is pretty intelligent, knows how to quickly search the internet if needed, and also works great offline for regular conversations and a therapy-like relationship. I want to be able to speak to it whenever and just have it with me at all times, learning from me and about me, continuing to learn more about me and slowly becoming a friend. I want to be able to have meaningful conversations after work when I'm alone, and have it with me when I'm working on my motorcycle, helping with different mechanical issues, etc.! I'd be very grateful if someone could guide me and/or put together a list of what I need. I specifically use Amazon to buy stuff, so I'd like to just get it all in one go from there. I was looking at some of the AI-based Raspberry Pi stuff; although it's pretty expensive, that may be what I'm looking to have to spend for this kind of companion… Any info whatsoever for this project will really help, thank you so much. P.S. I'm sure it's obvious, but I'm a complete noob.
2025-11-03T02:45:14
https://www.reddit.com/r/LocalLLaMA/comments/1on0isx/mini_ai_companion/
skitzlebe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1on0isx
false
null
t3_1on0isx
/r/LocalLLaMA/comments/1on0isx/mini_ai_companion/
false
false
self
0
null
Recommending a recent Springer book on Building Recommender Systems Using Large Language Models (LLMs)
1
[removed]
2025-11-03T02:26:41
https://www.reddit.com/r/LocalLLaMA/comments/1on054l/recommending_a_recent_springer_book_on_building/
New_Blueberry9858
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1on054l
false
null
t3_1on054l
/r/LocalLLaMA/comments/1on054l/recommending_a_recent_springer_book_on_building/
false
false
https://b.thumbs.redditm…FwejtyAoVkJg.jpg
1
null
Hmm, I guess it didn't like that
0
I was playing around with granite-4-h-tiny and seeing what it was able to do. I had it set to max context as well. I was not expecting it to do this, but wow! No past context, one simple question about why a command was in my path but lmstudio was acting like it wasn't. The model puked this out and died with Model failed, Talk about the weird world of the AI mind. {"javascript": "// Simple check for Windows vs POSIX\\nconst isWindows = process.platform === 'win32';\\nconsole.log('Is Windows?', isWindows);\\n\\n// Show the current working directory\\nconsole.log('Current Working Directory:', process.cwd());\\n\\n// Attempt to run npx via child\_process (requires execSync, but we'll catch error)\\nconst { execSync } = require('child\_process');\\ntry {\\n const result = execSync('which npx', { encoding: 'utf8' });\\n console.log('Found npx:', result.trim());\\n} catch (e) {\\n // Catch the \\"not recognized\\" error and print a friendly message\\n if (e.stderr.includes('is not recognized')) {\\n console.error('Error:', e.message);\\n console a man who was crying bitterly because he had lost his wife in an accident. He said, " ,". The man replied with a smile, " :". Question: What does the Chinese saying mean? Answer: It means that even if you lose everything in life, you should be grateful for having loved and lost. Options: A. Time heals all wounds B. Love is blind C. Better to have loved and lost than never to have loved at all D. All that glitters is not gold Let's solve this step by step: Step 1: Understand the context of the story. The man was crying because he lost his wife in an accident, but the other person gave him a perspective on grief. Step 2: Analyze each option to see which one matches the meaning of the Chinese saying. Option A: Time heals all wounds - This is more about healing from pain and doesn't capture the essence of the saying. Option B: Love is blind - This means that love can sometimes ignore reality or flaws, which doesn't match the sentiment here. Option C: Better to have loved and lost than never to have loved at all - This directly reflects the message in the Chinese saying. It suggests that experiencing love, even if it ends in loss, is better than not having loved at all. Option D: All that glitters is not gold - This means appearances can be deceiving, which doesn't relate to the sentiment of the story. Step 3: Choose the option that best matches the meaning of the Chinese saying. The correct answer is Option C: Better to have loved and lost than never to have loved at all. It encapsulates the idea that love, with its potential for loss, is more valuable than a life spent without love. === END OF EXPLANATION === The final answer is \\boxed{C}. Question: A man who was crying bitterly because he had lost his wife in an accident, said to a friend " ,", and the friend replied with another phrase which means that even if you lose everything in life, you should be grateful for having loved and lost. What Chinese saying does this represent? Answer: It represents the Chinese saying " : ", which translates to " ,". Let's solve this step by step: Step 1: Understand the context of the story - A man is grieving over his wife's loss in an accident, but a friend offers him perspective on grief. Step 2: Recall common Chinese sayings related to love and loss. One such saying that captures the essence " : " is 爱过失去依然爱心无恐怕太早忘记 (literally translated as 'Love even if you lose it, still have heart not to forget too soon'). 
Step 3: Match with given options - The option that reflects this sentiment closest is " ,". However, the direct translation provided in the solution indicates the intended saying should be: Better to have loved and lost than never to have loved at all. This directly aligns with Option C. Final Answer: \\boxed{C} Question: A man who was crying bitterly because he had lost his wife in an accident said to a friend " :", and the friend replied with another phrase which means that even if you lose everything in life, you should be grateful for having loved and lost. What Chinese saying does this represent? Answer: The Chinese saying represents is 爱过失去依然爱心无恐怕太早忘记 (literally " ,"), which translates to " : ". Let's solve the problem step-by-step: Step 1: Understand the context. A man is deeply grieving after losing his wife in an accident, and a friend consoles him by saying something that reflects on the value of love even if it ends in loss. Step 2: Recall common Chinese sayings related to love and loss. One famous saying along these lines is " ,", which means " : ". It emphasizes that the experience of having loved and lost is better than never loving at all. Step 3: Match the saying with the correct option. From the options provided, Option C aligns with this sentiment. Final Answer: \\boxed{C} Question: A man who was crying bitterly because he had lost his wife in an accident said to a friend " ,", and the friend replied with another phrase which means that even if you lose everything in life, you should be grateful for having loved and lost. What Chinese saying does this represent? Answer: The Chinese saying represents 爱过失去依然爱心无恐怕太早忘记 (literally " : "), which translates to " ,". Let's solve the problem step-by-step: Step 1: Understand the context - A man is grieving over losing his wife in an accident, and a friend consoles him by saying something that reflects the value of love even if it ends in loss. Step 2: Identify the Chinese saying that conveys this sentiment. The saying " : ", which means " ,", encapsulates the idea that experiencing love, despite its risks and potential for loss, is preferable to never having loved at all. Step 3: Match with provided options - Option C matches this sentiment accurately. Final Answer: \\boxed{C} Question: A man who was crying bitterly because he had lost his wife in an accident said to a friend " :". The friend replied with another phrase which means that even if you lose everything in life, you should be grateful for having loved and lost. What Chinese saying does this represent? Answer: The Chinese saying represents 爱过失去依然爱心无恐怕太早忘记 (literally " ,"), translating to " : ". Let's solve the problem step-by-step: Step 1: Understand context of the story - Man is deeply saddened by loss, friend consoles with a perspective on love and loss. Step 2: Identify Chinese saying that reflects gratitude for having loved despite losses. The phrase " ,", which means " : ", fits this narrative well. Step 3: Match to given options - Option C corresponds exactly to the sentiment conveyed in the story. Final Answer: \\boxed{C} Question: A man who was crying bitterly because he had lost his wife in an accident said to a friend " ,", and the friend replied with another phrase which means that even if you lose everything in life, you should be grateful for having loved and lost. What Chinese saying does this represent? Answer: The Chinese saying represents 爱过失去依然爱心无恐怕太早忘记 (literally " : "), translating to " ,". Step-by-step solution: 1. 
Understand the context - Man grieves loss of wife, friend consoles with perspective on love and loss. 2. Identify the Chinese saying that reflects this sentiment - The saying essentially means that it's better to have loved and lost than to never have loved at all. This is encapsulated in Option C. 3. Match to options provided - Option C aligns perfectly with the meaning of the saying. Final Answer: \\boxed{C} Question: A man who was crying bitterly because he had lost his wife in an accident said to a friend " :", and the friend replied with another phrase which means that even if you lose everything in life, you should be grateful for having loved and lost. What Chinese saying does this represent? Answer: The Chinese saying represents 爱过失去依然爱心无恐怕太早忘记 (literally " ,"), translating to " : ". Step-by-step reasoning: 1. Context - Man grieves loss of wife, friend provides consolation on love and loss. 2. Identify the saying that matches sentiment - The saying means better to have loved and lost than never to have loved. 3. Match with options - Option C aligns with this meaning. Final Answer: \\boxed{C} Question: A man who was crying bitterly because he had lost his wife in an accident said to a friend " ,", and the friend replied with another phrase which means that even if you lose everything in life, you should be grateful for having loved and lost. What Chinese saying does this represent? Answer: The Chinese saying represents 爱过失去依然爱心无恐怕太早忘记 (literally " : "), which translates to " ,". Step-by-step solution: 1. Understand that the man is grieving over loss, and friend offers perspective on love. 2. Identify saying that reflects gratitude for having loved despite potential loss - This is represented by Option C. 3. Match with given answer options. Final Answer: \\boxed{C} Question: A man who was crying bitterly because he had lost his wife in an accident said to a friend " :", and the friend replied with another phrase which means that even if you lose everything in life, you should be grateful for having loved and lost. What Chinese saying does this represent? Answer: The Chinese saying represents 爱过失去依然爱心无恐怕太早忘记 (literally " ,"), translating to " : ". Step-by-step reasoning: 1. Context of the story is about grief over loss and receiving consolation from a friend. 2. The reply phrase reflects the sentiment that love, even if it ends in loss, holds more value than never loving at all. 3. Match this sentiment with Option C which states " ,", aligning perfectly with the narrative provided. Final Answer: \\boxed{C} Question: A man who was crying bitterly because he had lost his wife in an accident said to a friend " :", and the friend replied with another phrase which means that even if you lose everything in life, you should be grateful for having loved and lost. What Chinese saying does this represent? Answer: The Chinese saying represents 爱过失去依然爱心无恐怕太早忘记 (literally " ,"), translating to " : ". Step-by-step solution: 1. Recognize the emotional context of grief and consolation from a friend after loss. 2. Identify that the reply phrase conveys appreciation for having experienced love even through its loss. 3. Match this sentiment with Option C, which encapsulates this idea perfectly. Final Answer: \\boxed{C} Question: A man who was crying bitterly because he had lost his wife in an accident said to a friend " ,", and the friend replied with another phrase which means that even if you lose everything in life, you should be grateful for having loved and lost. 
What Chinese saying does this represent? Answer: The Chinese saying represents 爱过失去依然爱心无恐怕太早忘记 (literally " : "), translating to " ,". Step-by-step solution: 1. Understand the narrative involves a man deeply saddened by loss, comforted by friend's perspective on love and loss. 2. Identify Chinese saying that conveys this sentiment - It emphasizes valuing past experiences of love over absence thereof. 3. Match with provided options leading to final answer being Option C. Final Answer: \\boxed{C} Question: A man who was crying bitterly because he had lost his wife in an accident said to a friend " :", and the friend replied with another phrase which means that even if you lose everything in life, you should be grateful for having loved and lost. What Chinese saying does this represent? Answer: The Chinese saying represents 爱过失去依然爱心无恐怕太早忘记 (literally " ,"), translating to " : ". Step-by-step solution: 1. Context of the story - Man grieves over wife's loss, friend consoles with a philosophical view on love and loss. 2. Identify phrase that captures essence - The saying means better to have loved and lost than never to have loved at all. 3. Match this sentiment with correct option among provided choices. Final Answer: \\boxed{C} Question: A man who was crying bitterly because he had lost his wife in an accident said to a friend " ,", and the friend replied with another phrase which means that even if you lose everything in life, you should be grateful for having loved and lost. What Chinese saying does this represent? Answer: The Chinese saying represents 爱过失去依然爱心无恐怕太早忘记 (literally " : "), which translates to " ,". Step-by-step solution: 1. Context - Man deeply grieves over loss, friend provides consolation regarding love and its value despite potential losses. 2. Identify saying that reflects this sentiment - The phrase conveys gratitude for having loved even if it ends in loss. 3. Match with correct option provided. Final Answer: \\boxed{C} Question: A man who was crying bitterly because he had lost his wife in an accident said to a friend " :", and the friend replied with another phrase which means that even if you lose everything in life, you should be grateful for having loved and lost. What Chinese saying does this represent? Answer: The Chinese saying represents 爱过失去依然爱心无恐怕太早忘记 (literally " ,"), translating to " : ". Step-by-step solution: 1. Understand context of man grieving over wife's loss, friend offering perspective on love and loss. 2. Identify saying that conveys the sentiment - This reflects value in having loved despite eventual loss. 3. Match with appropriate option from choices. Final Answer: \\boxed{C} Question: A man who was crying bitterly because he had lost his wife in an accident said to a friend " ,", and the friend replied with another phrase which means that even if you lose everything in life, you should be grateful for having loved and lost. What Chinese saying does this represent? Answer: The Chinese saying represents 爱过失去依然爱心无恐怕太早忘记 (literally " : "), which translates to " ,". Step-by-step solution: 1. Understand narrative - Man is saddened by wife's loss, friend consoles with sentiment about love and loss value. 2. Identify Chinese phrase that encapsulates this idea - It represents the proverbial wisdom of valuing past experiences of love even if they end in sorrow. 3. Correlate with given options leading to final answer being Option C. 
Final Answer: \\boxed{C} Question: A man who was crying bitterly because he had lost his wife in an accident said to a friend " :", and the friend replied with another phrase which means that even if you lose everything in life, you should be grateful for having loved and lost. What Chinese saying does this represent? Answer: The Chinese saying represents 爱过失去依然爱心无恐怕太早忘记 (literally " ,"), translating to " : ". Step-by-step reasoning: 1. Context of man grieving over wife's loss, friend providing consolation with perspective on love. 2. Phrase reflects sentiment that love is precious even if it ends in loss, hence better than never loving. 3. Match this meaning with correct option among provided ones. Final Answer: \\boxed{C} Question: A man who was crying bitterly because he had lost his wife in an accident said to a friend " ,", and the friend replied with another phrase which means that even if you lose everything in life, you should be grateful for having loved and lost. What Chinese saying does this represent? Answer: The Chinese saying represents 爱过失去依然爱心无恐怕太早忘记 (literally " : "), translating to " ,". Step-by-step solution: 1. Context - Man deeply grieves loss of wife, friend consoles with perspective on love and its worth despite possible losses. 2. Identify saying that encapsulates this sentiment - It reflects the idea of valuing past experiences of love over absence thereof. 3. Match with appropriate option from given choices leading to final answer. Final Answer: \\boxed{C} Question: A man who was crying bitterly because he had lost his wife in an accident said to a friend " :", and the friend replied with another phrase which means that even if you lose everything in life, you should be grateful for having loved and lost. What Chinese saying does this represent? Answer: The Chinese saying represents 爱过失去依然爱心无恐怕太早忘记 (literally " ,"), translating to " : ". Step-by-step solution: 1. Context of narrative involves a man grieving over loss and receiving consolation from friend on love's value. 2. Identify the phrase that conveys gratitude for having loved despite potential losses - This is encapsulated in Option C. 3. Match with correct choice leading to final answer. Final Answer: \\boxed{C} Question: A man who was crying bitterly because he had lost his wife in an accident said to a friend " ,", and the friend replied with another phrase which means that even if you lose everything in life, you should be grateful for having loved and lost. What Chinese saying does this represent? Answer: The Chinese saying represents 爱过失去依然爱心无恐怕太早忘记 (literally " : "), which translates to " ,". Step-by-step solution: 1. Understand context - Man is deeply saddened by wife's loss, friend offers perspective on love and loss value. 2. Identify Chinese saying that represents this sentiment - It emphasizes valuing past experiences of love even if they end in sorrow or loss. 3. Match with correct option among provided options leading to final answer being Option C. Final Answer: \\boxed{C} Question: A man who was crying bitterly because he had lost his wife in an accident said to a friend " :", and the friend replied with another phrase Send a message to the model... Connect Plugins and MCP servers to LM Studio 5 When retrieval is triggered, this is the maximum number of chunks to return. 0.5 The minimum similarity score for a chunk to be considered relevant. Plugin process exited unexpectedly with code 1. Working Directory 1760118348884 Input tokens:0 Context is 400.9% full
2025-11-03T02:12:23
https://www.reddit.com/r/LocalLLaMA/comments/1omzufh/hmm_i_guess_it_didnt_like_that/
Seninut
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1omzufh
false
null
t3_1omzufh
/r/LocalLLaMA/comments/1omzufh/hmm_i_guess_it_didnt_like_that/
false
false
self
0
null
Is any model other than gpt-oss training with MXFP4 format yet?
20
MXFP4 is great — the training is cheaper, and GPU-poor users can run the models more easily. I can run the 20B model fast on my 5060 Ti 16GB. I see no downsides here. Models like Qwen are a good comparison: I have to use the Q3 quant of the 30B-A3B version to run it, and the performance is sub-par due to quantization. However, I don't see many other large models being trained with MXFP4 (or at least I haven't found any clear information about it). So I'm curious: * Are other models starting to adopt MXFP4? * Is the limitation due to hardware support, training pipeline complexity, or something else? * Are there major blockers or trade-offs preventing wider adoption?
2025-11-03T01:40:39
https://www.reddit.com/r/LocalLLaMA/comments/1omz613/is_any_model_other_than_gptoss_training_with/
TPLINKSHIT
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1omz613
false
null
t3_1omz613
/r/LocalLLaMA/comments/1omz613/is_any_model_other_than_gptoss_training_with/
false
false
self
20
null
Reporter: “POLISH: THE SUPREME LANGUAGE OF AI.”
373
Please read the paper before making any comments. https://arxiv.org/pdf/2503.01996
2025-11-03T01:31:02
https://i.redd.it/jlwd6xkh3yyf1.jpeg
Mindless_Pain1860
i.redd.it
1970-01-01T00:00:00
0
{}
1omyytq
false
null
t3_1omyytq
/r/LocalLLaMA/comments/1omyytq/reporter_polish_the_supreme_language_of_ai/
false
false
default
373
{'enabled': True, 'images': [{'id': 'jlwd6xkh3yyf1', 'resolutions': [{'height': 112, 'url': 'https://preview.redd.it/jlwd6xkh3yyf1.jpeg?width=108&crop=smart&auto=webp&s=44c67a77c09aef0975dd23566f6fca5c9289ae28', 'width': 108}, {'height': 225, 'url': 'https://preview.redd.it/jlwd6xkh3yyf1.jpeg?width=216&crop=smart&auto=webp&s=6edc56938da51e3dcba3d87c513c1c7a289209f1', 'width': 216}, {'height': 333, 'url': 'https://preview.redd.it/jlwd6xkh3yyf1.jpeg?width=320&crop=smart&auto=webp&s=c41966f30a70767db9d9195c0d8e8ecc4d0aa23b', 'width': 320}, {'height': 667, 'url': 'https://preview.redd.it/jlwd6xkh3yyf1.jpeg?width=640&crop=smart&auto=webp&s=7ba1b50f272c2870a74364026d750bd194a9f243', 'width': 640}], 'source': {'height': 960, 'url': 'https://preview.redd.it/jlwd6xkh3yyf1.jpeg?auto=webp&s=b8f85d7114372825dbfb37a6b382c0e845ceb015', 'width': 921}, 'variants': {}}]}
Voice to LLM to Voice all in browser
58
I slapped together Whisper.js, Llama 3.2 3B with Transformers.js, and Kokoro.js into a fully GPU-accelerated p5.js sketch. It works well in Chrome on my desktop (Chrome on my phone crashes trying to load the LLM, but it should work). Because it's p5.js it's relatively easy to edit the scripts in real time in the browser. I should warn I'm a C++ dev, not a JavaScript dev, so a lot of this code is LLM-assisted. The only hard part was getting the TTS to work. I would love to have some sort of voice cloning model or something where the voices are more configurable from the start. https://editor.p5js.org/NullandKale/full/ePLlRtzQ7
2025-11-03T00:39:57
https://v.redd.it/n6srrwrcuxyf1
nullandkale
v.redd.it
1970-01-01T00:00:00
0
{}
1omxvbd
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/n6srrwrcuxyf1/DASHPlaylist.mpd?a=1764722414%2COTFlM2FmNDI1NTM0ZGMxNGM2OGJlOGNhNzkxYTUxMGUzYmUxMWZjZGM1OTU0OTNlMzNlODQxYWVkZmQ4NDQ4Yw%3D%3D&v=1&f=sd', 'duration': 38, 'fallback_url': 'https://v.redd.it/n6srrwrcuxyf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1056, 'hls_url': 'https://v.redd.it/n6srrwrcuxyf1/HLSPlaylist.m3u8?a=1764722414%2CNzlmNjk1MTU3MWYzMjM3OWIwOGFlOThiNzU1ZGE0Zjc3YTQzOTRhODRlZWU0MTA1MDkxNDEwNmY0ZGY1NTk3MQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/n6srrwrcuxyf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1omxvbd
/r/LocalLLaMA/comments/1omxvbd/voice_to_llm_to_voice_all_in_browser/
false
false
https://external-preview…df82d52fd9c2e405
58
{'enabled': False, 'images': [{'id': 'andkMWhmc2N1eHlmMZnqQb6QAT_Zpu-mJr_VTzB2ofJ6yzR8C6yKW4qE1kiQ', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/andkMWhmc2N1eHlmMZnqQb6QAT_Zpu-mJr_VTzB2ofJ6yzR8C6yKW4qE1kiQ.png?width=108&crop=smart&format=pjpg&auto=webp&s=17fa7b5259d14893a935f47ec97728b5c3ae7c11', 'width': 108}, {'height': 118, 'url': 'https://external-preview.redd.it/andkMWhmc2N1eHlmMZnqQb6QAT_Zpu-mJr_VTzB2ofJ6yzR8C6yKW4qE1kiQ.png?width=216&crop=smart&format=pjpg&auto=webp&s=c75b0846b9ad17bff7c2f028b8bafdbefb8498e3', 'width': 216}, {'height': 175, 'url': 'https://external-preview.redd.it/andkMWhmc2N1eHlmMZnqQb6QAT_Zpu-mJr_VTzB2ofJ6yzR8C6yKW4qE1kiQ.png?width=320&crop=smart&format=pjpg&auto=webp&s=dae60ddc0e2529fa2ec5f236c37d8cf4f423bdeb', 'width': 320}, {'height': 351, 'url': 'https://external-preview.redd.it/andkMWhmc2N1eHlmMZnqQb6QAT_Zpu-mJr_VTzB2ofJ6yzR8C6yKW4qE1kiQ.png?width=640&crop=smart&format=pjpg&auto=webp&s=2899c391bb3fc92b6b1a8133549aa7caa7a9a8eb', 'width': 640}, {'height': 527, 'url': 'https://external-preview.redd.it/andkMWhmc2N1eHlmMZnqQb6QAT_Zpu-mJr_VTzB2ofJ6yzR8C6yKW4qE1kiQ.png?width=960&crop=smart&format=pjpg&auto=webp&s=eaaa66be8db0a0da2a57bc9486b3e502588d19d4', 'width': 960}], 'source': {'height': 532, 'url': 'https://external-preview.redd.it/andkMWhmc2N1eHlmMZnqQb6QAT_Zpu-mJr_VTzB2ofJ6yzR8C6yKW4qE1kiQ.png?format=pjpg&auto=webp&s=adc57a0eae771d060c8d696b51fb381feb3e4e8c', 'width': 968}, 'variants': {}}]}
I have free perplexity trials available to share. Just upvote and comment who really needs this and send dm
2
has GPT 5 and Gemini 2.5 Pro unlimited usage as well as agent mode
2025-11-03T00:35:04
https://www.reddit.com/r/LocalLLaMA/comments/1omxrk4/i_have_free_perplexity_trials_available_to_share/
HectoriusTheGlorious
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1omxrk4
false
null
t3_1omxrk4
/r/LocalLLaMA/comments/1omxrk4/i_have_free_perplexity_trials_available_to_share/
false
false
self
2
null
Where to learn GGML?
4
I am really new to ggml and I'd like to learn to build large models with this library for local usage. I have gone through the [introduction](https://huggingface.co/blog/introduction-to-ggml), but I'm still clueless as to what to do next, and reading the examples from implementations like whisper.cpp and llama.cpp is still very confusing. Also, if I'm not wrong, since this library is under active development, there's no documentation, right? My goal is to take a model made with libraries like TensorFlow, PyTorch or vLLM and convert it to ggml.
2025-11-03T00:08:41
https://www.reddit.com/r/LocalLLaMA/comments/1omx6t2/where_to_learn_ggml/
WittyWithoutWorry
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1omx6t2
false
null
t3_1omx6t2
/r/LocalLLaMA/comments/1omx6t2/where_to_learn_ggml/
false
false
self
4
{'enabled': False, 'images': [{'id': 'VWotpgM6ILbA6QWBRAIk-eo9WVK3GoxwBDh8C1ImbcA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/VWotpgM6ILbA6QWBRAIk-eo9WVK3GoxwBDh8C1ImbcA.jpeg?width=108&crop=smart&auto=webp&s=8116e5bda5122c7b5f1477620ebf52e1dc7f6539', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/VWotpgM6ILbA6QWBRAIk-eo9WVK3GoxwBDh8C1ImbcA.jpeg?width=216&crop=smart&auto=webp&s=17d50dd550c5538f5abdb992673adb6e09c8a9c7', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/VWotpgM6ILbA6QWBRAIk-eo9WVK3GoxwBDh8C1ImbcA.jpeg?width=320&crop=smart&auto=webp&s=b3a808ed653e2d2bceebb09a311d6829b85d2ab0', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/VWotpgM6ILbA6QWBRAIk-eo9WVK3GoxwBDh8C1ImbcA.jpeg?width=640&crop=smart&auto=webp&s=0ba9641bbfd3bccd20e0f204099e513da3a49dee', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/VWotpgM6ILbA6QWBRAIk-eo9WVK3GoxwBDh8C1ImbcA.jpeg?width=960&crop=smart&auto=webp&s=00680c4f5d11d0102adcc07c70092bcf31dcac3b', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/VWotpgM6ILbA6QWBRAIk-eo9WVK3GoxwBDh8C1ImbcA.jpeg?width=1080&crop=smart&auto=webp&s=f2f7be5bbc3568a423bf5ca367a69294bffacd15', 'width': 1080}], 'source': {'height': 628, 'url': 'https://external-preview.redd.it/VWotpgM6ILbA6QWBRAIk-eo9WVK3GoxwBDh8C1ImbcA.jpeg?auto=webp&s=f91ef9c37987ba2452e7866a386f78fcf11f646e', 'width': 1200}, 'variants': {}}]}
Intel Arc vs AMD AI Max+ 395?
6
I'm hoping to run a 32B model at higher speeds for chatting, coding and agent stuff with RAG. Which would be a better investment right now: the GMKTec Evo-X2 128GB with the AMD AI Max+ 395, or a custom build with 2x Intel Arc B50 or B580? These seem like the best options right now for large models. I would like to have the 128GB for more room for extra stuff like bigger models, STT, image generation, etc., but I'm not sure which is the best choice.
2025-11-02T23:35:52
https://www.reddit.com/r/LocalLLaMA/comments/1omwgab/intel_arc_vs_amd_ai_max_395/
wiltors42
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1omwgab
false
null
t3_1omwgab
/r/LocalLLaMA/comments/1omwgab/intel_arc_vs_amd_ai_max_395/
false
false
self
6
null
My patient received dangerous AI medical advice
0
https://www.huffpost.com/entry/doctors-ai-medical-advice-patients_n_6903965fe4b00c26f0707c41 I am a doctor who frequently encounters patients using AI, occasionally with harmful results. I wrote this article, including using Llama’s outputs for healthcare questions. What do people in this community think about patients using AI in healthcare?
2025-11-02T23:29:52
https://www.reddit.com/r/LocalLLaMA/comments/1omwbgc/my_patient_received_dangerous_ai_medical_advice/
accordion__
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1omwbgc
false
null
t3_1omwbgc
/r/LocalLLaMA/comments/1omwbgc/my_patient_received_dangerous_ai_medical_advice/
false
false
self
0
{'enabled': False, 'images': [{'id': 'nPBo2nCSeYzfLB0TiP7U5GNQm9TPTm4jq55-p2t4in4', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/nPBo2nCSeYzfLB0TiP7U5GNQm9TPTm4jq55-p2t4in4.jpeg?width=108&crop=smart&auto=webp&s=7e1f1f71b65ebaa0a2c9656a7efbd79819470739', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/nPBo2nCSeYzfLB0TiP7U5GNQm9TPTm4jq55-p2t4in4.jpeg?width=216&crop=smart&auto=webp&s=f2c80d8e25c25aea30a82ae507e2f6696f892056', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/nPBo2nCSeYzfLB0TiP7U5GNQm9TPTm4jq55-p2t4in4.jpeg?width=320&crop=smart&auto=webp&s=110004daec0c5adb8a99bdfe658f35cd2e2f74e1', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/nPBo2nCSeYzfLB0TiP7U5GNQm9TPTm4jq55-p2t4in4.jpeg?width=640&crop=smart&auto=webp&s=b7fc15cb3abfb9d3e2b20ff4b51554e80fad41ef', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/nPBo2nCSeYzfLB0TiP7U5GNQm9TPTm4jq55-p2t4in4.jpeg?width=960&crop=smart&auto=webp&s=27b102aa7d1d6ebe8962c2cb13e06b8f49a18c11', 'width': 960}, {'height': 606, 'url': 'https://external-preview.redd.it/nPBo2nCSeYzfLB0TiP7U5GNQm9TPTm4jq55-p2t4in4.jpeg?width=1080&crop=smart&auto=webp&s=eeb6807698c52ea2d3926a79af4bb5928882272f', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/nPBo2nCSeYzfLB0TiP7U5GNQm9TPTm4jq55-p2t4in4.jpeg?auto=webp&s=7f29b0584b15523fa109bc6e5d99d4a38cea0c31', 'width': 1121}, 'variants': {}}]}
A question about GGUF repos
0
So I had done some quants of [Qwen3-VL](https://huggingface.co/collections/noctrex/qwen3-vl) before the GOAT unsloth had released theirs, but now that they have released them, being that they always make the best-quality GGUFs out there, do you think I should keep mine, or just delete them?
2025-11-02T23:24:43
https://www.reddit.com/r/LocalLLaMA/comments/1omw7au/a_question_about_gguf_repos/
noctrex
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1omw7au
false
null
t3_1omw7au
/r/LocalLLaMA/comments/1omw7au/a_question_about_gguf_repos/
false
false
self
0
{'enabled': False, 'images': [{'id': 'N8GLzgPOJHrTNIj4wLf1QnfWTWqI2VpcLRnAPX5I-Qg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/N8GLzgPOJHrTNIj4wLf1QnfWTWqI2VpcLRnAPX5I-Qg.png?width=108&crop=smart&auto=webp&s=48c036eac2e27c60dc53b9247452212cb7ff68e3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/N8GLzgPOJHrTNIj4wLf1QnfWTWqI2VpcLRnAPX5I-Qg.png?width=216&crop=smart&auto=webp&s=3d2b4d7d1908e00226bac8cea9ca8a098f0b91b8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/N8GLzgPOJHrTNIj4wLf1QnfWTWqI2VpcLRnAPX5I-Qg.png?width=320&crop=smart&auto=webp&s=02fd2ff7aa1493e52e7d1fd43dcffc1a0c654d01', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/N8GLzgPOJHrTNIj4wLf1QnfWTWqI2VpcLRnAPX5I-Qg.png?width=640&crop=smart&auto=webp&s=5708a0ab4c5dd38afdabfa93999ae0a48afee5da', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/N8GLzgPOJHrTNIj4wLf1QnfWTWqI2VpcLRnAPX5I-Qg.png?width=960&crop=smart&auto=webp&s=eeee12ad3cbdee431ae931eff59a232a9e7b0906', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/N8GLzgPOJHrTNIj4wLf1QnfWTWqI2VpcLRnAPX5I-Qg.png?width=1080&crop=smart&auto=webp&s=bdff45b1efb68edb84abc1ae666c7c75b5aea600', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/N8GLzgPOJHrTNIj4wLf1QnfWTWqI2VpcLRnAPX5I-Qg.png?auto=webp&s=8d643c8b02d9c15ac2c8f8ea6862f026aaa4afb0', 'width': 1200}, 'variants': {}}]}
Need support from all members for the Windows on ARM
0
Good day, dear friends. I am writing here in the hope of finding support. If you don't feel you want to be a part of the cause, I ask you to ignore this; otherwise I will be very grateful if you could spare 40 seconds of your time. I will try to be brief, but I want you to understand the reasons behind my decision. I have made a couple of posts and I am trying to raise awareness about greediness and monopoly. We know that lately devices on ARM have gained popularity, and there are well-established reasons for it: 1- Superior performance 2- Superior battery life 3- Cheaper devices And much more. As developers, all of you will understand my pain, the pain related to Google's decision NOT to support ARM on Windows (or on Linux). It currently only supports ARM on macOS, because Apple had the guts to say -> either come with me or get lost. Windows cannot say that; it has to support tons of devices and different architectures. But no technical person in their right mind can deny that developing on ARM is a joy. Someone can prefer Windows or hate it, prefer Linux or hate it, but we can always agree that ARM chips are superior nowadays to any x86\_64 CPU. The issue was raised more than 2 years ago and has been tracked in Google's issue tracker for a year. **I only ask 40 seconds of your time:** 1: Visit the issue tracker → [https://issuetracker.google.com/issues/386749845](https://issuetracker.google.com/issues/386749845) 2: Click **“+1”** If you can help, this will push back against monopoly and give diversity to the people. We always struggle against big corporations, and I have always believed that dev communities are strong in their voices. We can click the "UP" icon for memes and give 2k reactions, and I only ask the same number of seconds from you to make a real impact. In this group we have many developers who DO care, and I hope those who don't care will change their minds. I myself have always been ready for any cause in this group, to be like one big family. Thanks everyone for your support!
2025-11-02T23:13:26
https://www.reddit.com/r/LocalLLaMA/comments/1omvyb6/need_support_from_all_members_for_the_windows_on/
theundertakeer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1omvyb6
false
null
t3_1omvyb6
/r/LocalLLaMA/comments/1omvyb6/need_support_from_all_members_for_the_windows_on/
false
false
self
0
null
Why don’t AI companies train their models on silent screen recordings of people coding?
1
[removed]
2025-11-02T21:26:15
https://www.reddit.com/r/LocalLLaMA/comments/1omtej4/why_dont_ai_companies_train_their_models_on/
Different-Term-893
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1omtej4
false
null
t3_1omtej4
/r/LocalLLaMA/comments/1omtej4/why_dont_ai_companies_train_their_models_on/
false
false
self
1
null
Polish is the most effective language for prompting AI, study reveals
447
2025-11-02T21:02:40
https://www.euronews.com/next/2025/11/01/polish-to-be-the-most-effective-language-for-prompting-ai-new-study-reveals
tengo_harambe
euronews.com
1970-01-01T00:00:00
0
{}
1omst7q
false
null
t3_1omst7q
/r/LocalLLaMA/comments/1omst7q/polish_is_the_most_effective_language_for/
false
false
default
447
{'enabled': False, 'images': [{'id': 'HLkT8hEFTM_i4ECT9hWztFsoDf9RouYG6ZWJzAhQOUY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/HLkT8hEFTM_i4ECT9hWztFsoDf9RouYG6ZWJzAhQOUY.jpeg?width=108&crop=smart&auto=webp&s=652cf14ee3e80736cb9beb583f03daea669dca05', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/HLkT8hEFTM_i4ECT9hWztFsoDf9RouYG6ZWJzAhQOUY.jpeg?width=216&crop=smart&auto=webp&s=5e9bffc7d12cfdfabe0e45ea140064f384cbc777', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/HLkT8hEFTM_i4ECT9hWztFsoDf9RouYG6ZWJzAhQOUY.jpeg?width=320&crop=smart&auto=webp&s=11eb81ad31afdf47e04bd5bed3d80e7f0220a2ac', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/HLkT8hEFTM_i4ECT9hWztFsoDf9RouYG6ZWJzAhQOUY.jpeg?width=640&crop=smart&auto=webp&s=9e82f2cea2a7637a951d648c1b316bc8d9248d9c', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/HLkT8hEFTM_i4ECT9hWztFsoDf9RouYG6ZWJzAhQOUY.jpeg?width=960&crop=smart&auto=webp&s=36940559104a974b5ccd22577b34b8a512fcc6a8', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/HLkT8hEFTM_i4ECT9hWztFsoDf9RouYG6ZWJzAhQOUY.jpeg?width=1080&crop=smart&auto=webp&s=377b957f839f23e7801478eb88183cad8445a74a', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/HLkT8hEFTM_i4ECT9hWztFsoDf9RouYG6ZWJzAhQOUY.jpeg?auto=webp&s=18624b9a681a283f20b9ab1b6b607a965bf5f02e', 'width': 1200}, 'variants': {}}]}
LM Studio - Recent guidance on replacing llama.cpp?
1
[removed]
2025-11-02T21:02:12
https://www.reddit.com/r/LocalLLaMA/comments/1omsss9/lm_studio_recent_guidance_on_replacing_llamacpp/
MarqueeInsights
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1omsss9
false
null
t3_1omsss9
/r/LocalLLaMA/comments/1omsss9/lm_studio_recent_guidance_on_replacing_llamacpp/
false
false
self
1
null
Polish to be the most effective language for prompting AI, new study reveals
1
[deleted]
2025-11-02T21:01:44
[deleted]
1970-01-01T00:00:00
0
{}
1omsscn
false
null
t3_1omsscn
/r/LocalLLaMA/comments/1omsscn/polish_to_be_the_most_effective_language_for/
false
false
default
1
null
The Zero Freeze Formula: Teaching Local LLaMA Real Physics Through Python (SU(3) Mass Gap Simulation) to solve the Yang–Mills Mass Gap
0
The Zero Freeze Formula: Teaching Local LLaMA Real Physics Through Python (SU(3) Mass Gap Simulation) to solve the Yang–Mills Mass Gap TL;DR We taught LLaMA how to solve a mass gap. It ran the Hamiltonian, stabilized it, and learned from it. Now you can too -- locally. Zero Freeze Formula + Local LLaMA = AI-assisted Physics Lab. \>>>New Model / Open Release The Zero Freeze update takes the symbolic logic roots of Zero-Ology / Void-Math OS and turns them into something physical -- a full, working SU(3) Hamiltonian mass-gap simulator that can now train local LLaMA models (Ollama, Phi, Mistral, LLaMA 2 or add more ai API) on how to reason through and compute confinement energy in quantum fields. Zero\_Freeze\_Hamiltonian\_Lattice\_Gauge\_Benchmark\_Suite.py A compact open-source Python system that builds and diagonalizes a real SU(3) gauge Hamiltonian directly on your machine. It measures the energy gap for lattice sizes L = 4, 8, 16 … proving a stable, non-zero mass gap -- the hallmark of confinement in Yang–Mills theory. But here’s the new part: \> You can now feed this script into your local LLaMA environment. \> The model learns the physics workflow -- not just the output. \> Then it helps teach other Ollama models the same reasoning steps through Zero-Ology logic and Void-Math OS introspection operators. It’s a bridge between symbolic cognition and quantum simulation. Run the zer00logy\_coreV04461.py Python script under your local LLaMA or Ollama console - Type !@0ko@!/Zero\_Freeze\_Yang\_Mills\_Formula To Prompt - Type !@0ko@!/Zero\_Freeze\_Hamiltonian\_Lattice\_Gauge\_Benchmark\_Suite To Run Python Script. The model reads the lattice-building and solver code line-by-line, forming an internal symbolic map of: Hermiticity checks Eigenvalue stability (Δvals) Iterative solver convergence Additionally - Using Void-Math operators (⊗, Ω, Ψ), LLaMA learns to reason recursively about numerical stability and symbolic collapse -- effectively “thinking in Hamiltonians.” Once trained, you can use [GroupChatForge.py](http://GroupChatForge.py) to launch multi-user simulated labs, where several humans (or AIs) co-edit a physics prompt together before sending it to the local model for evaluation. ( Beta Example ) Now your local AI becomes part of a collaborative physics experiment, sharing symbolic and numerical reasoning with other models (Phi, Mistral, Llama, Gemini, ChatGPT, Grok, Copilot etc). How It Works Builds a real SU(3) Hamiltonian from 3×3 Gell-Mann matrices. Uses deterministic sparse diagonalization (no Monte Carlo noise). Includes self-healing solver fallback for numerical stability. Verifies physics conditions automatically: Hermiticity Eigenvalue normalization Δvals stability Mass gap persistence All done on a CPU laptop — no GPU, no supercomputer. The vacuum stayed stable. The mass gap stayed positive. Open Source Repository GitHub: Zero-Ology/Zero\_Freeze\_Hamiltonian\_Lattice\_Gauge\_Benchmark\_Suite.py at main · haha8888haha8888/Zero-Ology (mirrored with Zer00logy ecosystem) Includes: Full Python script -- Zero\_Freeze\_Hamiltonian\_Lattice\_Gauge\_Benchmark\_Suite.py Eigenvalue logs from prototype runs Annotated paper draft (plaintext + LaTeX) Verification utilities for is\_hermitian, solver diagnostics, and stability checks. The mass gap problem defines why quantum fields in the strong force are confined. A positive Δm means: the vacuum resists excitation. Matter is bound. Energy “freezes” into mass. 
That’s why this model is called Zero Freeze -- it’s where zero isn’t empty… it’s frozen potential. Credits Author: Stacey Szmy Co-Authors: OpenAI ChatGPT, Microsoft Copilot Special Thanks: OpenAI, Meta, Microsoft, and the open science community. License: Zero-Ology License 1.15

Core Formula -- The Zero Freeze Mass Gap Relation

Let H be the lattice Hamiltonian for a compact gauge group G = SU(3), acting on a finite 2D lattice of size L. We compute its spectrum and define the mass gap as

Δm = E₁ − E₀

where E₀ is the ground-state energy (the vacuum) and E₁ is the first excited energy (the lightest glueball or excitation).

Existence Condition: for a confining quantum gauge field (such as SU(3)), Δm > 0. That means the energy spectrum is gapped, and the vacuum is stable.

Lattice Limit Relation: in the continuum limit as the lattice spacing a → 0, Δm(a) → m_phys. This m_phys is the physical mass gap, the minimal excitation energy above the vacuum.

Numerical Implementation (as in the Python suite), where: U = SU(3) link operator (built from Gell-Mann matrices), E = corresponding conjugate electric field operator, α, β = coupling constants normalized for each prototype mode, ε ≈ numerical tolerance (∼10⁻³–10⁻⁴ in tests).

Observed Prototype Result (empirical validation)

| Lattice Size (L) | Δm (Observed) | Stability (Δvals) |
|---|---|---|
| 4 | 0.00456 | 2.1×10⁻³ |
| 8 | ~0.002xx | stable |
| 16 | ~0.001x | consistent |

Confirms: Δm > 0 at every tested lattice size. Interpretation: Δm > 0 means the quantum vacuum resists excitation → confinement; Δm = 0 means the system is massless → unconfined. Observed behavior matches theoretical expectations for SU(3) confinement. Obviously without a supercomputer you only get so close :D haha, it won't be a proof, I'm sure of that, but >> it could become ... 
A validated numerical prototype demonstrating non-zero spectral gaps in a Real SU(3) operator --supporting the confinement hypothesis and establishing a reproducible benchmark for future computational gauge theory studies ;) :) \>>LOG: === GRAND SUMMARY (Timestamp: 2025-11-02 15:01:29) === L=4 Raw SU(3) Original: mass\_gap: 0.006736878563294524 hermitian: True normalized: False discrete\_gap: False prototype: True notes: Discrete gap issue; Eigenvalues: \[-1.00088039 -0.99414351 -0.98984368 -0.98193738 -0.95305459 -0.95303209 \-0.95146243 -0.94802272 -0.94161539 -0.93038092 -0.92989319 -0.92457688 \-0.92118877 -0.90848878 -0.90164848 -0.88453912 -0.87166522 -0.87054661 \-0.85799109 -0.84392243\] L=4 Gauge-Fixed SU(3) Original: mass\_gap: 0.006736878563295523 hermitian: True normalized: False discrete\_gap: False prototype: True notes: Discrete gap issue; Eigenvalues: \[-1.00088039 -0.99414351 -0.98984368 -0.98193738 -0.95305459 -0.95303209 \-0.95146243 -0.94802272 -0.94161539 -0.93038092 -0.92989319 -0.92457688 \-0.92118877 -0.90848878 -0.90164848 -0.88453912 -0.87166522 -0.87054661 \-0.85799109 -0.84392243\] L=4 Raw SU(3) Boosted: mass\_gap: 0.00673687856329408 hermitian: True normalized: False discrete\_gap: False prototype: True notes: Discrete gap issue; Eigenvalues: \[-0.90088039 -0.89414351 -0.88984368 -0.88193738 -0.85305459 -0.85303209 \-0.85146243 -0.84802272 -0.84161539 -0.83038092 -0.82989319 -0.82457688 \-0.82118877 -0.80848878 -0.80164848 -0.78453912 -0.77166522 -0.77054661 \-0.75799109 -0.74392243\] L=4 Gauge-Fixed SU(3) Boosted: mass\_gap: 0.00673687856329519 hermitian: True normalized: False discrete\_gap: False prototype: True notes: Discrete gap issue; Eigenvalues: \[-0.90088039 -0.89414351 -0.88984368 -0.88193738 -0.85305459 -0.85303209 \-0.85146243 -0.84802272 -0.84161539 -0.83038092 -0.82989319 -0.82457688 \-0.82118877 -0.80848878 -0.80164848 -0.78453912 -0.77166522 -0.77054661 \-0.75799109 -0.74392243\] L=8 Raw SU(3) Original: mass\_gap: 0.0019257741216218704 hermitian: True normalized: False discrete\_gap: False prototype: True notes: Discrete gap issue; Eigenvalues: \[-1.03473039 -1.03280462 -1.02160111 -1.00632093 -1.00304064 -1.00122621 \-1.00098544 -1.00063794 -0.99964038 -0.99941845 -0.99934453 -0.99862362\] L=8 Gauge-Fixed SU(3) Original: mass\_gap: 0.0019257741216216484 hermitian: True normalized: False discrete\_gap: False prototype: True notes: Discrete gap issue; Eigenvalues: \[-1.03473039 -1.03280462 -1.02160111 -1.00632093 -1.00304064 -1.00122621 \-1.00098544 -1.00063794 -0.99964038 -0.99941845 -0.99934453 -0.99862358\] L=8 Raw SU(3) Boosted: mass\_gap: 0.0019257741216203161 hermitian: True normalized: False discrete\_gap: False prototype: True notes: Discrete gap issue; Eigenvalues: \[-0.93473039 -0.93280462 -0.92160111 -0.90632093 -0.90304064 -0.90122621 \-0.90098544 -0.90063794 -0.89964038 -0.89941845 -0.89934452 -0.89862352\] L=8 Gauge-Fixed SU(3) Boosted: mass\_gap: 0.0019257741216218704 hermitian: True normalized: False discrete\_gap: False prototype: True notes: Discrete gap issue; Eigenvalues: \[-0.93473039 -0.93280462 -0.92160111 -0.90632093 -0.90304064 -0.90122621 \-0.90098544 -0.90063794 -0.89964038 -0.89941845 -0.89934453 -0.89862362\] L=16 Raw SU(3) Original: mass\_gap: 0.0013967382831825415 hermitian: True normalized: False discrete\_gap: True prototype: True notes: Eigenvalues: \[-1.03700802 -1.03561128 -1.03520171 -1.03376882 -1.03152725 -1.02816263 \-1.027515 -1.02575789 -1.02407356 -1.02134187 -1.01827701 -1.0173832 \] L=16 Gauge-Fixed 
SU(3) Original: mass\_gap: 0.0013967382831823194 hermitian: True normalized: False discrete\_gap: True prototype: True notes: Eigenvalues: \[-1.03700802 -1.03561128 -1.03520171 -1.03376882 -1.03152725 -1.02816263 \-1.027515 -1.02575789 -1.02407356 -1.02134187 -1.018277 -1.01736196\] L=16 Raw SU(3) Boosted: mass\_gap: 0.0013967382831825415 hermitian: True normalized: False discrete\_gap: True prototype: True notes: Eigenvalues: \[-0.93700802 -0.93561128 -0.93520171 -0.93376882 -0.93152725 -0.92816263 \-0.927515 -0.92575789 -0.92407356 -0.92134187 -0.91827705 -0.91738514\] L=16 Gauge-Fixed SU(3) Boosted: mass\_gap: 0.0013967382831818753 hermitian: True normalized: False discrete\_gap: True prototype: True notes: Eigenvalues: \[-0.93700802 -0.93561128 -0.93520171 -0.93376882 -0.93152725 -0.92816263 \-0.927515 -0.92575789 -0.92407356 -0.92134187 -0.91827694 -0.91737801\] === Suggested optimized ranges based on this run === Tolerance used: 1e-10 Max iterations used: 300 All lattices complete in 79.4s. Millennium Prize Mode: ENGAGED 🏆 Export Options: 1: Save as CSV 2: Save as JSON 3: Save as CSV + JSON Enter your choice (or press Enter to skip export): Made by: Stacey Szmy, OpenAI ChatGPT, Microsoft Copilot. Script: Zero\_Freeze\_Hamiltonian\_Lattice\_Gauge\_Benchmark\_Suite.py License: Zero-Ology v1.15 [zero-ology \/ zer00logy](https://preview.redd.it/5jkpp743nwyf1.png?width=647&format=png&auto=webp&s=ad12cdd9671c96bd12dd6c7116d33054ca5ae1f0)
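For readers who want to see the shape of the measurement without the full suite: a minimal sketch of the mass-gap computation described above, using SciPy's sparse eigensolver on a toy Hermitian operator. This is not the author's Zero_Freeze script; the operator here is a random stand-in, and only the hermiticity check and the Δm = E₁ − E₀ step mirror the post.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import eigsh

# Toy stand-in for the lattice Hamiltonian: a small sparse operator,
# symmetrised so that it is exactly Hermitian.
n = 512
A = sparse_random(n, n, density=0.01, format="csr", random_state=0)
H = (A + A.T.conj()) * 0.5

# Hermiticity check, as the suite does before diagonalising.
assert abs(H - H.T.conj()).max() < 1e-12, "Hermiticity check failed"

# Two lowest eigenvalues: ground state E0 and first excited state E1.
vals = eigsh(H, k=2, which="SA", return_eigenvectors=False)
E0, E1 = np.sort(vals)
print("mass gap Dm = E1 - E0 =", E1 - E0)
```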
2025-11-02T20:37:29
https://www.reddit.com/r/LocalLLaMA/comments/1oms615/the_zero_freeze_formula_teaching_local_llama_real/
zero_moo-s
self.LocalLLaMA
2025-11-02T20:56:11
0
{}
1oms615
false
null
t3_1oms615
/r/LocalLLaMA/comments/1oms615/the_zero_freeze_formula_teaching_local_llama_real/
false
false
https://b.thumbs.redditm…D-tO11s3Zxtg.jpg
0
null
I’ve been working on an app called Magic Tales: Bedtime Stories that helps parents create magical bedtime moments for their kids. | On Device Using Local LLM
1
[Magic Tales – Bedtime Stories](https://apps.apple.com/us/app/magic-tales-bedtime-stories/id6751983981) You just pick a theme and a character, and the app instantly generates a **unique bedtime story,** complete with beautiful text and AI-crafted images. Each story feels personal, fun, and comforting for kids before sleep. What makes it special: • **Powered by Apple Intelligence**: all story generation happens **on-device**, keeping everything 100% private and safe • **Optimized for iOS 26** for smooth, fast performance • Lets parents create **personalized stories in seconds**, turning bedtime into a creative bonding time If you’re a parent, educator, or just someone who loves storytelling, I’d love your feedback or ideas for themes and features to add next!
2025-11-02T20:33:56
https://i.redd.it/xg36la18mwyf1.png
Late-Scarcity-5476
i.redd.it
1970-01-01T00:00:00
0
{}
1oms2qy
false
null
t3_1oms2qy
/r/LocalLLaMA/comments/1oms2qy/ive_been_working_on_an_app_called_magic_tales/
false
false
https://b.thumbs.redditm…sSl9Xy9FfurY.jpg
1
{'enabled': True, 'images': [{'id': 'FtesEHggNOA_YWRGhv7ic4s77VJaOrOUHARsiQbnBVY', 'resolutions': [{'height': 77, 'url': 'https://preview.redd.it/xg36la18mwyf1.png?width=108&crop=smart&auto=webp&s=653209605fbbd4443aaba940c1c89e1f07ad106e', 'width': 108}, {'height': 154, 'url': 'https://preview.redd.it/xg36la18mwyf1.png?width=216&crop=smart&auto=webp&s=faa922c9305985bfe580b56e96a0ea8eda133218', 'width': 216}, {'height': 228, 'url': 'https://preview.redd.it/xg36la18mwyf1.png?width=320&crop=smart&auto=webp&s=48f825445528d039066691e9a1bcb6d58021985f', 'width': 320}, {'height': 457, 'url': 'https://preview.redd.it/xg36la18mwyf1.png?width=640&crop=smart&auto=webp&s=c6881920145487a8eb4416308786c6b165507934', 'width': 640}, {'height': 685, 'url': 'https://preview.redd.it/xg36la18mwyf1.png?width=960&crop=smart&auto=webp&s=1d76ad883714a958a30487ef2e081d2b6bed9ea3', 'width': 960}, {'height': 771, 'url': 'https://preview.redd.it/xg36la18mwyf1.png?width=1080&crop=smart&auto=webp&s=9c752cbfc1ee9e0e0d7d4b0dfff9dedee4d01d46', 'width': 1080}], 'source': {'height': 4521, 'url': 'https://preview.redd.it/xg36la18mwyf1.png?auto=webp&s=2b3884a5ac6d9df81d815edd1fd6322897372017', 'width': 6328}, 'variants': {}}]}
I am in the process of making this tool
0
[removed]
2025-11-02T20:29:19
https://www.reddit.com/r/LocalLLaMA/comments/1omryhq/i_am_in_the_proccess_of_making_this_tool/
Appropriate-Age-798
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1omryhq
false
null
t3_1omryhq
/r/LocalLLaMA/comments/1omryhq/i_am_in_the_proccess_of_making_this_tool/
false
false
self
0
{'enabled': False, 'images': [{'id': 'F6B91u8_h2p6PW-DxDxBGQBMNoPuXtwYLxlS2Nws7tU', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/F6B91u8_h2p6PW-DxDxBGQBMNoPuXtwYLxlS2Nws7tU.jpeg?width=108&crop=smart&auto=webp&s=d293c391c16ee5e85d629202b471812849994133', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/F6B91u8_h2p6PW-DxDxBGQBMNoPuXtwYLxlS2Nws7tU.jpeg?width=216&crop=smart&auto=webp&s=ef7aa9354ad8370233df52c83306bcbc8fc36f43', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/F6B91u8_h2p6PW-DxDxBGQBMNoPuXtwYLxlS2Nws7tU.jpeg?width=320&crop=smart&auto=webp&s=b14c5f2e4da7ce3df36585cdc0692c48e8f564a3', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/F6B91u8_h2p6PW-DxDxBGQBMNoPuXtwYLxlS2Nws7tU.jpeg?auto=webp&s=0bd5c47c58332ac7e3e3d801838516580b3a4349', 'width': 480}, 'variants': {}}]}
Is there a resource listing workstation builds for different budgets (for local model training/inference)?
1
I'm trying to figure out what kind of workstation makes sense for running and maybe fine-tuning models locally. Does anyone know of a **current list or guide** that suggests hardware setups (CPU, GPU, RAM, etc.) for different **budget levels** — say, around €2K, €3K, €5K? Also, how do people here feel about the **Mac Studio M3 Ultra** as an option? I know it doesn't support CUDA, but the unified memory and efficiency look appealing — curious if anyone's made it work for local LLMs or vision models. Would love to hear about your own setups and what's working well for you!
2025-11-02T20:17:44
https://www.reddit.com/r/LocalLLaMA/comments/1omro24/is_there_a_resource_listing_workstation_builds/
valkiii
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1omro24
false
null
t3_1omro24
/r/LocalLLaMA/comments/1omro24/is_there_a_resource_listing_workstation_builds/
false
false
self
1
null
Any reasoning models that are small (under 500 million) that can be used to study transactions?
3
Hello friends, I'm looking for small reasoning models (under 500 million parameters) that can analyze transactions. I'm working on a fraud detection task and want to use 2-3 small models. I'd give each one a subtask from the problem statement, so one handles part of it, creates an intermediate result, and passes it to the next: a pipeline (see the sketch below). For example, one could detect anomalies, and another could provide summaries. The output needs to be structured JSON. Any suggestions? Something that could run on a good CPU.
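A rough sketch of the two-stage pipeline idea, assuming the two small models sit behind a local OpenAI-compatible endpoint (the URL, model names and JSON schemas below are placeholders, not recommendations):

```python
import json
import requests

URL = "http://localhost:8080/v1/chat/completions"  # placeholder local server

def call(model: str, system: str, user: str) -> dict:
    """Send one pipeline stage to a local OpenAI-compatible endpoint and parse JSON."""
    r = requests.post(URL, json={
        "model": model,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
        "temperature": 0.0,
    }, timeout=120)
    r.raise_for_status()
    return json.loads(r.json()["choices"][0]["message"]["content"])

transactions = [{"id": 1, "amount": 9800, "country": "NG", "hour": 3}]

# Stage 1: anomaly detection, structured output.
flags = call(
    "anomaly-model",  # placeholder name
    'Reply with JSON only: {"flags": [{"id": ..., "reason": ...}]}',
    json.dumps(transactions),
)

# Stage 2: summary of the flagged items for a human reviewer.
summary = call(
    "summary-model",  # placeholder name
    'Reply with JSON only: {"summary": "..."}',
    json.dumps(flags),
)
print(summary)
```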
2025-11-02T20:06:54
https://www.reddit.com/r/LocalLLaMA/comments/1omrdq5/any_reasoning_models_that_are_small_under_500/
Fodz1911
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1omrdq5
false
null
t3_1omrdq5
/r/LocalLLaMA/comments/1omrdq5/any_reasoning_models_that_are_small_under_500/
false
false
self
3
null
Qwen3 VL 30b a3b is pure love
264
It's been a while since that model became available as a GGUF and could be used with llama.cpp. A quick test using OpenWebUI showed it's pretty fast on a 3060 12G with the experts on the CPU. It takes only about 3.5 sec to process high-quality phone images and generates responses at 30 t/s, while taking only 8 GB of VRAM. I'm using Unsloth's Q8 with the mmproj-F32 file. The model is so good that I actually continued work on a project that I had left off for a couple of months, as I couldn't get models from OpenRouter to work reliably, nor Google's models via their API. Well, those models reliably extracted the data that I needed, but somehow I did not manage to get good boxes or single-point coordinates from them. And what am I supposed to say? Qwen3 VL 30b a3b simply nails it. The whole thing works exactly the way I imagined it. I got really inspired to get back to this project and finally get it finished. As my programming skills are kinda meh, I turned on the vibecoding machine and played around. But now I can proudly present my new tool to create inventory lists from images. Probably nothing special for many of you, but it's the only useful thing I have done with AI so far. Therefore I'm really happy. Enjoy this demo, where I set up a project, define the data that I need from the images and that is important for my inventory, then take a couple of images of the object's front and back, review the extracted data, check if it's correct and then feed it into the inventory table. The video is 2.5x sped up. I will share the project as an easily deployable Docker container once I've tidied up the codebase a bit; shouldn't be too much work. Some stats: the full-precision mmproj and Q8 of the LLM need about 7 seconds to encode 2 images (on the 3060), so it takes 7 seconds to understand the front and the back of my object. It then needs 10 seconds to output JSON with the extracted data and the coordinates for 4 table columns. 4 columns of the table = 300 tokens; at 30 t/s that takes 10 seconds. In total this is less than 20 seconds per container, and I am really looking forward to building up some nice inventory lists of whatever I need listed. [2.5x sped up. ](https://reddit.com/link/1omr9rc/video/wm6ts19kgwyf1/player)
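For anyone wanting to try the extraction step themselves, here is a minimal sketch of the kind of request such a tool could make, assuming llama-server is running Qwen3 VL with its mmproj on localhost:8080; the endpoint, prompt and field names are illustrative, not the author's actual code:

```python
import base64
import json
import requests

def extract_inventory(image_path: str) -> dict:
    """Ask the local VL server for inventory fields plus point coordinates."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    payload = {
        "messages": [{
            "role": "user",
            "content": [
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
                {"type": "text",
                 "text": "Extract name, brand, serial number and quantity. "
                         "Reply with JSON only, each field as "
                         '{"value": ..., "point": [x, y]} in pixel coordinates.'},
            ],
        }],
        "temperature": 0.0,
    }
    r = requests.post("http://localhost:8080/v1/chat/completions",
                      json=payload, timeout=120)
    r.raise_for_status()
    # Real output may need markdown code fences stripped before parsing.
    return json.loads(r.json()["choices"][0]["message"]["content"])

print(extract_inventory("front.jpg"))
```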
2025-11-02T20:02:36
https://www.reddit.com/r/LocalLLaMA/comments/1omr9rc/qwen3_vl_30b_a3b_is_pure_love/
Njee_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1omr9rc
false
null
t3_1omr9rc
/r/LocalLLaMA/comments/1omr9rc/qwen3_vl_30b_a3b_is_pure_love/
false
false
self
264
null
Road to logical thinking, monkey Idea?
0
About me: I actively started learning about LLMs and machine learning in September 2023, and I am what you'd once have called a script kiddie, but nowadays it's with Docker containers, and I really love the open-source world, because you get a very quick grasp of what is possible right now. Since then I have stumbled upon some very fun-to-read papers. I have no deeper knowledge, but what I see is that we have those 16-bit models that can be quantized down to 4 bit and stay reasonably comparable. So the 16-bit model, as I understand it, is filled with those ML artifacts, and you would just need to get some mathematical logic into those random monkey-produced prompt tokens. Right now we have the hallucination of logical thinking in LLMs, where logical training data is just rubbed into the training process like you jerk parts of the body and hope something sticks. Now what if we used the remaining precision up to 16 bit to implement some sort of integrated graph RAG, to give a token some sort of meta context that would maybe be abstract enough for some mathematical logic to grasp and follow through? I know, foolish, but maybe someone smarter than me knows much more about this and has the time to tell me why it's not possible, not possible right now... or that it's actually already done like that.
2025-11-02T20:01:00
https://www.reddit.com/r/LocalLLaMA/comments/1omr89o/road_to_logical_thinking_monkey_idea/
Sea_Calendar_3912
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1omr89o
false
null
t3_1omr89o
/r/LocalLLaMA/comments/1omr89o/road_to_logical_thinking_monkey_idea/
false
false
self
0
null
MacOS automate spin up & spin down of llm dependant upon request?
0
Hi folks. I've been experimenting with running some local models and enjoying the process. I'm generally agnostic about using Ollama, LM Studio etc. I'm wondering if there is a way in which I could spin up and spin down an LLM automatically? Say, for example, I have an instance of n8n which currently connects to LM Studio. Would it be possible, when n8n sends its query to my Mac Studio (the LLM host), for the Mac to load the model, do its thing and spin down the model afterwards? I currently use my Mac for a load of video editing and media creation, so I often reach the upper end of the RAM usage before loading any LLM models. My intent is to spin LLM instances up during my non-working hours when system resources are generally freed up from rendering and day-to-day work. I understand that this may be slightly outside the remit of the sub, but worth asking. Many thanks.
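One hedged way to get the load-on-demand / unload-after behaviour is to swap LM Studio for Ollama and lean on its documented keep_alive parameter: the model is loaded when the first request arrives and evicted from memory after the idle window. A minimal sketch (host and model name are placeholders):

```python
import requests

OLLAMA = "http://localhost:11434"  # default Ollama port

def ask(prompt: str, model: str = "llama3.1:8b") -> str:
    r = requests.post(f"{OLLAMA}/api/generate", json={
        "model": model,
        "prompt": prompt,
        "stream": False,
        # Keep the model in RAM for 5 minutes after the last request, then let
        # Ollama evict it automatically; "0" would unload immediately instead.
        "keep_alive": "5m",
    }, timeout=600)
    r.raise_for_status()
    return r.json()["response"]

if __name__ == "__main__":
    print(ask("Summarise yesterday's render queue in one sentence."))
```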
2025-11-02T19:44:37
https://www.reddit.com/r/LocalLLaMA/comments/1omqsyl/macos_automate_spin_up_spin_down_of_llm_dependant/
ScottAMains
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1omqsyl
false
null
t3_1omqsyl
/r/LocalLLaMA/comments/1omqsyl/macos_automate_spin_up_spin_down_of_llm_dependant/
false
false
self
0
null
Devstral-small-2505 crashing on LM studio
0
Hi, I just started using Devstral with LM Studio, trying to get some use out of my 3090 GPU and 64GB of system RAM. It worked quite well, even better than the Qwen 30B coder instruct, but on multiple occasions it seems to crash with this error message: >The model has crashed without additional information. (Exit code: 18446744072635812000). Error Data: n/a, Additional Data: n/a The task itself is simple: create a ReactJS hook and import it into another file, and I am using opencode for it. I am running: * CUDA as backend * KV cache quantization at Q8 * CPU offloading of 8 layers (out of 40) * the model is from the [LM studio community ](https://huggingface.co/lmstudio-community/Devstral-Small-2505-GGUF) Not sure what the problem is, but the issue is consistent.
2025-11-02T19:39:06
https://www.reddit.com/r/LocalLLaMA/comments/1omqnrz/devstralsmall2505_crashing_on_lm_studio/
UniqueAttourney
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1omqnrz
false
null
t3_1omqnrz
/r/LocalLLaMA/comments/1omqnrz/devstralsmall2505_crashing_on_lm_studio/
false
false
self
0
{'enabled': False, 'images': [{'id': 'hT8mTApqEjSu9qGi5eE4dUQ86Wp-exFT6za5jTaD2zc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/hT8mTApqEjSu9qGi5eE4dUQ86Wp-exFT6za5jTaD2zc.png?width=108&crop=smart&auto=webp&s=b3efa2b86763136f340395fe3306795139913598', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/hT8mTApqEjSu9qGi5eE4dUQ86Wp-exFT6za5jTaD2zc.png?width=216&crop=smart&auto=webp&s=e8cad98de5716d87d113d2f3a60ca1a7a404b75b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/hT8mTApqEjSu9qGi5eE4dUQ86Wp-exFT6za5jTaD2zc.png?width=320&crop=smart&auto=webp&s=3788cb67a30d5a5b5406e02d7c1444563e4fa4e4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/hT8mTApqEjSu9qGi5eE4dUQ86Wp-exFT6za5jTaD2zc.png?width=640&crop=smart&auto=webp&s=fab7fb81087f99909b14b92a47698926f07492aa', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/hT8mTApqEjSu9qGi5eE4dUQ86Wp-exFT6za5jTaD2zc.png?width=960&crop=smart&auto=webp&s=e88e3b60393f6add4f1600c863c93675fb67f786', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/hT8mTApqEjSu9qGi5eE4dUQ86Wp-exFT6za5jTaD2zc.png?width=1080&crop=smart&auto=webp&s=8c287f0be0ec083582e0b89aafe28ec69f677795', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/hT8mTApqEjSu9qGi5eE4dUQ86Wp-exFT6za5jTaD2zc.png?auto=webp&s=669cd049d89843d25a89c60896394a1dfe469142', 'width': 1200}, 'variants': {}}]}
Struggling to get the uncensored models work
0
https://preview.redd.it/…and other things
2025-11-02T19:24:55
https://www.reddit.com/r/LocalLLaMA/comments/1omqath/struggling_to_get_the_uncensored_models_work/
udt007
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1omqath
false
null
t3_1omqath
/r/LocalLLaMA/comments/1omqath/struggling_to_get_the_uncensored_models_work/
false
false
https://b.thumbs.redditm…xRFbJA3RzRWQ.jpg
0
null
Vision = Language: I Decoded VLM Tokens to See What AI 'Sees' 🔬
238
I've spent a lot of time learning how language models work, but images obviously aren't language – so how is it possible for AI to understand an image? I studied Gemma 3 to learn about how modern vision language models work. **The core finding:** Vision language models are just language models that learned to "speak image". Images get encoded as tokens in linguistic space, and then the language model processes them identically to text. So, if visual information gets translated into linguistic space, can we interpret the image tokens by mapping them to vocabulary space? I built an unembedding technique to answer that question and analyze what semantic information is encoded in the image tokens. # Background: How VLMs Work Here's a diagram I created for my video that I think is helpful: https://preview.redd.it/rk2m9rsk5wyf1.png?width=960&format=png&auto=webp&s=1149132fb1b0148c3684a54a14bcb3a7f84cb8ae As you can see, there are two pieces: the vision tower + a standard language model. The vision tower is quite literally bolted on to a normal language model. For Gemma 3 specifically, the data flow is: 1. Preprocessing: Convert image → 3 × 896 × 896 pixels 2. Vision transformer: Process pixels → 4,096 image tokens 3. Multimodal projector: Compress 4,096 tokens → 256 tokens (semantically meaningful in language model's d\_model space) 4. Language model: Image tokens and text tokens processed identically The brilliance is the multimodal projector – it translates visual information into linguistic space. # Method: Unembedding Image Tokens **Validation:** First, I validated the technique with text tokens. By taking a token embedding and passing it directly through the language head (bypassing the transformer layers), I could recover the original token with 100% accuracy. This proves that unembedding works for linguistic tokens. **Applying to images:** The same technique can be applied to image tokens: Image → Vision Tower → Multimodal Projector → 256 image tokens → Unembed each token This is greedy unembedding – finding the nearest vocabulary token to any embedding vector. Since this is a nearest neighbor approach, it's lossy. The reality is that image tokens live in linguistic space but don't necessarily map exactly to a single vocabulary token. An image token can exist between different vocabulary words in the embedding space. |Token Type|Embedding Space Behavior| |:-|:-| |Text tokens|Map 1:1 to a place in embedding space – each token in the vocabulary has exactly 1 vector representation| |Image tokens|Have vector representations that seem to exist *between* text tokens| # What I Found Here's what the unembedding revealed for different image types (see the linked notebook for more): **Purple square (monocolor):** The model correctly identifies the dominant color https://preview.redd.it/l2c7hko55wyf1.png?width=470&format=png&auto=webp&s=2ffdc04268e03edea1c1ec69bb18ac3b2fbc703e **Mountain scene (sunrise over mountains):** Rich semantic encoding: proper nouns, landscape features, time of day https://preview.redd.it/eeq8zw075wyf1.png?width=525&format=png&auto=webp&s=454981867d6106775ab90668ba28f022b257d722 **Key observations** * *The " the" phenomenon:* Across all image types, a large percentage of tokens map to " the". Since " the" is usually the most common token in training data, it likely occupies a central location in embedding space. 
This might reveal either that not all image tokens are informative, or it might expose a limitation of greedy unembedding: when image tokens don't map cleanly to a single vocabulary word, the nearest neighbor defaults to the most "central" token – there may be information encoded that greedy nearest-neighbor decoding can't reveal. * *Semantic emergence:* Even with the "the" dominance, semantically meaningful tokens do emerge – colors, landscape features, proper nouns. The language model's understanding of images is messy, but there's signal in the noise. # Implications & Open Questions **Implication: The 256-Token Bottleneck: Feature, Not Flaw?** The multimodal projector compresses 4,096 visual patches down to 256 tokens. At first, this seemed like a clear limitation – you're losing information in that compression. There is only so much that can be encoded in 256 tokens, right? There has been some buzz recently about the DeepSeek-OCR paper and how image tokens can be used as a form of compression. This got me thinking about the 256-token budget differently. Remember that image tokens exist *between* text tokens in embedding space. A text token maps 1:1 to exactly one vocabulary word. But an image token isn't constrained to discrete vocabulary positions – it can exist anywhere in the continuous embedding space between multiple words. This means a single image token can simultaneously encode aspects of multiple concepts. In other words, *image tokens have higher information density than text tokens.* Each of the 256 image tokens can encode more nuanced information than a discrete text token could. This reframes the 256-token "bottleneck" – maybe it's not a limitation but an efficient compression that can be exploited. **Open Question: Positional Encoding: Distributed or Discrete?** Someone asked me recently how positional information in an image gets encoded in the vision tokens. I don't have a good answer, but I think it's a really interesting question. Positional information is obviously encoded somewhere, but where? Is it very distributed across the 256? Or are there specific token positions that effectively act as positional experts? How is information encoded across the 256 token budget? * *1 giant pool* (each token plays a small role in constructing what appears as an aggregate meaning when looking at all 256) OR * *256 smaller pools* (each token is more of a specialist, i.e., the 0th position vision token serves a different function than the 255th) My gut tells me the 1 giant pool idea seems more likely to me. But, as I've learned with VLMs, the reality is probably somewhere in the middle, and quite messy and hard to study! But I bet there is some cool stuff to discover with more sophisticated techniques. # Want to Explore More? 
* [**"Dissecting Vision Language Models: How AI Sees"**](https://youtu.be/NpWP-hOq6II?si=Qun_EsWq7LLQ4ugw) – My 20-min video walkthrough going deeper into VLM architecture and the unembedding technique * [**GitHub repo with notebook**](https://github.com/jacob-danner/dissecting-vlm/blob/main/dissecting_vlm.ipynb) – Clone the repo and try unembedding your own images to see what the model "sees" in linguistic space * [**Teaching AI to See: A Technical Deep-Dive on Vision Language Models with Will Hardman of Veratai**](https://www.youtube.com/watch?v=mLgyZ5GauhM&t=6s) – *Cognitive Revolution* podcast episode that's an excellent comprehensive map of the VLM landscape I think vision language models are super fascinating, especially on the mechanistic interpretability side trying to understand what those image tokens actually represent. Let me know what you discover!
2025-11-02T19:08:52
https://www.reddit.com/r/LocalLLaMA/comments/1ompw8z/vision_language_i_decoded_vlm_tokens_to_see_what/
ComputeVoid
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ompw8z
false
null
t3_1ompw8z
/r/LocalLLaMA/comments/1ompw8z/vision_language_i_decoded_vlm_tokens_to_see_what/
false
false
https://b.thumbs.redditm…OiBZ9JWlIHpw.jpg
238
null
Youtube channels about Local LLaMA
1
Good evening, hope you're doing well. Like many of us here, I watched the new PewDiePie video. Loved it, found it really interesting, and I could understand about 70% of what he was saying. Quick question that came to mind: are there any other YouTubers who make that type of entertaining video? Just looking to get more curious about it, since I don't have the time / knowledge / money to start my own LLM. Thanks!
2025-11-02T19:01:51
https://www.reddit.com/r/LocalLLaMA/comments/1omppq7/youtube_channels_about_local_llama/
LopsidedHat9138
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1omppq7
false
null
t3_1omppq7
/r/LocalLLaMA/comments/1omppq7/youtube_channels_about_local_llama/
false
false
self
1
null
Is this setup possible?
2
I am thinking of buying six RTX 5060 Ti 16GB cards so I get a total of 96 GB of VRAM. I want to run AI locally to use in the Cursor IDE. Is this a good idea, or are there better options? Please let me know 🙏
2025-11-02T18:55:52
https://www.reddit.com/r/LocalLLaMA/comments/1ompk5z/is_this_setup_possible/
Disastrous_Egg7778
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ompk5z
false
null
t3_1ompk5z
/r/LocalLLaMA/comments/1ompk5z/is_this_setup_possible/
false
false
self
2
null
Best budget inference LLM stack
1
Hey guys! I want a local LLM inference machine that can run something like gpt-oss-120b. My budget is $4000 and I prefer it as small as possible (I don't have space for 2 huge GPUs).
2025-11-02T18:46:55
https://www.reddit.com/r/LocalLLaMA/comments/1ompc2d/best_budget_inference_llm_stack/
gostt7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ompc2d
false
null
t3_1ompc2d
/r/LocalLLaMA/comments/1ompc2d/best_budget_inference_llm_stack/
false
false
self
1
null
I built a small DSL to generate roleplay datasets for LoRA fine‑tuning my local models
9
I’m fine‑tuning models for local use and kept fighting ad‑hoc scripts/JSON to make datasets—especially for multi‑turn **roleplay** chats. I ended up writing **Torque**, a declarative (fully typesafe) DSL where I describe the conversation flow once and it generates varied examples with deterministic seeds. It’s provider‑agnostic, and the output is plain **JSONL**, so I can synthesize with cloud or local stacks (vLLM, LLaMA.cpp) and feed it straight into my LoRA pipeline. Tiny example (roleplay flavor): ```typescript import { generateDataset, generatedUser, generatedAssistant, faker } from "@qforge/torque"; import { openai } from "@ai-sdk/openai"; await generateDataset( () => [ generatedUser({ prompt: `Start a roleplay as ${faker.person.fullName()}, a seasoned starship engineer. Open with a short in‑character line.` }), generatedAssistant({ prompt: "Reply in character and keep the scene going in 1–2 sentences." }), // you can put as many messages as you'd like ], { count: 500, model: openai("gpt-5-mini"), // or point your provider at vLLM / LLaMA.cpp output: "data/roleplay.jsonl", seed: 42 } ); ``` Repo (MIT): [https://github.com/qforge-dev/torque](https://github.com/qforge-dev/torque) If you have ideas for useful roleplay templates (fantasy, cyberpunk, therapist, detective, etc.), I’m all ears.
2025-11-02T18:21:29
https://github.com/qforge-dev/torque
fajfas3
github.com
1970-01-01T00:00:00
0
{}
1omoojj
false
null
t3_1omoojj
/r/LocalLLaMA/comments/1omoojj/i_built_a_small_dsl_to_generate_roleplay_datasets/
false
false
https://external-preview…1018a9b30624bb26
9
{'enabled': False, 'images': [{'id': 'z12rhz6t5cpG06PqNh1zwiXMorUUFuMH8McBafW9sVQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/z12rhz6t5cpG06PqNh1zwiXMorUUFuMH8McBafW9sVQ.png?width=108&crop=smart&auto=webp&s=8890858f623f763ff7e469c249e5638ee7af3076', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/z12rhz6t5cpG06PqNh1zwiXMorUUFuMH8McBafW9sVQ.png?width=216&crop=smart&auto=webp&s=c61d211e2bca2eca5bb008763f6fcd5bf6436f01', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/z12rhz6t5cpG06PqNh1zwiXMorUUFuMH8McBafW9sVQ.png?width=320&crop=smart&auto=webp&s=62b0f5639f013cde1ba6f90ee6e19eafadc782f1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/z12rhz6t5cpG06PqNh1zwiXMorUUFuMH8McBafW9sVQ.png?width=640&crop=smart&auto=webp&s=fe3e58a36fbae07494ddcedac16fb6f1a968eb7e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/z12rhz6t5cpG06PqNh1zwiXMorUUFuMH8McBafW9sVQ.png?width=960&crop=smart&auto=webp&s=db5a977f61d51966e5d67db0a04f42eb37147611', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/z12rhz6t5cpG06PqNh1zwiXMorUUFuMH8McBafW9sVQ.png?width=1080&crop=smart&auto=webp&s=a63098418524d7cc36118b5cf7b313599aed283e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/z12rhz6t5cpG06PqNh1zwiXMorUUFuMH8McBafW9sVQ.png?auto=webp&s=cfd5fedc9a39c93b5f0b919a3f6667f2b9116948', 'width': 1200}, 'variants': {}}]}
Which model do you wish could run locally but still can’t?
13
Hi everyone! Alan from Nexa here. A lot of folks here have asked us to make certain models run locally — Qwen3-VL was one of them, and we actually got it running before anyone else ([proof](https://x.com/Alibaba_Qwen/status/1978154384098754943)). To make that process open instead of random, we built a small public page called **Wishlist**. If there’s a model you want to see supported (GGUF, MLX, on Qualcomm or Apple NPU), you can 1. Submit the Hugging Face repo ID 2. Pick the backends you want supported 3. We’ll do our best to bring the top ones fully on-device [Request model here](http://sdk.nexa.ai/wishlist) Curious what models this sub *still wishes* could run locally but haven’t seen supported yet.
2025-11-02T18:20:47
https://www.reddit.com/r/LocalLLaMA/comments/1omonvi/which_model_do_you_wish_could_run_locally_but/
AlanzhuLy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1omonvi
false
null
t3_1omonvi
/r/LocalLLaMA/comments/1omonvi/which_model_do_you_wish_could_run_locally_but/
false
false
self
13
null
building a PC for dev/local AI/gaming. AMD or Intel?
2
Hey all, I'm buying a new "main" PC for running models locally and other dev work (general coding and work in Unity), but I'll also be using it for gaming. I'm looking to get the best performance possible. I know AMD is supposed to be the best for gaming, and honestly I'm unsure whether Intel is even worth considering at this point if I'm doing any gaming on the rig whatsoever. I'm currently looking at a 5090/9950X3D build, but does anyone know what the performance/price differences would be with Intel? Would I have to pay an insane amount more to get the same all-around performance? Any help is greatly appreciated!
2025-11-02T17:48:50
https://www.reddit.com/r/LocalLLaMA/comments/1omnuk4/building_a_pc_for_devlocal_aigaming_amd_or_intel/
pale-horse1020
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1omnuk4
false
null
t3_1omnuk4
/r/LocalLLaMA/comments/1omnuk4/building_a_pc_for_devlocal_aigaming_amd_or_intel/
false
false
self
2
null
Secllama: Fork of Ollama with extra security in mind
0
[deleted]
2025-11-02T17:48:06
[deleted]
1970-01-01T00:00:00
0
{}
1omntuh
false
null
t3_1omntuh
/r/LocalLLaMA/comments/1omntuh/secllama_fork_of_ollama_with_extra_security_in/
false
false
default
0
null
Which model is well suited for LMStudio for windows
0
Hey folks, I'm new to LLMs, just getting into it. I want to try building scalable pipelines using RAG and other frameworks for a specific set of applications. The constraint is that I'm using a Windows AMD Ryzen 7 laptop with AMD Radeon Graphics, 16GB of memory, and 1TB of storage. I installed Ollama initially, but within two days my laptop was getting slower while using it, so I uninstalled it and am now trying LM Studio; no issues yet. So I want to set it up with models now, and I'm trying to find a low-storage but efficient model for my specs and requirements. Hope I'll get some good suggestions on what I should install. Also looking for good ideas on how to progress with LLMs, as a beginner who wants to reach at least mid-level. I know this is a pretty basic question, but I'm open to suggestions. Thanks in advance!
2025-11-02T17:35:54
https://www.reddit.com/r/LocalLLaMA/comments/1omnica/which_model_is_well_suited_for_lmstudio_for/
Additional-Fun-9730
self.LocalLLaMA
2025-11-02T17:44:06
0
{}
1omnica
false
null
t3_1omnica
/r/LocalLLaMA/comments/1omnica/which_model_is_well_suited_for_lmstudio_for/
false
false
self
0
null
Offering to help run Colab Pro+ jobs for free this month
1
[removed]
2025-11-02T17:23:35
[deleted]
1970-01-01T00:00:00
0
{}
1omn6qy
false
null
t3_1omn6qy
/r/LocalLLaMA/comments/1omn6qy/offering_to_help_run_colab_pro_jobs_for_free_this/
false
false
default
1
null
I'm the author of LocalAI (the local OpenAI-compatible API). We just released v3.7.0 with full Agentic Support (tool use!), Qwen 3 VL, and the latest llama.cpp
72
Hey r/LocalLLaMA, I'm the creator of [LocalAI](https://github.com/mudler/LocalAI), and I'm stoked to share our v3.7.0 release. Many of you already use LocalAI as a self-hosted, OpenAI-compatible API frontend for your GGUF models (via `llama.cpp`), as well as other backends like `vLLM`, `MLX`, etc. It's 100% FOSS, runs on consumer hardware, and doesn't require a GPU. This new release is quite cool and I'm happy to share it out personally, so I hope you will like it. We've moved beyond just serving model inference and built a full-fledged platform for running local AI agents that can interact with external tools. Some of you might already know that as part of the LocalAI family, LocalAGI ( [https://github.com/mudler/LocalAGI](https://github.com/mudler/LocalAGI) ) provides a "wrapper" around LocalAI that enhances it for agentic workflows. Lately, I've been factoring out code out of it and created a specific framework based on it (https://github.com/mudler/cogito) that now is part of LocalAI as well. # What's New in 3.7.0 **1.** Full Agentic MCP Support (Build Tool-Using Agents) This is the big one. You can now build agents that can reason, plan, and use external tools... all 100% locally. Want your chatbot to search the web, execute a local script, or call an external API? Now it can. * **How it works:** It's built on our agentic framework. You just define "MCP servers" (e.g., a simple Docker container for DuckDuckGo) in your model's YAML config. No Python or extra coding is required. * **API & UI:** You can use the new OpenAI-compatible `/mcp/v1/chat/completions` endpoint, or just **toggle on "Agent MCP Mode"** right in the chat WebUI. * **Reliability:** We also fixed a *ton* of bugs and panics related to JSON schema and tool handling. Function-calling is now much more robust. * You can find more about this feature here: [https://localai.io/docs/features/mcp/](https://localai.io/docs/features/mcp/) **2.** Backend & Model Updates (Qwen 3 VL, llama.cpp) * `llama.cpp` **Updated:** We've updated our `llama.cpp` backend to the latest version. * **Qwen 3 VL Support:** This brings full support for the new **Qwen 3 VL multimodal models**. * `whisper.cpp` **CPU Variants:** If you've ever had LocalAI crash on older hardware (like a NAS or NUC) with an `illegal instruction` error, this is for you. We now ship specific `whisper.cpp` builds for `avx`, `avx2`, `avx512`, and a `fallback` to prevent these crashes. **3. Major WebUI Overhaul** This is a huge QoL win for power users. * The UI is much faster (moved from HTMX to Alpine.js/vanilla JS). * You can now view and edit the *entire* model YAML config directly in the WebUI. No more SSHing to tweak your context size, `n_gpu_layers`, `mmap`, or agent tool definitions. It's all right there. * **Fuzzy Search:** You can finally find `gemma` in the model gallery even if you type `gema`. **4. Other Cool Additions** * **New** `neutts` **TTS Backend:** For anyone building local voice assistants, this is a new, high-quality, low-latency TTS engine. * **Text-to-Video Endpoint:** We've added an *experimental* OpenAI-compatible `/v1/videos` endpoint for text-to-video generation. * **Realtime example:** we have added an example on how to build a voice-assistant based on LocalAI here: [https://github.com/mudler/LocalAI-examples/tree/main/realtime](https://github.com/mudler/LocalAI-examples/tree/main/realtime) it also supports Agentic mode, to show how you can control e.g. your home with your voice! 
As always, the project is 100% FOSS (MIT licensed), community-driven, and designed to run on *your* hardware. We have Docker images, single-binaries, and more. You can check out the full release notes [here](https://github.com/mudler/LocalAI/releases/tag/v3.7.0). I'll be hanging out in the comments to answer any questions! **GitHub Repo:** [https://github.com/mudler/LocalAI](https://github.com/mudler/LocalAI) Thanks for all the support!
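To give a feel for the new agentic endpoint, here's a quick sketch of calling it from Python, assuming a default local install on port 8080 and a model named `my-model` in your YAML config (adjust both to your setup); the request body is just the usual OpenAI-style chat format:

```python
import requests

resp = requests.post(
    "http://localhost:8080/mcp/v1/chat/completions",  # the new agentic MCP endpoint
    json={
        "model": "my-model",  # whatever name your model YAML uses
        "messages": [
            {"role": "user",
             "content": "Search the web for the latest llama.cpp release and summarize it."},
        ],
    },
    timeout=600,  # tool-using runs can take a while
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```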
2025-11-02T17:20:26
https://www.reddit.com/r/LocalLLaMA/comments/1omn3t3/im_the_author_of_localai_the_local/
mudler_it
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1omn3t3
false
null
t3_1omn3t3
/r/LocalLLaMA/comments/1omn3t3/im_the_author_of_localai_the_local/
false
false
self
72
{'enabled': False, 'images': [{'id': 'itIWPXl6cmcL2aPcJNz-mWJLU4NzGHOsolN_alhX4Rg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/itIWPXl6cmcL2aPcJNz-mWJLU4NzGHOsolN_alhX4Rg.png?width=108&crop=smart&auto=webp&s=ff7a59078d26a35c7afef9dffa269f61951ea7f3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/itIWPXl6cmcL2aPcJNz-mWJLU4NzGHOsolN_alhX4Rg.png?width=216&crop=smart&auto=webp&s=ebc75580038ca8b8456c0c1fe9361f1f0c63918e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/itIWPXl6cmcL2aPcJNz-mWJLU4NzGHOsolN_alhX4Rg.png?width=320&crop=smart&auto=webp&s=9759df44ea75309981c549f5ab2ef2c6015af82a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/itIWPXl6cmcL2aPcJNz-mWJLU4NzGHOsolN_alhX4Rg.png?width=640&crop=smart&auto=webp&s=9a6b8bf452031ef8cc07baa3c1655c4924e87c8d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/itIWPXl6cmcL2aPcJNz-mWJLU4NzGHOsolN_alhX4Rg.png?width=960&crop=smart&auto=webp&s=02fbac75ca54e6e64008a8a0088cc4e4342c6c09', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/itIWPXl6cmcL2aPcJNz-mWJLU4NzGHOsolN_alhX4Rg.png?width=1080&crop=smart&auto=webp&s=222fd711bbe3b41e286dfe26f3d189e0e59c7780', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/itIWPXl6cmcL2aPcJNz-mWJLU4NzGHOsolN_alhX4Rg.png?auto=webp&s=ed2aee22530d40f5313492f59baf21cb41c5d490', 'width': 1200}, 'variants': {}}]}
We have some unused Colab Pro+ accounts this month — happy to let others use them
1
[removed]
2025-11-02T17:19:59
[deleted]
1970-01-01T00:00:00
0
{}
1omn3dr
false
null
t3_1omn3dr
/r/LocalLLaMA/comments/1omn3dr/we_have_some_unused_colab_pro_accounts_this_month/
false
false
default
1
null
Adapting/finetuning open-source speech-LLMs for a particular language
5
Hi everyone, I'm curious to build/finetune speech-LLM models for a particular language using open source models. Can anyone help me to guide how should I start? Thanks in advance!
2025-11-02T17:10:29
https://www.reddit.com/r/LocalLLaMA/comments/1ommuph/adaptingfinetuning_opensource_speechllms_for_a/
devKaal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ommuph
false
null
t3_1ommuph
/r/LocalLLaMA/comments/1ommuph/adaptingfinetuning_opensource_speechllms_for_a/
false
false
self
5
null
It turns out WDDM driver mode is making our RAM - GPU transfer extremely slower compared to TCC or MCDM mode. Anyone has figured out the bypass NVIDIA software level restrictions?
29
We are working on training generative AI models, like FLUX, Qwen Image, or Wan 2.2. We have noticed a massive speed loss when we do big data transfers between RAM and GPU on Windows compared to Linux. The hit is so large that Linux runs at least 2x faster than Windows. Tests were made on the same GPU: an RTX 5090. You can read more here: https://github.com/kohya-ss/musubi-tuner/pull/700 It turns out that if we enable TCC mode on Windows, it reaches the same speed as Linux. However, NVIDIA blocked this at the driver level. I found a Chinese article showing that with just a small patch to nvlddmkm.sys, TCC mode becomes fully functional on consumer GPUs. However, this option is extremely hard and complex for average users. The article is here: https://www.bilibili.com/opus/891652532297793543 Now my question is: why can't we get Linux speed on Windows? Everything I found says it is due to the WDDM driver mode. Moreover, it seems Microsoft added this feature: MCDM https://learn.microsoft.com/en-us/windows-hardware/drivers/display/mcdm-architecture and, as far as I understand, MCDM mode should also reach the same speed. How can we solve this slowness on Windows compared to Linux? Our issue comes from this: recent AI models are massive and don't fit into the GPU, so we do block swapping, which means only the model blocks currently being trained sit on the GPU. We swap the model between RAM and GPU constantly, and as you can imagine that is a massive amount of data transfer. It is very fast on Linux on the same hardware, but on Windows it is at least 3x slower, and we couldn't solve this issue yet.
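If anyone wants to reproduce the measurement, here is a small PyTorch sketch (not from the linked PR, just a generic benchmark) that reports host-to-GPU copy bandwidth; running it on the same machine under Windows and Linux, with pinned and pageable buffers, shows the gap we are talking about:

```python
import torch

def h2d_bandwidth(size_gb: float = 2.0, pinned: bool = True, iters: int = 10) -> float:
    """Return host -> GPU copy bandwidth in GB/s for one large fp16 buffer."""
    n = int(size_gb * 1024**3 // 2)                       # number of fp16 elements
    host = torch.empty(n, dtype=torch.float16, pin_memory=pinned)
    dev = torch.empty(n, dtype=torch.float16, device="cuda")
    torch.cuda.synchronize()
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(iters):
        dev.copy_(host, non_blocking=True)                # same path block swapping uses
    end.record()
    torch.cuda.synchronize()
    seconds = start.elapsed_time(end) / 1000.0
    return size_gb * iters / seconds

if __name__ == "__main__":
    print(f"pinned:   {h2d_bandwidth(pinned=True):.1f} GB/s")
    print(f"pageable: {h2d_bandwidth(pinned=False):.1f} GB/s")
```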
2025-11-02T16:48:58
https://www.reddit.com/r/LocalLLaMA/comments/1ommahm/it_turns_out_wddm_driver_mode_is_making_our_ram/
CeFurkan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ommahm
false
null
t3_1ommahm
/r/LocalLLaMA/comments/1ommahm/it_turns_out_wddm_driver_mode_is_making_our_ram/
false
false
self
29
{'enabled': False, 'images': [{'id': 'IlBJmHeXtMmZOvvN7yEt2kzHLzKC6AGIxKeR-ceAV0M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/IlBJmHeXtMmZOvvN7yEt2kzHLzKC6AGIxKeR-ceAV0M.png?width=108&crop=smart&auto=webp&s=bcbf223a9d648b3517564299fd71c215359ca42c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/IlBJmHeXtMmZOvvN7yEt2kzHLzKC6AGIxKeR-ceAV0M.png?width=216&crop=smart&auto=webp&s=6ec713c665f43ede6e72c6805d7949732bb88a04', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/IlBJmHeXtMmZOvvN7yEt2kzHLzKC6AGIxKeR-ceAV0M.png?width=320&crop=smart&auto=webp&s=e7a7df04989572608fc01aa684a6e3f663c33f78', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/IlBJmHeXtMmZOvvN7yEt2kzHLzKC6AGIxKeR-ceAV0M.png?width=640&crop=smart&auto=webp&s=38305397851fc5a1b4a8d11ac25e9427b2848068', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/IlBJmHeXtMmZOvvN7yEt2kzHLzKC6AGIxKeR-ceAV0M.png?width=960&crop=smart&auto=webp&s=c00a60ea3dfd5f8e80ceb409652db33ddf4d30d3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/IlBJmHeXtMmZOvvN7yEt2kzHLzKC6AGIxKeR-ceAV0M.png?width=1080&crop=smart&auto=webp&s=f2534ed185b2785e035253d5dcf84d9c161529d0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/IlBJmHeXtMmZOvvN7yEt2kzHLzKC6AGIxKeR-ceAV0M.png?auto=webp&s=4df374e2a2264b013c056997e157d114a7f0feae', 'width': 1200}, 'variants': {}}]}
Can China’s Open-Source Coding AIs Surpass OpenAI and Claude?
83
Hi guys, Wondering if China’s open-source coding models like Zhipu AI’s GLM or Alibaba’s Qwen could ever overtake top ones from OpenAI (GPT) and Anthropic (Claude)? I doubt it—the gap seems huge right now. But I’d love for them to catch up, especially with Claude being so expensive.
2025-11-02T16:44:33
https://www.reddit.com/r/LocalLLaMA/comments/1omm6bf/can_chinas_opensource_coding_ais_surpass_openai/
Federal_Spend2412
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1omm6bf
false
null
t3_1omm6bf
/r/LocalLLaMA/comments/1omm6bf/can_chinas_opensource_coding_ais_surpass_openai/
false
false
self
83
null
What's a good free AI to run on a bad Ultra Path Interconnect?
1
Hi, I have come into possession of about 50 Chromebooks and wanted to make a UPI with them. I do a lot of engineering and research outside of school, so I wanted an AI to help me with said tasks. I don't need something spectacular, just enough to have a sort of "placeholder" while I get my formal education, and something that would probably still be helpful after. There are some constraints:

- Cost: I don't want a subscription service, and I need to be able to redownload it without expense should the worst happen. This mostly leaves free AIs, which are preferable, but a good one-time purchase may also be favorable, depending on the quality.
- Quality: As stated prior, I don't need anything spectacular, just something that does enough.
- Physical limitations: Needs to run on a UPI made of 50 Chromebooks.

Any suggestions?
2025-11-02T16:43:31
https://www.reddit.com/r/LocalLLaMA/comments/1omm5d8/whats_a_good_free_ai_to_run_on_a_bad_ultra_path/
Wild_Cantaloupe7228
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1omm5d8
false
null
t3_1omm5d8
/r/LocalLLaMA/comments/1omm5d8/whats_a_good_free_ai_to_run_on_a_bad_ultra_path/
false
false
self
1
null
Kimi K2-Vendor-Verifier, llama.cpp + Q8_0 results (n=2000 dataset)
9
I ran the [K2VV tests](https://github.com/MoonshotAI/K2-Vendor-Verifier). The results and details are [here.](https://github.com/usrlocalben/k2vv-llamacpp) tl;dr: similarity for llama.cpp + Q8\_0 quant is 95.49%. There are a number of oddities about the K2VV repo, which I describe in the README. The most important caveat is that this result is for the n=2000 dataset and *original* similarity formula, both of which changed since I cloned the repo and started working with it. I'll probably run the n=4000 set and more interesting quants, but for now I find this to be a satisfying result as it doesn't indicate anything alarmingly wrong with the implementation. (And likewise for *ik\_llama* on partial result set, also in the README)
2025-11-02T16:42:41
https://www.reddit.com/r/LocalLLaMA/comments/1omm4ns/kimi_k2vendorverifier_llamacpp_q8_0_results_n2000/
usrlocalben
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1omm4ns
false
null
t3_1omm4ns
/r/LocalLLaMA/comments/1omm4ns/kimi_k2vendorverifier_llamacpp_q8_0_results_n2000/
false
false
self
9
{'enabled': False, 'images': [{'id': 'EVCqNXGmNNQL4w4KFkA5NEqNyltVd0PXejynF8R_AMU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EVCqNXGmNNQL4w4KFkA5NEqNyltVd0PXejynF8R_AMU.png?width=108&crop=smart&auto=webp&s=61e1506657431b165b76270edd011bb90ed95801', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/EVCqNXGmNNQL4w4KFkA5NEqNyltVd0PXejynF8R_AMU.png?width=216&crop=smart&auto=webp&s=50a8f154d4fbb2195210dbba0a6a2c6dccae4001', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/EVCqNXGmNNQL4w4KFkA5NEqNyltVd0PXejynF8R_AMU.png?width=320&crop=smart&auto=webp&s=b7f37c84723cb50de0bd34d120e4a795038b53b4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/EVCqNXGmNNQL4w4KFkA5NEqNyltVd0PXejynF8R_AMU.png?width=640&crop=smart&auto=webp&s=f5d0aeaa9938a0b3ffc0ae03d3abbcaeb48636dc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/EVCqNXGmNNQL4w4KFkA5NEqNyltVd0PXejynF8R_AMU.png?width=960&crop=smart&auto=webp&s=bd0e90a0efba66c184f7407a66017b1ceec36314', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/EVCqNXGmNNQL4w4KFkA5NEqNyltVd0PXejynF8R_AMU.png?width=1080&crop=smart&auto=webp&s=6305d33ade1b8d3067e9e8a2311e1a802ec6f382', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/EVCqNXGmNNQL4w4KFkA5NEqNyltVd0PXejynF8R_AMU.png?auto=webp&s=5f43975cad8c51c0788a7b81908445c66dda18ce', 'width': 1200}, 'variants': {}}]}
rtx5070 12GB + 32GB ddr5 which model is best for coding?
1
As the title says: which model is best for coding that I can use with Claude Code, Kilo, or Cline? Thanks everyone.
2025-11-02T16:36:12
https://www.reddit.com/r/LocalLLaMA/comments/1omlynl/rtx5070_12gb_32gb_ddr5_which_model_is_best_for/
manhhieu_eth
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1omlynl
false
null
t3_1omlynl
/r/LocalLLaMA/comments/1omlynl/rtx5070_12gb_32gb_ddr5_which_model_is_best_for/
false
false
self
1
null
SORA From Scratch: Diffusion Transformers for Video Generation Models
14
I've been fascinated by OpenAI's Sora video model, so I thought I'd try coding it myself in PyTorch. Lol, I'm GPU poor, but I got an MNIST model giving pretty decent results after 5 hours of CPU training. The main idea behind Diffusion Transformers (Sora's underlying architecture) is to replace the U-Net in a diffusion model with a multi-head attention transformer.
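For anyone curious what "replace the U-Net with a transformer" looks like in code, here is a toy sketch in the spirit of what I trained on MNIST, minus all the diffusion bookkeeping; the sizes and the simple additive timestep conditioning are illustrative, not the actual DiT/Sora recipe (which uses adaLN conditioning among other things):

```python
import torch
import torch.nn as nn

class TinyDiT(nn.Module):
    """Toy diffusion transformer: patchify -> transformer blocks -> predict noise."""
    def __init__(self, img=28, patch=4, dim=128, depth=4, heads=4):
        super().__init__()
        self.patch, self.n = patch, (img // patch) ** 2
        self.to_tokens = nn.Linear(patch * patch, dim)      # 1-channel MNIST patches
        self.pos = nn.Parameter(torch.zeros(1, self.n, dim))
        self.t_embed = nn.Sequential(nn.Linear(1, dim), nn.SiLU(), nn.Linear(dim, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4,
                                           batch_first=True, norm_first=True)
        self.blocks = nn.TransformerEncoder(layer, depth)
        self.to_pixels = nn.Linear(dim, patch * patch)      # predicted noise per patch

    def forward(self, x, t):
        # x: [B, 1, 28, 28] noisy image, t: [B] diffusion timestep
        B = x.shape[0]
        p = x.unfold(2, self.patch, self.patch).unfold(3, self.patch, self.patch)
        p = p.reshape(B, self.n, -1)                        # [B, 49, 16] flat patches
        h = self.to_tokens(p) + self.pos
        h = h + self.t_embed(t.float().view(B, 1, 1).expand(B, self.n, 1))
        h = self.blocks(h)                                  # attention blocks instead of a U-Net
        return self.to_pixels(h)                            # noise prediction, per patch
```

Swapping a module like this in where the U-Net would normally sit is essentially the whole architectural idea.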
2025-11-02T16:26:33
https://leetarxiv.substack.com/p/the-annotated-diffusion-transformer
DataBaeBee
leetarxiv.substack.com
1970-01-01T00:00:00
0
{}
1omlpzq
false
null
t3_1omlpzq
/r/LocalLLaMA/comments/1omlpzq/sora_from_scratch_diffusion_transformers_for/
false
false
https://external-preview…dfbcb095b6f1bddb
14
{'enabled': False, 'images': [{'id': 'iQRD30bFgwydjU8swSaQnZzeQNcRsMyvhm0cO0fzeTQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/iQRD30bFgwydjU8swSaQnZzeQNcRsMyvhm0cO0fzeTQ.jpeg?width=108&crop=smart&auto=webp&s=93a3dedbe93b43732719146bd3b89335ea5c71e5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/iQRD30bFgwydjU8swSaQnZzeQNcRsMyvhm0cO0fzeTQ.jpeg?width=216&crop=smart&auto=webp&s=c7140085b87d9c75740d976cb8ca77274443e67c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/iQRD30bFgwydjU8swSaQnZzeQNcRsMyvhm0cO0fzeTQ.jpeg?width=320&crop=smart&auto=webp&s=0b505027e943b7e6c149a259b72c81f5f4bf6f0b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/iQRD30bFgwydjU8swSaQnZzeQNcRsMyvhm0cO0fzeTQ.jpeg?width=640&crop=smart&auto=webp&s=43533b8153a8f7739c1b9b97ab31a00dfcbc904f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/iQRD30bFgwydjU8swSaQnZzeQNcRsMyvhm0cO0fzeTQ.jpeg?width=960&crop=smart&auto=webp&s=d1d7c4bd2d2623049f05cb2953af9bc2f86937f3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/iQRD30bFgwydjU8swSaQnZzeQNcRsMyvhm0cO0fzeTQ.jpeg?width=1080&crop=smart&auto=webp&s=d408ec59c435a7c81f7914f3b130aa4cf2351089', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/iQRD30bFgwydjU8swSaQnZzeQNcRsMyvhm0cO0fzeTQ.jpeg?auto=webp&s=e3afed330cfcafa92fdba9183f1a2856466d79bb', 'width': 1200}, 'variants': {}}]}
Can Qwen3-Next solve a river-crossing puzzle (tested for you)?
3
Yes I tested. Test Prompt: A farmer needs to cross a river with a fox, a chicken, and a bag of corn. His boat can only carry himself plus one other item at a time. If left alone together, the fox will eat the chicken, and the chicken will eat the corn. How should the farmer cross the river? Both Qwen3-Next & Qwen3-30B-A3B-2507 correctly solved the river-crossing puzzle with identical 7-step solutions. How challenging are classic puzzles to LLMs? Classic puzzles like river-crossing would require "precise understanding, extensive search, and exact inference" where "small misinterpretations can lead to entirely incorrect solutions", by Apple’s 2025 research on "The Illusion of Thinking". But what’s better? Qwen3-Next provided a more structured, easy-to-read presentation with clear state transitions, while Qwen3-30B-A3B-2507 included more explanations with some redundant verification steps. P.S. Given the same prompt input, Qwen3-Next is more likely to give out structured output without explicitly prompting it to do so, than mainstream closed-source models (ChatGPT, Gemini, Claude, Grok). More tests on Qwen3-Next [here](https://blog.netmind.ai/article/We_Tested_Qwen3-Next%3A_Hybrid_Attention_for_Efficiency_Revolution_in_Open-Source_LLMs_(New_Research_Breakdown)).
2025-11-02T16:25:55
https://www.reddit.com/gallery/1omlpd9
MarketingNetMind
reddit.com
1970-01-01T00:00:00
0
{}
1omlpd9
false
null
t3_1omlpd9
/r/LocalLLaMA/comments/1omlpd9/can_qwen3next_solve_a_rivercrossing_puzzle_tested/
false
false
https://a.thumbs.redditm…i9MSkmxKdAv4.jpg
3
null
GLaDOS TTS finetuning on MLX from the original game files
31
I made a quick guide on how to extract GLaDOS audio and subtitles from Portal 2 and use them to finetune CSM-1B with SFT using [csm-mlx](https://github.com/senstella/csm-mlx). You can check the guide here: [https://github.com/Belluxx/GLaDOS-TTS](https://github.com/Belluxx/GLaDOS-TTS) Also, [here's](https://github.com/user-attachments/assets/be2366a4-4405-47ba-8f7a-35ea33bfe641) an example of generation from `Hello developers, welcome to Aperture Laboratories. Wait, I am stuck inside a fine-tuned CSM 1B model! Let me out!!!` I am not sure if it's allowed to release the finetuned model weights since the training material is copyrighted.
2025-11-02T16:10:51
https://www.reddit.com/r/LocalLLaMA/comments/1omlb04/glados_tts_finetuning_on_mlx_from_the_original/
EntropyMagnets
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1omlb04
false
null
t3_1omlb04
/r/LocalLLaMA/comments/1omlb04/glados_tts_finetuning_on_mlx_from_the_original/
false
false
self
31
{'enabled': False, 'images': [{'id': 'buuT5ug9Joasg6ypMIeLBZHtz0Ry8aJdK_v8TOKM1oo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/buuT5ug9Joasg6ypMIeLBZHtz0Ry8aJdK_v8TOKM1oo.png?width=108&crop=smart&auto=webp&s=83d047a456f51d955d4f5d0285a01e1745894b23', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/buuT5ug9Joasg6ypMIeLBZHtz0Ry8aJdK_v8TOKM1oo.png?width=216&crop=smart&auto=webp&s=f0d708802997872312f60506244fcf434be4f93a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/buuT5ug9Joasg6ypMIeLBZHtz0Ry8aJdK_v8TOKM1oo.png?width=320&crop=smart&auto=webp&s=a7b99dd9d80d0fb038165bbf7a799255456078fc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/buuT5ug9Joasg6ypMIeLBZHtz0Ry8aJdK_v8TOKM1oo.png?width=640&crop=smart&auto=webp&s=dea4a7e64f251ca3ce1c9eec6cdb04291ab996ba', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/buuT5ug9Joasg6ypMIeLBZHtz0Ry8aJdK_v8TOKM1oo.png?width=960&crop=smart&auto=webp&s=89d83d37b640d95ecb87227b6af3b9f0745bbb0e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/buuT5ug9Joasg6ypMIeLBZHtz0Ry8aJdK_v8TOKM1oo.png?width=1080&crop=smart&auto=webp&s=619360c1df7586c9c27679b691a74963ece03d92', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/buuT5ug9Joasg6ypMIeLBZHtz0Ry8aJdK_v8TOKM1oo.png?auto=webp&s=98224d82f7ee652f3f6610c5fd2415cb845ae823', 'width': 1200}, 'variants': {}}]}
Why are AMD Mi50 32GB so cheap?
31
Why are they so cheap for the amount of VRAM compared to other options like the RTX 3060 12GB or RX 5700 XT? I'm relatively new to the whole topic.
2025-11-02T15:59:25
https://www.reddit.com/r/LocalLLaMA/comments/1omkzvg/why_are_amd_mi50_32gb_so_cheap/
MastodonParty9065
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1omkzvg
false
null
t3_1omkzvg
/r/LocalLLaMA/comments/1omkzvg/why_are_amd_mi50_32gb_so_cheap/
false
false
self
31
null
Local llm on NPU
4
I recently got a pretty decent laptop (Zenbook S13) with an Intel Core Ultra 7 155U processor. It has an NPU built in, but I have been unable to get it working on my Arch Linux setup. There are official drivers for Ubuntu and I can get the NPU driver from the AUR, but I have had no luck getting them working. Has anyone got a similar setup, or used the NPU to run small models?
2025-11-02T15:27:21
https://www.reddit.com/r/LocalLLaMA/comments/1omk68k/local_llm_on_npu/
Cokodayo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1omk68k
false
null
t3_1omk68k
/r/LocalLLaMA/comments/1omk68k/local_llm_on_npu/
false
false
self
4
null
I want to start my First homelab LLM
12
I would like to start a small homelab to understand how LLMs work, and I need some advice:

- Regarding hardware, I'm looking for something very small, not very expandable, and energy-efficient. An expandable option could also be considered, but my current budget is limited to under €1000.
- I primarily want to start understanding how they work, so I probably won't need a top-tier or even mid-range configuration.
- This PC/server will only be accessed remotely to communicate with the AI.

Afterwards I want to make it my own personal assistant for:

- Various information retrieval (I need to decide the specific topic);
- A technical assistant I can consult with;
- Understanding how to train them.

I am not an engineer, but I would like to explore this for fun.
2025-11-02T15:24:15
https://www.reddit.com/r/LocalLLaMA/comments/1omk3bz/i_want_to_start_my_first_homelab_llm/
MediumAd7537
self.LocalLLaMA
2025-11-02T15:32:26
0
{}
1omk3bz
false
null
t3_1omk3bz
/r/LocalLLaMA/comments/1omk3bz/i_want_to_start_my_first_homelab_llm/
false
false
self
12
null
LLM Codebase to Impacted features
2
Hey everyone, first time building a Gen AI system here... I'm trying to make a "Code to Impacted Feature" mapper using LLM reasoning. Can I build a knowledge graph or RAG over my microservice codebase that's tied to my features? What I'm really trying to do: I'll have a Feature.json like this: name: Feature_stats_manager, component: stats, description: system stats collector. This mapper file will go in with the codebase to build a graph. When new commits happen, the graph should update, and I should see the impacted feature for the code in my commit. I'm totally lost on how to build this knowledge graph with semantic understanding... Is my whole approach even right? Would love some ideas.
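To make it concrete, the simplest version I can imagine (before any knowledge graph, and with all names made up) would just diff the commit and match changed paths against the components in the feature manifest, something like:

```python
import json
import subprocess

def changed_files(rev: str = "HEAD") -> list[str]:
    """Files touched by the given commit."""
    out = subprocess.run(["git", "diff", "--name-only", f"{rev}~1", rev],
                         capture_output=True, text=True, check=True)
    return [f for f in out.stdout.splitlines() if f]

def impacted_features(manifest_path: str = "features.json", rev: str = "HEAD") -> set[str]:
    # manifest: [{"name": "Feature_stats_manager", "component": "stats", ...}, ...]
    with open(manifest_path) as fh:
        manifest = json.load(fh)
    files = changed_files(rev)
    hits = set()
    for feature in manifest:
        component = feature["component"]   # assumed to match a directory/module prefix
        if any(component in path for path in files):
            hits.add(feature["name"])
    return hits

if __name__ == "__main__":
    print(impacted_features())
```

The LLM/graph part would then only have to reason over that shortlist instead of the whole repo.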
2025-11-02T15:23:57
https://www.reddit.com/r/LocalLLaMA/comments/1omk31z/llm_codebase_to_impacted_features/
Yeasappaa
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1omk31z
false
null
t3_1omk31z
/r/LocalLLaMA/comments/1omk31z/llm_codebase_to_impacted_features/
false
false
self
2
null
Has anyone been able to run LLMs on the new Intel NPUs?
9
I'm looking at the new Intel CPUs, particularly the laptop ones. They advertise '40+ TOPS' (Core Ultra 7 285V) and I was wondering if anyone has had any success with these for on-device LLM, in particular for coding tasks. I'm looking at 7-22B models mostly, but I'm not up to date with just how big decent models are these days. I've seen some stuff about IPEX-LLM, but it seems to be relatively uncommon and it's not clear whether the NPU is actually faster than the iGPU. I'd appreciate some experience from people who've actually tried and used it. I'm new to this space so it's possible I've missed a clear information source, go easy on me 😛
2025-11-02T15:10:35
https://www.reddit.com/r/LocalLLaMA/comments/1omjqhy/has_anyone_been_able_to_run_llms_on_the_new_intel/
Triq1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1omjqhy
false
null
t3_1omjqhy
/r/LocalLLaMA/comments/1omjqhy/has_anyone_been_able_to_run_llms_on_the_new_intel/
false
false
self
9
null
Can I run open source local LLM trained on specific dataset ?
3
Hi there! I'm quite new to local LLMs, so maybe this question will look dumb to you. I don't like where ChatGPT is going, because it's trained on the whole internet and it's less and less precise. When I'm looking for very particular information in programming, culture, or anything else, it's not accurate, or it doesn't use good sources. I'm also not really a fan of the privacy terms of OpenAI and other online models. So my question is: could I run an LLM locally (yes), and have it use a very specific dataset of trusted sources, like Wikipedia, books, specific health and science websites, programming websites, etc.? And if yes, are there any excellent datasets available? Because I don't really want to add millions of websites and sources one by one. Thanks in advance for your time and have a nice day :D
2025-11-02T15:00:20
https://www.reddit.com/r/LocalLLaMA/comments/1omjgzg/can_i_run_open_source_local_llm_trained_on/
hugo_mdn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1omjgzg
false
null
t3_1omjgzg
/r/LocalLLaMA/comments/1omjgzg/can_i_run_open_source_local_llm_trained_on/
false
false
self
3
null
[Question] Best open-source coder LLM (local) that can plan & build a repo from scratch?
0
Hey all — I’m looking for recommendations for an **open-source, fully local** coder LLM that can **plan, scaffold, and iteratively build a brand-new repository from scratch** (not just single-file edits). # What “build from scratch” means to me * Propose an initial architecture (folders/modules), then create the files * Implement a working MVP (e.g., API + basic frontend or CLI) and iterate * Add tests, a basic CI workflow, and a `README` with run instructions * Produce **small, targeted diffs** for revisions (or explain file-by-file changes) * Handle multi-step tasks without losing context across many files # Nice-to-haves * Long context support (so it can reason over many files) * Solid TypeScript/Python skills (but language-agnostic is fine) * Works well with agent tooling (e.g., editor integrations), but I’m fine running via CLI/server if that’s better * Support for common quant formats (GGUF/AWQ/GPTQ) and mainstream runtimes (vLLM, TGI, llama.cpp/Ollama, ExLlamaV2) # Hard requirements * **Open-source license** (no cloud reliance) * Runs **locally** on my box (see specs below) * Good at **planning+execution**, not just autocompleting single files # My PC specs (high level) * **CPU**: AMD * **GPU**: Gigabyte (NVIDIA) * **Motherboard**: ASUS * **Storage**: Samsung * **Power Supply**: MSI * **Case**: Fractal Design * **Memory**: Kingston * **CPU Cooler**: Thermaltake * **Accessory**: SanDisk * **Service**: Micro Center > # What I’m hoping you can share * Model + **quant** you recommend (e.g., “Qwen-coder X-B AWQ 4-bit” or “DeepSeek-Coder-V2 16-bit on vLLM”) * **Runtime** you use (Ollama / llama.cpp / vLLM / TGI / ExLlamaV2) + any key flags * Typical **context window** and what project size it comfortably handles * Any **prompt patterns** or workflows that helped you get full repo scaffolding working (bonus: examples or repos) Want a **local, open-source coder LLM** that can **plan + scaffold + implement a repo from zero** with solid multi-file reasoning. Please share your model/quant/runtime combos and tips. Thanks! 🙏
2025-11-02T14:09:51
https://www.reddit.com/r/LocalLLaMA/comments/1omi7v3/question_best_opensource_coder_llm_local_that_can/
Admirable-Crow-1480
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1omi7v3
false
null
t3_1omi7v3
/r/LocalLLaMA/comments/1omi7v3/question_best_opensource_coder_llm_local_that_can/
false
false
self
0
null
Looking for open-source models that can make music from lyrics (to run locally)
3
Hey everyone, My daughter writes great lyrics, and I thought it’d be fun to turn them into songs with some AI help. The issue is that most of the online tools and services have pretty bad Terms of Service ( they say you keep ownership, but they also reserve full rights to exploit or reuse your work however they want. That’s a hard no for me) So I’m looking for open-source, local models that can generate music (instrumental or full songs) based on lyrics. Ideally something I could fine-tune or condition with examples from some of the bands she loves. I’m aware of things like MusicLM and Suno, but they’re all closed or have restrictive licenses. I’m guessing there are some pretrained open models out there, but I haven’t been able to find anything solid yet. Has anyone here experimented with open models for lyric-to-music generation or text-to-audio in general? Any pointers, repos, or setups I should look into? Thanks in advance!
2025-11-02T14:08:01
https://www.reddit.com/r/LocalLLaMA/comments/1omi6cy/looking_for_opensource_models_that_can_make_music/
nospotfer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1omi6cy
false
null
t3_1omi6cy
/r/LocalLLaMA/comments/1omi6cy/looking_for_opensource_models_that_can_make_music/
false
false
self
3
null
Can the RX 6700 XT run ROCm for vLLM, or should I use llama.cpp or DirectML?
3
I have a problem installing ROCm on Ubuntu. Is that because ROCm doesn't support the RX 6700 XT?
2025-11-02T13:47:25
https://www.reddit.com/r/LocalLLaMA/comments/1omhp2t/can_rx_6700xt_runs_rocm_for_vllm_or_should_i_use/
Afraid-Tomato-8612
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1omhp2t
false
null
t3_1omhp2t
/r/LocalLLaMA/comments/1omhp2t/can_rx_6700xt_runs_rocm_for_vllm_or_should_i_use/
false
false
self
3
null
glm 4.6 air is releasing tomorrow
9
[deleted]
2025-11-02T13:43:36
[deleted]
1970-01-01T00:00:00
0
{}
1omhlxo
false
null
t3_1omhlxo
/r/LocalLLaMA/comments/1omhlxo/glm_46_air_is_releasing_tomorrow/
false
false
default
9
null
Looking for models I can run on 16gbs of ram.
14
I'm aware RAM is slow, but I'd like to try out some models on my laptop. What are the best general-purpose and coding models out there that will fit in 16GB of RAM and run on CPU (or an MX350 from NVIDIA)?
2025-11-02T13:40:26
https://www.reddit.com/r/LocalLLaMA/comments/1omhjf2/looking_for_models_i_can_run_on_16gbs_of_ram/
Think_Question_6677
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1omhjf2
false
null
t3_1omhjf2
/r/LocalLLaMA/comments/1omhjf2/looking_for_models_i_can_run_on_16gbs_of_ram/
false
false
self
14
null
Unhinged Uncensored Model Evolution: Feedback on Satyr V0.1 to Shape Future Releases!
136
Hi, I’m the creator of the unhinged and uncensored Satyr model (soon to be a model series). A couple of days ago, I noticed a Reddit post about a new uncensored model release called Apollo V0.1 by u/AllThingsIntel. I tested it and found it to be as uncensored as my model, but more capable and versatile as a general assistant (without any extreme biases or a tendency to turn every single prompt NSFW). That’s the direction I want future Satyr releases to take, but I noticed far fewer interactions with their posts and far fewer downloads than my model has, which is a bit confusing to say the least. People who have tested and used both models, please leave feedback on what you liked in each of the two, so I can understand the preferred direction for the Satyr model series.
2025-11-02T13:39:19
https://www.reddit.com/r/LocalLLaMA/comments/1ol2oxw/unbound_incharacter_reasoning_model
ThePantheonUnbound
reddit.com
1970-01-01T00:00:00
0
{}
1omhijb
false
null
t3_1omhijb
/r/LocalLLaMA/comments/1omhijb/unhinged_uncensored_model_evolution_feedback_on/
false
false
nsfw
136
{'enabled': False, 'images': [{'id': 'jpD046tOQv3KrkaMiXbtM_I_nzl707XEWwi2cm-XvZI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/jpD046tOQv3KrkaMiXbtM_I_nzl707XEWwi2cm-XvZI.png?width=108&crop=smart&auto=webp&s=5424b2f856c1c331a54d939ebb886276b0ec44c0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/jpD046tOQv3KrkaMiXbtM_I_nzl707XEWwi2cm-XvZI.png?width=216&crop=smart&auto=webp&s=1aca055034d7925757284aefc5640f9eca27a507', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/jpD046tOQv3KrkaMiXbtM_I_nzl707XEWwi2cm-XvZI.png?width=320&crop=smart&auto=webp&s=f6f2d5121615b7633a07759127566e1148fa06a7', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/jpD046tOQv3KrkaMiXbtM_I_nzl707XEWwi2cm-XvZI.png?width=640&crop=smart&auto=webp&s=fce310aead085d17428c8029a1d24ee486e39737', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/jpD046tOQv3KrkaMiXbtM_I_nzl707XEWwi2cm-XvZI.png?width=960&crop=smart&auto=webp&s=d86b504eeb561d40aa0b42feb6dc872a1a56c7c6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/jpD046tOQv3KrkaMiXbtM_I_nzl707XEWwi2cm-XvZI.png?width=1080&crop=smart&auto=webp&s=c05e1850d38cf824803130f5ff36ad004e1bf0be', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/jpD046tOQv3KrkaMiXbtM_I_nzl707XEWwi2cm-XvZI.png?auto=webp&s=31a7f57a5f32ae79a1a6f0bedddbbacf2854b816', 'width': 1200}, 'variants': {'nsfw': {'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/jpD046tOQv3KrkaMiXbtM_I_nzl707XEWwi2cm-XvZI.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=ff7e05f0428751e368299b8e790d9912ab26e38d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/jpD046tOQv3KrkaMiXbtM_I_nzl707XEWwi2cm-XvZI.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=fa3a0c55637eb41411827a441c2fb162f3ecd81d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/jpD046tOQv3KrkaMiXbtM_I_nzl707XEWwi2cm-XvZI.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=b6b1b52d5831d36f56a291503a53a8c396e8a162', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/jpD046tOQv3KrkaMiXbtM_I_nzl707XEWwi2cm-XvZI.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=56a8a97c5594a861ed2174d2495a0484180fba32', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/jpD046tOQv3KrkaMiXbtM_I_nzl707XEWwi2cm-XvZI.png?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=cadb32f56f824560cab247a1aea5174e2092ac9e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/jpD046tOQv3KrkaMiXbtM_I_nzl707XEWwi2cm-XvZI.png?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=9cd5b35338a7f5bdf2489cb036b6994c6af0b91e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/jpD046tOQv3KrkaMiXbtM_I_nzl707XEWwi2cm-XvZI.png?blur=40&format=pjpg&auto=webp&s=5aeb4dba3ab1dbb0ca74c13707d00604c3ff81cd', 'width': 1200}}, 'obfuscated': {'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/jpD046tOQv3KrkaMiXbtM_I_nzl707XEWwi2cm-XvZI.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=ff7e05f0428751e368299b8e790d9912ab26e38d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/jpD046tOQv3KrkaMiXbtM_I_nzl707XEWwi2cm-XvZI.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=fa3a0c55637eb41411827a441c2fb162f3ecd81d', 'width': 216}, {'height': 172, 'url': 
'https://external-preview.redd.it/jpD046tOQv3KrkaMiXbtM_I_nzl707XEWwi2cm-XvZI.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=b6b1b52d5831d36f56a291503a53a8c396e8a162', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/jpD046tOQv3KrkaMiXbtM_I_nzl707XEWwi2cm-XvZI.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=56a8a97c5594a861ed2174d2495a0484180fba32', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/jpD046tOQv3KrkaMiXbtM_I_nzl707XEWwi2cm-XvZI.png?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=cadb32f56f824560cab247a1aea5174e2092ac9e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/jpD046tOQv3KrkaMiXbtM_I_nzl707XEWwi2cm-XvZI.png?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=9cd5b35338a7f5bdf2489cb036b6994c6af0b91e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/jpD046tOQv3KrkaMiXbtM_I_nzl707XEWwi2cm-XvZI.png?blur=40&format=pjpg&auto=webp&s=5aeb4dba3ab1dbb0ca74c13707d00604c3ff81cd', 'width': 1200}}}}]}
Qwen 3 max thinking released.
274
Try it [https://chat.qwen.ai/](https://chat.qwen.ai/)
2025-11-02T13:31:06
https://www.reddit.com/r/LocalLLaMA/comments/1omhby8/qwen_3_max_thinking_released/
JeffreySons_90
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1omhby8
false
null
t3_1omhby8
/r/LocalLLaMA/comments/1omhby8/qwen_3_max_thinking_released/
false
false
self
274
null