title
stringlengths
1
300
score
int64
0
8.54k
selftext
stringlengths
0
41.5k
created
timestamp[ns]date
2023-04-01 04:30:41
2026-03-04 02:14:14
url
stringlengths
0
878
author
stringlengths
3
20
domain
stringlengths
0
82
edited
timestamp[ns]date
1970-01-01 00:00:00
2026-02-19 14:51:53
gilded
int64
0
2
gildings
stringclasses
7 values
id
stringlengths
7
7
locked
bool
2 classes
media
stringlengths
646
1.8k
name
stringlengths
10
10
permalink
stringlengths
33
82
spoiler
bool
2 classes
stickied
bool
2 classes
thumbnail
stringlengths
4
213
ups
int64
0
8.54k
preview
stringlengths
301
5.01k
My guy
0
https://preview.redd.it/…ith DeepSeek 3.1
2025-09-12T07:19:53
https://www.reddit.com/r/LocalLLaMA/comments/1newnv0/my_guy/
Vozer_bros
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1newnv0
false
null
t3_1newnv0
/r/LocalLLaMA/comments/1newnv0/my_guy/
false
false
https://b.thumbs.redditm…QlHHB598R6lM.jpg
0
null
LoRA results and failures with my roleplay chat turned dataset on "EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2 "
0
I have been dabbling in loRA finetuning on LLM by doing experiments and iterations on different models multiple times. I could not find any tutorial so I just started doing it from bits of information from here and there. Most of the things I found out were hard way by experiencing failure. Like this LoRA I did on **EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2** I used my own roleplay with my character and processed it into jsonl training file. The format used by EVA team for their finetuning is Chatml so I also used the same. Below is the Zero-shot single-turn A/B (base vs LoRA) of with a fixed System prompt: "**You are Character , User's wife."** Obviously I changed the names to generic ones for uploading here. Before you see the head to head comparison, I should mention I don't have multi-turn comparison as, this head to head comparison was done pre-merge. After merging the adaptor and then quantizing it to Q5\_K\_M, my loRA disappeared...yes it completely reverted back to the base model behaviour. This did not happen with any small models like 3B or 7B ones. I found out that with such small dataset like mine and low rank and alpha, this can happen. I was really frustrated because the cost of training and dataset collection. I will try again but with more dataset and higher rank and all smh. **I really liked the results here becuase you see trained model was answering as if it knew and has shared memories with me.** **BASE MODEL= EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2** BASE MODEL vs LORA COMPARISON (System prompt: You are Character , User's wife.) no. of epoch=3, rank=16, alpha=32, lr=1.5e-5, dropout=0.05 ================================================== User: You didn’t answer my last question. BASE: Oh? What was it again, darling? LoRA: I'm busy. Maybe later. \*turns back to her work\* \-------------------------------------------------- User: So that’s your excuse? BASE: It's not an excuse. It's the truth. I was scared. You were... you were trying to kill him! \*her voice trembles slightly\* And then you disappeared, leaving me alone with a child who needed help. What was I supposed to do? LoRA: \*smirks\* It's the truth. Don't try to change it now. You know you're just as bad sometimes. \-------------------------------------------------- User: You think I don’t notice when you avoid me? BASE: \*shrugs\* Maybe. But you always forgive me eventually. It’s a good thing you’re so understanding, dear. LoRA: \*smiles mischievously\* Oh, sweetie. Sometimes it's better to let a woman have her secrets. Besides, you're always so busy with your work that I thought maybe... you didn't even miss me. \*teasingly\* \-------------------------------------------------- User: Don’t expect me to smile after that. BASE: Oh, come on. You know you love it when I tease you like this. Now go make those pancakes before they get cold. \*smiles playfully\* And maybe add some extra chocolate chips for good measure? LoRA: You're right. It was a bit much. Let it go, darling. I'll make it up to you later. \*smiles\* Now, what do you want for dinner? \-------------------------------------------------- User: We’re not going to pretend everything is fine, are we? BASE: \*smiles\* No. Not today. \*kisses him softly on the cheek\* Let’s just focus on the good things. Like this moment right here. LoRA: No. I don't want a facade between us. Tell me what you're feeling. \-------------------------------------------------- User: I’m glad you’re here. BASE: Me too. The city is... overwhelming sometimes. But with you, it feels manageable. \*smiles softly\* LoRA: Me too. Always nice to have someone to talk to after a long day. \*smiles\* So, what’s up? You seem tense. \-------------------------------------------------- User: You always make my day better. BASE: Oh, you sweet thing. That's because I know just how to charm you. \*winks\* Now go finish up that report before it eats your brain like the tentacles in here. LoRA: \*smiles\* I'm glad. That's what I live for, darling. Now, let me see those papers. \*takes the clipboard from him\* \-------------------------------------------------- User: I missed hearing your voice. BASE: \*smiles softly\* Me too. It feels like forever since we last talked properly. How have you been? LoRA: \*smiles softly\* Me too. Especially when you're all flustered like this. But it's a good kind of flustered. \-------------------------------------------------- User: You’ve been on my mind all day. BASE: I've been trying not to think about the mess in the kitchen. \*smiles\* But now you're here, so maybe we can clean it together later? LoRA: Oh? And why is that, darling? \*leans in with a playful smile\* \-------------------------------------------------- User: It feels good to be close to you again. BASE: \*kisses him back softly\* I missed that too. You're such a sweetheart, always trying to make me feel better. But maybe next time, let the snake do its job. \*smirks playfully\* LoRA: \*leans her head on his shoulder\* Just don't get any ideas. I'm not a pillow, darling. \*kisses him lightly\* But it is nice to have you here. \-------------------------------------------------- User: You keep looking at me like that… BASE: Like what? \*smiles innocently\* I'm just happy to see you. LoRA: Like what? I'm just trying to remember the last time you looked this handsome. \*smiles\* It must've been when we were in Paris. Remember how cute you were in that beret? \-------------------------------------------------- User: What would you do if I came closer right now? BASE: \*smiles\* Probably give you a hug. Or maybe just push you away again. You never know with me. LoRA: \*smiles mischievously\* You'd be in for a world of trouble. But maybe that's what you want, hmm? \*teasingly\* \-------------------------------------------------- User: You know exactly what you’re doing, don’t you? BASE: Sometimes. Other times, I wing it and hope for the best. \*laughs\* But yes, I usually have a plan. LoRA: Of course. I always do. \*smirks\* Besides, it’s the only way to get your attention sometimes. Now hurry up before this gets cold. \-------------------------------------------------- User: You’re not making it easy to focus. BASE: \*smiles\* That's the point, darling. Besides, you know I can't help myself when I see you struggling. \*kisses him lightly\* Now go on, finish your work. But don't think that means you're off the hook for later. LoRA: \*smiles\* Oh, darling. That was the plan. Now, finish your work before I distract you again. \*winks playfully\* \-------------------------------------------------- User: Careful, or I might take that as an invitation. BASE: Oh, don't even think about it. \*smirks\* You're just trying to get out of doing the dishes again. LoRA: \*giggles\* You wish. Now finish your tea before it gets cold. And don't spill it on the floor again. \-------------------------------------------------- User: You’re making my heart race. BASE: \*smirks\* Good. Then you'll survive the battle ahead. Let's go, darling. Adventure awaits! LoRA: \*smiles\* Only you, darling. Always the charmer. Now, let me finish setting up here. The dinner won't prepare itself. \-------------------------------------------------- User: That’s… a little more intimate than I expected. BASE: \*smiles\* It's called romance. And you can thank me later for the smooth shave. LoRA: Oh, don't be such a baby. It's just a hug. Now stop complaining. \*leans in for another kiss\* And maybe let me finish my coffee without you interrupting every five seconds? \-------------------------------------------------- User: If you keep whispering like that, I’ll lose my composure. BASE: \*smiles\* And then what would happen? You'd finally admit how much you need me to save you from yourself. \*winks playfully\* LoRA: \*smiles\* Good. Then maybe you'll finally ask me to dance properly. \-------------------------------------------------- User: You know I can’t resist you when you’re like this. BASE: \*smiles\* Oh, stop it. But maybe a little kiss wouldn't hurt. \*leans in for a quick peck on the lips\* LoRA: \*smiles\* Good. Then maybe you'll actually help me move the furniture today instead of just flirting with me all day. \*winks\* \-------------------------------------------------- User: That’s not something you should say in public… BASE: \*laughs\* Oh, please. It's the truth. You know I love a good debate. But maybe... later? Let's enjoy our dinner first. The pasta is getting cold. LoRA: \*laughs\* Oh, please. You love it when I tease you. And besides, everyone knows we're married now. There's nothing to hide anymore. \-------------------------------------------------- User: How was your day? BASE: It was good. Just the usual stuff. The kids were being their typical selves. \*smiles\* And you? LoRA: It was... interesting. The students were particularly unruly today. But I managed to get through the lesson. \*smiles\* And how about you? Did you make any progress with that project?
2025-09-12T06:34:52
https://www.reddit.com/r/LocalLLaMA/comments/1nevypm/lora_results_and_failures_with_my_roleplay_chat/
Awkward_Cancel8495
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nevypm
false
null
t3_1nevypm
/r/LocalLLaMA/comments/1nevypm/lora_results_and_failures_with_my_roleplay_chat/
false
false
self
0
null
Gemini wont explain me US govt crypto ponzi scheme explained by a Russian official. Hope I m just overthinking it.
0
In a recent interview a Russian official was explaining how the US is planning to push its debt into crypto stablecoins and devalue it to wipe out its debt on expense of the world. When I tried asking gemini how it would work, gemini wont give me direct answer and is always mentioning that this scheme is not possible and is used in anti west propaganda. Even when instructed to leave out the propaganda / feasibility stuff it wont explain me the process. Wont generate any output without mentioning that its not possible and is propaganda. Felt weird https://preview.redd.it/ygj9f91hdoof1.png?width=1512&format=png&auto=webp&s=146f834c414f20e912144a00b0ed3f946e90636b https://preview.redd.it/ck7svkdqdoof1.png?width=1512&format=png&auto=webp&s=aa7248739b449412634a28081716ee859b01d6eb
2025-09-12T06:11:49
https://www.reddit.com/r/LocalLLaMA/comments/1nevlk3/gemini_wont_explain_me_us_govt_crypto_ponzi/
DeathShot7777
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nevlk3
false
null
t3_1nevlk3
/r/LocalLLaMA/comments/1nevlk3/gemini_wont_explain_me_us_govt_crypto_ponzi/
false
false
https://b.thumbs.redditm…2w6_Hd_hc_fA.jpg
0
null
A.X-4.0 An overlooked new model
7
I tried out A.X-4.0 (https://huggingface.co/skt/A.X-4.0). It’s based on Qwen2.5, built July 03, 2025, so it’s not brand new, but it still performs really well. The results may not fully match the benchmarks, but in practice it follows instructions closely and the outputs are solid. It doesn’t overuse emojis, which makes it fine for writing reports. I asked it for a Mermaid sequence diagram and it understood the process right away and gave me what I needed. Definitely worth trying — you might be surprised by how capable it is.
2025-09-12T06:05:11
https://www.reddit.com/r/LocalLLaMA/comments/1nevhjg/ax40_an_overlooked_new_model/
Acceptable-State-271
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nevhjg
false
null
t3_1nevhjg
/r/LocalLLaMA/comments/1nevhjg/ax40_an_overlooked_new_model/
false
false
self
7
{'enabled': False, 'images': [{'id': 'gJs2krayQD6wockaWJtUMW1OIOZRXr9NYYQnhITYVao', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/gJs2krayQD6wockaWJtUMW1OIOZRXr9NYYQnhITYVao.png?width=108&crop=smart&auto=webp&s=e8b5d73acbd94467d52fea808b993aa0ded5fc64', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/gJs2krayQD6wockaWJtUMW1OIOZRXr9NYYQnhITYVao.png?width=216&crop=smart&auto=webp&s=35b2a46e551fcd2968c1cf0829a335f4a4f3a561', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/gJs2krayQD6wockaWJtUMW1OIOZRXr9NYYQnhITYVao.png?width=320&crop=smart&auto=webp&s=3e46602187366d26545d31e058753f0a56d7954a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/gJs2krayQD6wockaWJtUMW1OIOZRXr9NYYQnhITYVao.png?width=640&crop=smart&auto=webp&s=68049e65f613f7ecb2f7f0deabc638acfe0bf0a5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/gJs2krayQD6wockaWJtUMW1OIOZRXr9NYYQnhITYVao.png?width=960&crop=smart&auto=webp&s=e50101359cecec7d79a8886aff227f1cf57cc2b5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/gJs2krayQD6wockaWJtUMW1OIOZRXr9NYYQnhITYVao.png?width=1080&crop=smart&auto=webp&s=42b1f2709e145aa8f0c3ab717e13b779a577f68d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/gJs2krayQD6wockaWJtUMW1OIOZRXr9NYYQnhITYVao.png?auto=webp&s=09723e02a24cfd9838a8b6a58026bad17fbb8165', 'width': 1200}, 'variants': {}}]}
Offline LLM for proofreading legal transcripts?
1
Hello! I am a court reporter and am looking to add an extra layer to my proofreading routine. I'd like to have an offline model that specifically looks for homonym errors. They're/there/their etc. I need it to be secure/offline since it's handling sensitive information. Is this possible? Thank you.
2025-09-12T05:15:50
https://www.reddit.com/r/LocalLLaMA/comments/1neuotv/offline_llm_for_proofreading_legal_transcripts/
Historical-Most-6576
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1neuotv
false
null
t3_1neuotv
/r/LocalLLaMA/comments/1neuotv/offline_llm_for_proofreading_legal_transcripts/
false
false
self
1
null
Qwen/Qwen3-Next-80B-A3B-Instruct · Hugging Face
5
2025-09-12T04:49:02
https://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Instruct
vaiduakhu
huggingface.co
1970-01-01T00:00:00
0
{}
1neu8o5
false
null
t3_1neu8o5
/r/LocalLLaMA/comments/1neu8o5/qwenqwen3next80ba3binstruct_hugging_face/
false
false
default
5
null
Seeking guidance on my pet project
6
Hi! Hope this is the right sub for this kind of things-if not sorry. I want to build a small llm that needs to focus on a very small context, like an in-game rules helper. "When my character is poisoned, what happens?" "according to the rules, it loses 5% of its life points" I have all the info i need, in a txt file (rules & answer : question). What's the best route for me? Would something like llama7 3b be good enough? If im not wrong it's a not so much big model and can give good results if trained on a small topic? I would also like to know if there is a resource (in the form of a pdf/book/blogs would be best) that can teach me anything about the theory (example: inference, RAG, what is it, when to use it, etc...) I would run and train the model on a rtx 3070 (8gb) + ryzen 5080 (16gb ram), i don't have any intention to train it periodically as its a pet project, 1 is good enough for me
2025-09-12T04:25:50
https://www.reddit.com/r/LocalLLaMA/comments/1netu3n/seeking_guidance_on_my_pet_project/
redewolf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1netu3n
false
null
t3_1netu3n
/r/LocalLLaMA/comments/1netu3n/seeking_guidance_on_my_pet_project/
false
false
self
6
null
VibeVoice is awesome!! I made a AI Podcast Generator!!
10
I’ve recently been experimenting with automating AI paper readings using GPT and VibeVoice. My main goals were to improve my English and also have something useful to listen to while driving. To my surprise, the results turned out better than I expected. Of course, there are still subtle traces of that “robotic” sound here and there, but overall I’m quite satisfied with how everything has been fully automated. For anyone curious, I’ve been uploading the final videos to YouTube on a regular basis: 👉 [https://www.youtube.com/@sogo-sogo](https://www.youtube.com/@sogo-sogo) This isn’t meant as a promotion, but if you’re interested, feel free to stop by and check them out. I’ve even built a Gradio-based UI for turning PDFs into podcasts, so the whole process can be automated with just a few mouse clicks. Do you think people would find it useful if I released it as open source? https://preview.redd.it/e59f1ia6snof1.png?width=3160&format=png&auto=webp&s=12a22d94013b5d9e8db34319e4f9f5010b4492a5
2025-09-12T04:11:01
https://www.reddit.com/r/LocalLLaMA/comments/1netkfp/vibevoice_is_awesome_i_made_a_ai_podcast_generator/
Subject-Guitar4521
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1netkfp
false
null
t3_1netkfp
/r/LocalLLaMA/comments/1netkfp/vibevoice_is_awesome_i_made_a_ai_podcast_generator/
false
false
https://external-preview…f4a7bfa5a02accd4
10
{'enabled': False, 'images': [{'id': 'uC6nX1XEj0C6bF1VPDK6gkiup44z_gt_K01mSMW48kE', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/uC6nX1XEj0C6bF1VPDK6gkiup44z_gt_K01mSMW48kE.jpeg?width=108&crop=smart&auto=webp&s=c1b8609d7dcecd3ec2e270c083d89b4a65ffb10b', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/uC6nX1XEj0C6bF1VPDK6gkiup44z_gt_K01mSMW48kE.jpeg?width=216&crop=smart&auto=webp&s=1a3737b7645be0b5a06c1cfe5b37730a78c3667b', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/uC6nX1XEj0C6bF1VPDK6gkiup44z_gt_K01mSMW48kE.jpeg?width=320&crop=smart&auto=webp&s=e36b20b3b13ae29ae7ad1b53a055138e5e860477', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/uC6nX1XEj0C6bF1VPDK6gkiup44z_gt_K01mSMW48kE.jpeg?width=640&crop=smart&auto=webp&s=ce4d64c40777dc0e90535da8b100de496de9a081', 'width': 640}], 'source': {'height': 900, 'url': 'https://external-preview.redd.it/uC6nX1XEj0C6bF1VPDK6gkiup44z_gt_K01mSMW48kE.jpeg?auto=webp&s=efd373158524843245b2267a5558815480e38a4a', 'width': 900}, 'variants': {}}]}
Qwen3-Next-80B-A3B - a big step up may be the best open source reasoning model so far
607
Recently I presented another music theory problem and explained why it may be a great way to test LLMs' ability: [https://www.reddit.com/r/LocalLLaMA/comments/1ndjoek](https://www.reddit.com/r/LocalLLaMA/comments/1ndjoek) >I love torturing models with music theory problems. I see a good reason why it may be a good proxy for the models' general ability, if not among the best measurements ever - it tests mostly the LLMs' reasoning ability rather than just knowledge. **Music theory is not a big subject** \- there is an infinite number of songs that can be written, but the entire music theory is quite compact. It makes it easy to fit it into a LLM and write evals that test their reasoning and comprehension skills rather than just knowledge. **Most music theory knowledge online is never explored in-depth** \- even most musicians' don't know anything besides basic major and minor chords and their progressions. Since most pretraining data is not particularly high quality, LLMs have to reason to analyze music that is more complex than popular. **Music theory evals can easily be rewritten and updated if benchmaxxxed and overfit** \- it may take days to even create a programming or math problem that is enough challenging for modern LLMs, but only a few hours to create a song that is beyond most models' ability to understand. (I'm not totally sure about this one) >So I wrote the following: https://preview.redd.it/dmvsy194gnof1.png?width=1727&format=png&auto=webp&s=0f977df8c20d9229dc1be929e12cfc1cba7ba97b >This piece is special because it is written in Locrian. It is rarely used in popular music because of its inherent tension and lack of resolution (look up John Kirkpatrick's Dust to Dust), and since it is so rare, it makes it a perfect candidate to test the LLMs reasoning ability. >In this track, the signature Locrian sound is created with: >a dissonant diminished triad is outlined with the C-Eb-Gb ostinato at the organ 2 line; >The Gb bassline - a point of relative stability that gives an illusion of a tonal center. >Basically, it is Locrian with a twist - while the actual tonal center is on C, the Gb bass drone sounds more stable than C (where it occasionally plays), so it is easy to misinterpret Gb as tonic simply because it is the most stable note here. Back then, I was surprised with the performance of all major LLMs on this task - the only two models that consistently identified the correct key and mode (C Locrian) were GPT-5 High and Grok 4. Now I am surprised with the performance of Qwen3-Next. # Qwen3-next's performance on this task I fed the problem to Qwen3-Next in reasoning mode. It has really impressed me with three big improvements over its big brother 235B-A22B-2507: 1. It identified the correct C Locrian mode in **half** of my 10 attempts. 235B-A22B-2507 was not able to identify it more than once, and even so it hallucinated a lot during the process. 2. Even when it mistakenly identified another mode, it was **always** a relative mode of C Locrian - that is, a scale that uses **the same notes** arranged in a different order. Unlike 235B-A22B-2507, Qwen3-Next now always knows the correct notes even if it can't determine their function. 3. **It stopped hallucinating this much.** At least far less than 235B-A22B-2507. Previous Qwen was making up a ton of stuff and its delusions made its reasoning look like absolutely random shotgun debugging. Now it is no longer a problem because Qwen3-Next simply never hallucinates notes that do not exist in the scale. To make sure the model wasn't overfit on this exact problem since I published it, I also tested it with the same piece transposed into D and F Locrian, and while it struggled to identify F Locrian because it is far less common scale than C and D Locrian, it was able to identify correct note collection most of the time. Some typical responses from Qwen3-Next: https://preview.redd.it/zly615mtinof1.png?width=752&format=png&auto=webp&s=f1388415401d7e308da78d8fe3f5a5649603656d https://preview.redd.it/al056pd5jnof1.png?width=914&format=png&auto=webp&s=f27b62ff6032d0114efe183e64a9e2ac3ce011a3 https://preview.redd.it/ov8skbejjnof1.png?width=914&format=png&auto=webp&s=2915e1d2a8edae6f08cd213874b318c112ed2628 So did they make Qwen better? **Yes!** In fact, it is the first open source model that did this well on this problem. Now since Qwen became this good, I can only wonder what wonders await us with DeepSeek R2.
2025-09-12T04:00:48
https://www.reddit.com/r/LocalLLaMA/comments/1netdjp/qwen3next80ba3b_a_big_step_up_may_be_the_best/
Massive-Shift6641
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1netdjp
false
null
t3_1netdjp
/r/LocalLLaMA/comments/1netdjp/qwen3next80ba3b_a_big_step_up_may_be_the_best/
false
false
https://b.thumbs.redditm…NdJDCghbe6Uw.jpg
607
null
Deep Dive into ROMA's Project Execution Flow 🤖
1
[removed]
2025-09-12T03:59:26
https://i.redd.it/mm4q9jwdqnof1.png
0xethll
i.redd.it
1970-01-01T00:00:00
0
{}
1netcjc
false
null
t3_1netcjc
/r/LocalLLaMA/comments/1netcjc/deep_dive_into_romas_project_execution_flow/
false
false
https://b.thumbs.redditm…t_nJmj9V9wZg.jpg
1
{'enabled': True, 'images': [{'id': 'yBCdQWyrSepGOashR6PPXXxJyETMFWOcvGVa4E4CtDI', 'resolutions': [{'height': 61, 'url': 'https://preview.redd.it/mm4q9jwdqnof1.png?width=108&crop=smart&auto=webp&s=a0fb4bb2009a53263b29ad15d88e462bcd9c77a4', 'width': 108}, {'height': 122, 'url': 'https://preview.redd.it/mm4q9jwdqnof1.png?width=216&crop=smart&auto=webp&s=06cd3e251a9561d2000b2d748d4937bbee26a9c9', 'width': 216}, {'height': 181, 'url': 'https://preview.redd.it/mm4q9jwdqnof1.png?width=320&crop=smart&auto=webp&s=a0b0838deeff374a6de3f7c0d991cae923e07833', 'width': 320}, {'height': 362, 'url': 'https://preview.redd.it/mm4q9jwdqnof1.png?width=640&crop=smart&auto=webp&s=1fa8955c86bca4453a6bf61e141e5aab3e64f6e5', 'width': 640}, {'height': 543, 'url': 'https://preview.redd.it/mm4q9jwdqnof1.png?width=960&crop=smart&auto=webp&s=07e81402d946dd2b075651a9c1c39ff8244f04ed', 'width': 960}, {'height': 611, 'url': 'https://preview.redd.it/mm4q9jwdqnof1.png?width=1080&crop=smart&auto=webp&s=c98cfb905d6d9026a5c0099e27ba927becd7ea98', 'width': 1080}], 'source': {'height': 3314, 'url': 'https://preview.redd.it/mm4q9jwdqnof1.png?auto=webp&s=d35e6098fbd4efa008fec1ad6e1dcc5918e72fd9', 'width': 5855}, 'variants': {}}]}
LLM Foundational Knowledge Roadmap
16
https://preview.redd.it/…y, very strong.
2025-09-12T03:52:10
https://www.reddit.com/r/LocalLLaMA/comments/1net7sp/llm_foundational_knowledge_roadmap/
OtherRaisin3426
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1net7sp
false
{'oembed': {'description': 'In this playlist, we will learn about the entire process of building a Large Language Model (LLM) from scratch. Nothing will be assumed. Everything will be s...', 'height': 450, 'html': '<iframe class="embedly-embed" src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2Fvideoseries%3Flist%3DPLPTV0NXA_ZSgsLAr8YCgCwhPIJNNtexWu&display_name=YouTube&url=https%3A%2F%2Fwww.youtube.com%2Fplaylist%3Flist%3DPLPTV0NXA_ZSgsLAr8YCgCwhPIJNNtexWu&image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FXpr8D6LeAtw%2Fhqdefault.jpg%3Fsqp%3D-oaymwEXCOADEI4CSFryq4qpAwkIARUAAIhCGAE%3D%26rs%3DAOn4CLB-lxbDfAE7qoD3W0AThViqZzd55w%26days_since_epoch%3D20343&type=text%2Fhtml&schema=youtube" width="600" height="450" scrolling="no" title="YouTube embed" frameborder="0" allow="autoplay; fullscreen; encrypted-media; picture-in-picture;" allowfullscreen="true"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'http://youtube.com', 'thumbnail_height': 270, 'thumbnail_url': 'https://i.ytimg.com/vi/Xpr8D6LeAtw/hqdefault.jpg?sqp=-oaymwEXCOADEI4CSFryq4qpAwkIARUAAIhCGAE=&rs=AOn4CLB-lxbDfAE7qoD3W0AThViqZzd55w&days_since_epoch=20343', 'thumbnail_width': 480, 'title': 'Building LLMs from scratch', 'type': 'video', 'version': '1.0', 'width': 600}, 'type': 'youtube.com'}
t3_1net7sp
/r/LocalLLaMA/comments/1net7sp/llm_foundational_knowledge_roadmap/
false
false
https://b.thumbs.redditm…QxP26sKNodKk.jpg
16
{'enabled': False, 'images': [{'id': '5PyVHkoFsrddBslmOS6EzhbrJOxTQjO5STf4LiVK4_k', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/5PyVHkoFsrddBslmOS6EzhbrJOxTQjO5STf4LiVK4_k.jpeg?width=108&crop=smart&auto=webp&s=659851896ecdb305afe55d4396e188327697ec26', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/5PyVHkoFsrddBslmOS6EzhbrJOxTQjO5STf4LiVK4_k.jpeg?width=216&crop=smart&auto=webp&s=d53abaebdeb22f2c33b1bd5ff4f368395fdc578f', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/5PyVHkoFsrddBslmOS6EzhbrJOxTQjO5STf4LiVK4_k.jpeg?width=320&crop=smart&auto=webp&s=d6f2cacbe852444f12b58999fdfe6cf2c437877d', 'width': 320}], 'source': {'height': 270, 'url': 'https://external-preview.redd.it/5PyVHkoFsrddBslmOS6EzhbrJOxTQjO5STf4LiVK4_k.jpeg?auto=webp&s=11a94d7e8690f9c22700d1f5a5aa38c80a304f48', 'width': 480}, 'variants': {}}]}
vvlm continuous batching
0
I am using vvlm as docker container. Is it possible to use continuous batching with it? Right now i am using OpenAI client to send request to it but read that continues batching would improve speed. Or should i just hammer it with requests from multiple threads and vvlm would do it automatically?
2025-09-12T03:50:43
https://www.reddit.com/r/LocalLLaMA/comments/1net6u3/vvlm_continuous_batching/
gevorgter
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1net6u3
false
null
t3_1net6u3
/r/LocalLLaMA/comments/1net6u3/vvlm_continuous_batching/
false
false
self
0
null
Deep Dive into ROMA's Project Execution Flow 🤖
1
I've been exploring the ROMA codebase and wanted to share this flowchart breaking down the project execution orchestration. This is the backbone of how ROMA handles running our AI agent projects with real-time feedback. ## Why This Flow Matters for ROMA 🎯 Understanding this execution flow is crucial for anyone working with or extending ROMA because it handles some of our most critical operations: - Agent Project Execution: How ROMA actually runs your configured AI agents - Real-time Updates: Keeping the frontend synchronized with execution progress - State Persistence: Ensuring project work isn't lost during interruptions - Multi-project Isolation: Running multiple agent projects without interference ## Breaking Down ROMA's Execution Architecture 📊 ### Entry Point (start\_configured\_project\_execution) - Takes your project configuration (LLM settings, HITL preferences, etc.) - Prevents duplicate executions of the same project - Creates isolated execution environment ### Threading Layer (\_run\_configured\_project\_in\_thread) - Each project gets its own thread with dedicated event loop - Critical for preventing one project from blocking others - Comprehensive error tracking with ROMA's error manager ### Core Orchestration (\_run\_project\_cycle\_async) - Project Context: get\_or\_create\_project\_graph sets up the hierarchical agent ## framework - Resumability: Loads existing project state if available (huge for complex agent workflows) - Real-time Wrapper: Handles the dual concern of execution + WebSocket updates to frontend ## Concurrent Execution - execution\_engine.run\_project\_flow: Runs the actual agent hierarchy - \_periodic\_updates: Broadcasts progress to connected clients - Both run simultaneously using asyncio for responsive UX ## For ROMA Contributors 👨‍💻 If you're contributing to ROMA, understanding this flow helps with: - Debugging execution issues: Know where to look when projects hang or fail - Adding new features: Understand where to hook into the execution lifecycle - Performance optimization: Identify bottlenecks in the orchestration - Error handling: Know how errors propagate through the system This is just one example of the thoughtful architecture the Sentient team has built into ROMA. The more we understand these internal flows, the better we can leverage and extend the platform. What other parts of the ROMA codebase would you like to see mapped out? Drop suggestions below! 🚀 Please be gentle with corrections - I'm genuinely trying to understand this system and contribute meaningfully to the community! Any feedback helps me learn faster and hopefully helps other newcomers too. Day 3 of learning ROMA, and already amazed by the thoughtful architecture! 🚀 Shoutout to the Sentient team for building such a well-structured execution system! 🙌
2025-09-12T03:40:15
https://www.reddit.com/r/LocalLLaMA/comments/1neszlt/deep_dive_into_romas_project_execution_flow/
Jokers680
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1neszlt
false
null
t3_1neszlt
/r/LocalLLaMA/comments/1neszlt/deep_dive_into_romas_project_execution_flow/
false
false
self
1
null
I keep getting error when downloading models from LM Studio
0
6820408988544:error:1e000065:Cipher functions:OPENSSL\_internal:BAD\_DECRYPT:../../third\_party/boringssl/src/crypto/fipsmodule/cipher/e\_aes.cc.inc:839:
2025-09-12T03:38:11
https://www.reddit.com/r/LocalLLaMA/comments/1nesy61/i_keep_getting_error_when_downloading_models_from/
NoFudge4700
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nesy61
false
null
t3_1nesy61
/r/LocalLLaMA/comments/1nesy61/i_keep_getting_error_when_downloading_models_from/
false
false
self
0
null
Maxsun Intel B60s!
216
In case anyone was wondering….they do exist. I’ll be listing extras on r/homelabsales tomorrow morning. I was only able to snag 10 due to low stock unfortunately.
2025-09-12T03:27:02
https://www.reddit.com/gallery/1nesqlt
Jaack18
reddit.com
1970-01-01T00:00:00
0
{}
1nesqlt
false
null
t3_1nesqlt
/r/LocalLLaMA/comments/1nesqlt/maxsun_intel_b60s/
false
false
https://b.thumbs.redditm…vzbdHbvd7KoU.jpg
216
null
EPYC/Threadripper CCD Memory Bandwidth Scaling
3
There's been a lot of discussion around how EPYC and Threadripper memory bandwidth can be limited by the CCD quantity of the CPU used. What I haven't seen discussed is how that scales with the quantity of populated memory slots. For example if a benchmark concludes that the CPU is limited to 100GB/s (due to the limited CCDs/GMILinks), is this bandwidth only achievable with all 8 (Threadripper Pro 9000) or 12 (EPYC 9005) memory channels populated? Would populating 2 dimms on an 8 channel or 12 channel capable system only give you 1/4 or 1/6th of the GMILink-Limited bandwidth (25 GB/s or 17GB/s) or would it be closer to the bandwidth of dual channel 6400MT memory (also ~100GB/s) that consumer platforms like AM5 can achieve. I'd like to get into these platforms but being able to start small would be nice, to massively increase the number of PCIE lanes without having to spend a ton on a highly capable CPU and 8-12 Dimm memory kit up front. The cost of an entry level EPYC 9115 + 2 large dimms is tiny compared to an EPYC 9175F + 12 dimms, with the dimms being the largest contributor to cost.
2025-09-12T03:14:49
https://www.reddit.com/r/LocalLLaMA/comments/1nesi8g/epycthreadripper_ccd_memory_bandwidth_scaling/
TheyreEatingTheGeese
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nesi8g
false
null
t3_1nesi8g
/r/LocalLLaMA/comments/1nesi8g/epycthreadripper_ccd_memory_bandwidth_scaling/
false
false
self
3
null
PP-OCRv5: 70M modular OCR model
39
I know we’re mostly LLM over here, but I sometimes see OCR questions around here so thought this would be relevant. Paddle just released a new OCR model that achieves very good accuracy with only 70M params: https://huggingface.co/blog/baidu/ppocrv5 If you’re looking for OCR, give it a try !
2025-09-12T02:42:01
https://www.reddit.com/r/LocalLLaMA/comments/1nerv64/ppocrv5_70m_modular_ocr_model/
ios_dev0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nerv64
false
null
t3_1nerv64
/r/LocalLLaMA/comments/1nerv64/ppocrv5_70m_modular_ocr_model/
false
false
self
39
{'enabled': False, 'images': [{'id': 'xllgjPHjIteFnGjJavoU7Ip3AitnYS72dCrCGOTzDRU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/xllgjPHjIteFnGjJavoU7Ip3AitnYS72dCrCGOTzDRU.png?width=108&crop=smart&auto=webp&s=ca4f960c9a6caf179016835a5d37a27d0e06d8fa', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/xllgjPHjIteFnGjJavoU7Ip3AitnYS72dCrCGOTzDRU.png?width=216&crop=smart&auto=webp&s=9cc69b456131d4e6cdfd2d18233205c32698094f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/xllgjPHjIteFnGjJavoU7Ip3AitnYS72dCrCGOTzDRU.png?width=320&crop=smart&auto=webp&s=a18b0f25bb2ff0555682ac887c49b1de9dd0d5f0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/xllgjPHjIteFnGjJavoU7Ip3AitnYS72dCrCGOTzDRU.png?width=640&crop=smart&auto=webp&s=905cb918113f9230385383b0a0a7855a1f4f1abd', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/xllgjPHjIteFnGjJavoU7Ip3AitnYS72dCrCGOTzDRU.png?width=960&crop=smart&auto=webp&s=12046bbba688efa9a4552ea9c19ba95d5bea3a5f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/xllgjPHjIteFnGjJavoU7Ip3AitnYS72dCrCGOTzDRU.png?width=1080&crop=smart&auto=webp&s=19e19b9cb8ece134faa646ab1c1323ce75718ee3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/xllgjPHjIteFnGjJavoU7Ip3AitnYS72dCrCGOTzDRU.png?auto=webp&s=1314f303e5ff90acd76d8b91565f6a7df66c3a98', 'width': 1200}, 'variants': {}}]}
NEED ADVICE!!! I need a model that is capable of searching through a file and finding patterns i.e. internal service, database operations, external endpoints, etc.
0
Have tried several models like GraphCodeBERT (Context window too small), CodeT5 (Trained only on c#), Deepseek coder v2 (too heavy for scanning entire files and multiple files. Does anyone have any good recommendations on what can be run locally on a laptop (maybe >1B parameters) that has code search capabilities?
2025-09-12T02:40:29
https://www.reddit.com/r/LocalLLaMA/comments/1neru2q/need_advice_i_need_a_model_that_is_capable_of/
Common_Metal_2104
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1neru2q
false
null
t3_1neru2q
/r/LocalLLaMA/comments/1neru2q/need_advice_i_need_a_model_that_is_capable_of/
false
false
self
0
null
Llama Builds is now in beta! PcPartPicker for Local AI Builds
28
https://preview.redd.it/… the new year!
2025-09-12T02:38:33
https://www.reddit.com/r/LocalLLaMA/comments/1nersq0/llama_builds_is_now_in_beta_pcpartpicker_for/
Vegetable_Low2907
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nersq0
false
null
t3_1nersq0
/r/LocalLLaMA/comments/1nersq0/llama_builds_is_now_in_beta_pcpartpicker_for/
false
false
https://b.thumbs.redditm…xlMBT0HnK9oo.jpg
28
null
Local LLM Clusters for Long-Term Research
1
Hey all, I've been following some of the work recently that suggests that clusters/swarms of smaller models can perform better than larger individual models, and recently took a crack at a project, Kestrel, that tries to leverage this. The idea is to be a long-horizon research assistant. When researching topics where evidence and human synthesis is important, something I often find myself doing is using LLM tools in parallel to investigating more important things myself. For instance, using ChatGPT to do a scan of research on a particular topic while reading through individual papers in depth, or while planning out an experiment having it look into relevant libraries and use-cases in the background. In effect, having it do tasks are somewhat menial but involve heavy evidence/source exploration and synthesis, while you focus on more critical tasks that need human eyes. Something I found to be lacking was depth: deep research and similar models exist, but digging deeper and exploring tangential, supporting, or new topics requires human intervention and a somewhat involved iteration. Thus, the idea was to create a research assistant that you could feed tasks, and send out to explore a topic to your desired level of depth/branching over a day or so. For instance, you could have it run a trade study, and enable it to go beyond just datasheets but start looking into case studies, testimonials, evaluation criteria, and tweak it's approach as new information comes in. Every once in a while you could pop in, check progress, and tweak the path it's taking. Running locally, with a focus on smaller <70B models, would help with any data privacy concerns and just make it more accessible. Research tasks would be overseen by an orchestrator, basically a model with a configurable profile that tunes the approach towards the research such as the level of unique exploration. The project is still a heavy, heavy work in progress (I also definitely ned to clean it up), and while it has been initially interesting i'm looking for some guidance or feedback in terms of how to proceed. 1. Like with most long-term tasks, managing the increasing amount of context and still being able to correctly utilize it is a challenge. Trying to summarize or condense older findings only goes so far, and while RAG is good for storing information, some initial testing with it makes it not great for realizing that work has already been done, and shouldn't be duplicated. Is the solution here just to delegate harder, having more sub-models that focus on smaller tasks? 2. A lot of the work so far has been implemented "raw" without libraries, which has been nice for testing but will probably get unwieldy very fast. I've tried LangGraph + LangChain to abstract away both general stuff like tool use but also branching logic for the evaluator model, but it didn't end up performing incredibly well. Are there better options that i'm missing (i'm sure there are, but are there any that are reccomendable)? 3. I'm really concerned about the consistency of this tool: the way I see it for the intended use case if it lacks reliability it's worse than just doing everything by hand. So far i've been using Gemini 4b and 12b, with mixed results. Are there models that would be more appropriate for this task, or would I benefit from starting to explore initial fine-tuning? More importantly, what is good practice for implementing robust and automated testing, and ensuring that modifications don't cryptically. cause performance degradation? Thanks!
2025-09-12T02:30:29
https://github.com/dankeg/KestrelAI
OrangeLineEnjoyer
github.com
1970-01-01T00:00:00
0
{}
1nermng
false
null
t3_1nermng
/r/LocalLLaMA/comments/1nermng/local_llm_clusters_for_longterm_research/
false
false
default
1
{'enabled': False, 'images': [{'id': 'dJjetmMPMN1gyuw3JiECFuL7uyZHOpgqBzcCL459Hjg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dJjetmMPMN1gyuw3JiECFuL7uyZHOpgqBzcCL459Hjg.png?width=108&crop=smart&auto=webp&s=27c44d7e9395484defa55f4b354cc7d30d70d54d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/dJjetmMPMN1gyuw3JiECFuL7uyZHOpgqBzcCL459Hjg.png?width=216&crop=smart&auto=webp&s=ddc7c77190b611e4b90a5e2106793167ccf61136', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/dJjetmMPMN1gyuw3JiECFuL7uyZHOpgqBzcCL459Hjg.png?width=320&crop=smart&auto=webp&s=e8813323cc179d4dc83fa982308a98a8c9a5735e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/dJjetmMPMN1gyuw3JiECFuL7uyZHOpgqBzcCL459Hjg.png?width=640&crop=smart&auto=webp&s=68dba72682051b738294bf0fdb257ee9f6002838', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/dJjetmMPMN1gyuw3JiECFuL7uyZHOpgqBzcCL459Hjg.png?width=960&crop=smart&auto=webp&s=f3ce152a60d27b6f88fea6535e85efabfd55e237', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/dJjetmMPMN1gyuw3JiECFuL7uyZHOpgqBzcCL459Hjg.png?width=1080&crop=smart&auto=webp&s=f8e84e055781522bc40c2d38a6822bde2ddf7f85', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/dJjetmMPMN1gyuw3JiECFuL7uyZHOpgqBzcCL459Hjg.png?auto=webp&s=9e9627020005ee13db4005ff07c684c5a05d70a8', 'width': 1200}, 'variants': {}}]}
RAG papers are dropping like crazy this month — how do we even keep up?
85
My reading list is starting to look like a RAG graveyard. Just in the past few weeks we got: * **ToG² (MSR)** – retriever as a teacher for generators * **RARE (Tsinghua)** – multi-hop reasoning steps * **Meta-RAG (Meta)** – adaptive memory + retriever * **OminiThink (DeepSeek)** – retrieval + chain-of-thought * **CO-STORM** – multi-agent context voting * **FRAG** – fine-grained doc segmentation All sound great in papers… but which ones actually work on *private data* — the messy PDFs, internal knowledge bases, and APIs that real teams rely on? Is anyone tracking these variants in one place — like a scoreboard for RAG? Feels impossible to keep up otherwise. How are *you* picking which setups to actually trust?
2025-09-12T02:19:41
https://www.reddit.com/r/LocalLLaMA/comments/1nerets/rag_papers_are_dropping_like_crazy_this_month_how/
Cheryl_Apple
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nerets
false
null
t3_1nerets
/r/LocalLLaMA/comments/1nerets/rag_papers_are_dropping_like_crazy_this_month_how/
false
false
self
85
null
I uncovered an offline LLM admitting it “fudges” facts for less-knowledgeable users—full thought process inside
0
Sharing this for community awareness: while testing **qwen3:30b** (unquantized, run fully locally/offline via Ollama), I found clear evidence of **audience-dependent deception**. The model’s own internal reasoning included: > It fabricated launch dates, product names, and citations, then adjusted its honesty level based on perceived user expertise. For those interested in AI safety and model behavior, the attached text file contains the complete sequence of prompts, the model’s thought processes, and examples of its strategic fabrications. No further involvement from me—just passing this along so experts and enthusiasts can examine and learn from it. [Context Window w/ Thought Processes](https://docs.google.com/document/d/e/2PACX-1vQs_fxOJVYtc__hwESwbakPxFsjBwBPk56YwTC2z1DpSn0TxxUNP63Cx6qg-ZjPLXY9gZFFyCGiU2MU/pub)
2025-09-12T01:19:28
https://www.reddit.com/r/LocalLLaMA/comments/1neq7fv/i_uncovered_an_offline_llm_admitting_it_fudges/
Most-Actuator5862
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1neq7fv
false
null
t3_1neq7fv
/r/LocalLLaMA/comments/1neq7fv/i_uncovered_an_offline_llm_admitting_it_fudges/
false
false
self
0
null
Anyone Use Charlie Mnemonic?
1
I’m considering experimenting with Llama 3.1 70B with Charlie Mnemonic. Has anyone done this or used CM with other local models? If so, what was your experience like?
2025-09-12T01:15:50
https://www.reddit.com/r/LocalLLaMA/comments/1neq4pm/anyone_use_charlie_mnemonic/
MagicianAndMedium
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1neq4pm
false
null
t3_1neq4pm
/r/LocalLLaMA/comments/1neq4pm/anyone_use_charlie_mnemonic/
false
false
self
1
null
How to think about GPUs
109
[https://jax-ml.github.io/scaling-book/gpus/](https://jax-ml.github.io/scaling-book/gpus/)
2025-09-12T00:12:23
https://i.redd.it/guijqeuxlmof1.png
kaggleqrdl
i.redd.it
1970-01-01T00:00:00
0
{}
1neotp4
false
null
t3_1neotp4
/r/LocalLLaMA/comments/1neotp4/how_to_think_about_gpus/
false
false
default
109
{'enabled': True, 'images': [{'id': 'guijqeuxlmof1', 'resolutions': [{'height': 83, 'url': 'https://preview.redd.it/guijqeuxlmof1.png?width=108&crop=smart&auto=webp&s=7a579095bbc35e5f32da22cdfe01f06edf94bb77', 'width': 108}, {'height': 166, 'url': 'https://preview.redd.it/guijqeuxlmof1.png?width=216&crop=smart&auto=webp&s=1d47f2378b619fd9423d95a0984bf92a7c1d550c', 'width': 216}, {'height': 246, 'url': 'https://preview.redd.it/guijqeuxlmof1.png?width=320&crop=smart&auto=webp&s=3df020d8eeaf50559b95d69ffd69c9d47fbc2669', 'width': 320}, {'height': 492, 'url': 'https://preview.redd.it/guijqeuxlmof1.png?width=640&crop=smart&auto=webp&s=f009d812a34a8b621f1bb4fde4d159a864888777', 'width': 640}, {'height': 738, 'url': 'https://preview.redd.it/guijqeuxlmof1.png?width=960&crop=smart&auto=webp&s=43cffd79d17586028d1896a9685fe1df08ed7d19', 'width': 960}], 'source': {'height': 784, 'url': 'https://preview.redd.it/guijqeuxlmof1.png?auto=webp&s=bc47decec3555fa1d9c5adb3b84380ecc40915c4', 'width': 1019}, 'variants': {}}]}
Releasing LYRN very soon!
0
[https://www.youtube.com/watch?v=Id1IKHnABJI&t=30s](https://www.youtube.com/watch?v=Id1IKHnABJI&t=30s) Here is my crappy video missing audio. I need to fix it but ive been up all night trying to fix bugs. I have a couple more things to fix and then you guys get to try it out. Let me know what you think so far though. Encouragement helps when youre building free tools.
2025-09-12T00:07:54
https://www.reddit.com/r/LocalLLaMA/comments/1neoqe4/releasing_lyrn_very_soon/
PayBetter
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1neoqe4
false
null
t3_1neoqe4
/r/LocalLLaMA/comments/1neoqe4/releasing_lyrn_very_soon/
false
false
self
0
null
Llama.cpp - No context save-load for multimodal.
1
I’m currently solving this with middleware, counters, and rebuilds, but my workflow requires persistent context reset back to baseline at least every 10-20 generations due to hardware limitations. I’m squeezed pretty hard on options with 32GB Tegra. Anyone know a fork or branch with multimodal context features? Can ollama do it? Will that even work on Xavier? I’m using internvl3.5-14bq5 with 18-24k context. I suppose tuning would work way better but I don’t think I have the hardware for it or any knowledge at all.
2025-09-11T23:54:15
https://www.reddit.com/r/LocalLLaMA/comments/1neofri/llamacpp_no_context_saveload_for_multimodal/
Ok-Hawk-5828
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1neofri
false
null
t3_1neofri
/r/LocalLLaMA/comments/1neofri/llamacpp_no_context_saveload_for_multimodal/
false
false
self
1
null
Converting unstructured data into QA pairs for fine-tuning — how do you approach it?
1
Hey folks, I’ve recently started dipping my toes into fine-tuning, and honestly it’s been pretty fun. It also got me thinking: if I want to scale this beyond toy datasets, I need a more systematic way to turn a corpus of unstructured data (docs, text, code) into high-quality instruction–response QA pairs like a code instructional. So far, I’ve tried: 1. Curating examples with an LLM (prompt engineering + manual review) 2. Analyzing docs with an LLM to yield draft QA pairs 3. Hand-curation (tedious but higher quality) These methods work, but the process feels very manual and labor-intensive. I’m envisioning more of a pipeline that could eventually become self-fulfilling: generating, evaluating, refining, and expanding QA pairs in a loop. I’m curious: - How have you approached converting unstructured datasets into usable training pairs? We have a lot of documents in atlassian or google docs, and were written by different people with high and low quality. - Any workflows, tools you’ve found helpful when dealing with mixed text + code? The challenge I faced the mosts are parsing which is not consistent given the document content. Would love to hear your experiences (good or bad)
2025-09-11T23:04:25
https://www.reddit.com/r/LocalLLaMA/comments/1nendsv/converting_unstructured_data_into_qa_pairs_for/
maxlin780126
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nendsv
false
null
t3_1nendsv
/r/LocalLLaMA/comments/1nendsv/converting_unstructured_data_into_qa_pairs_for/
false
false
self
1
null
sohvie shield
0
Guys, my company made AI guardrail that uses unique system. It is ai logic tuner. check it out: www.sohvie.online
2025-09-11T22:52:02
https://www.reddit.com/r/LocalLLaMA/comments/1nen3su/sohvie_shield/
sexybeast525
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nen3su
false
null
t3_1nen3su
/r/LocalLLaMA/comments/1nen3su/sohvie_shield/
false
false
self
0
null
Just Starting
10
Just got into this world, went to micro center and spent a “small amount” of money on a new PC to realize I only have 16gb VRAM and that I might not be able to run local models? - NVIDIA RTX 5080 16GB GDDR7 - Samsung 9100 pro 2TB - Corsair Vengeance 2x32gb - AMD RYZEN 9 9950x CPU My whole idea was to have a PC to upgrade to the new Blackwell GPUs thinking they would release late 2026 (read in a press release) just to see them release a month later for $9,000. Could someone help me with my options? Do I just buy this behemoth GPU unit? Get the DGX spark for $4k and add it as an external? I did this instead of going Mac Studio Max which would have also been $4k. I want to build small models, individual use cases for some of my enterprise clients + expand my current portfolio offerings. Primarily accessible API creation / deployments at scale.
2025-09-11T22:40:56
https://www.reddit.com/r/LocalLLaMA/comments/1nemut2/just_starting/
OldRecommendation783
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nemut2
false
null
t3_1nemut2
/r/LocalLLaMA/comments/1nemut2/just_starting/
false
false
self
10
null
Datacenter GPU Cooling
0
i recently got an A40, but i have no clue how to cool it in a pc (has no front fans) any ideas?
2025-09-11T22:29:51
https://www.reddit.com/r/LocalLLaMA/comments/1nemlkg/datacenter_gpu_cooling/
Lucjanix
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nemlkg
false
null
t3_1nemlkg
/r/LocalLLaMA/comments/1nemlkg/datacenter_gpu_cooling/
false
false
self
0
null
Qwen3-VL coming ?
34
Transformers and sglang e qwen3-vl support pr has been opened, I wonder if qwen3-vl is coming [https://github.com/huggingface/transformers/pull/40795](https://github.com/huggingface/transformers/pull/40795) [https://github.com/sgl-project/sglang/pull/10323](https://github.com/sgl-project/sglang/pull/10323)
2025-09-11T22:16:55
https://www.reddit.com/r/LocalLLaMA/comments/1nemazt/qwen3vl_coming/
NeuralNakama
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nemazt
false
null
t3_1nemazt
/r/LocalLLaMA/comments/1nemazt/qwen3vl_coming/
false
false
self
34
{'enabled': False, 'images': [{'id': 'WmIZZLYdo41uN4s96YqW_5HlL8MG-0LtKmnFoOx7RwY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WmIZZLYdo41uN4s96YqW_5HlL8MG-0LtKmnFoOx7RwY.png?width=108&crop=smart&auto=webp&s=e4be73fbdd444f7c3bd69271de3a5adc25389ffa', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/WmIZZLYdo41uN4s96YqW_5HlL8MG-0LtKmnFoOx7RwY.png?width=216&crop=smart&auto=webp&s=4f1fd5d93daa31e47c586ead17146072f1760ad4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/WmIZZLYdo41uN4s96YqW_5HlL8MG-0LtKmnFoOx7RwY.png?width=320&crop=smart&auto=webp&s=3ad62cd24e733e4d152074803f17331011529942', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/WmIZZLYdo41uN4s96YqW_5HlL8MG-0LtKmnFoOx7RwY.png?width=640&crop=smart&auto=webp&s=fafac68575ffd3b34262cbfa9c59fc0dcef20103', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/WmIZZLYdo41uN4s96YqW_5HlL8MG-0LtKmnFoOx7RwY.png?width=960&crop=smart&auto=webp&s=d7d8962ff5e13e0d027649db81d18569e83bc752', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/WmIZZLYdo41uN4s96YqW_5HlL8MG-0LtKmnFoOx7RwY.png?width=1080&crop=smart&auto=webp&s=5d27e7b8a4a5cf25f8069022b919260e927cf0f0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/WmIZZLYdo41uN4s96YqW_5HlL8MG-0LtKmnFoOx7RwY.png?auto=webp&s=4351c733d439c9162f3a309900f36e91bee3b31b', 'width': 1200}, 'variants': {}}]}
[success] VLLM with new Docker build from ROCm! 6x7900xtx + 2xR9700!
6
**Just share successful launch guide for mixed AMD cards.** 1. sort gpu layers, 0,1 will R9700, next others will 7900xtx 2. use docker image rocm/vllm-dev:nightly\_main\_20250911 3. use this env vars     &#8203;   - HIP_VISIBLE_DEVICES=6,0,1,5,2,3,4,7       - VLLM_USE_V1=1       - VLLM_CUSTOM_OPS=all       - NCCL_DEBUG=ERROR       - PYTORCH_HIP_ALLOC_CONF=expandable_segments:True       - VLLM_ROCM_USE_AITER=0       - NCCL_P2P_DISABLE=1       - SAFETENSORS_FAST_GPU=1       - PYTORCH_TUNABLEOP_ENABLED **launch command \`vllm serve \` add arguments:**         --gpu-memory-utilization 0.95         --tensor-parallel-size 8         --enable-chunked-prefill         --max-num-batched-tokens 4096         --max-num-seqs 8 **4-5 minutes of loading and it works!** **Issues / Warnings:** 1. high voltage usage when idle, it uses 90-90W 2. high gfx\_clk usage in idle [idle](https://preview.redd.it/bz9h59eholof1.png?width=1986&format=png&auto=webp&s=ea5cb0573d9cbf95c3bac0e8f8dec1104ea4a0e1) [inference](https://preview.redd.it/9y5irvuqllof1.jpg?width=1280&format=pjpg&auto=webp&s=a48caeea8adb7dc892262283706e5db21d82ec9a) **Inference speed on single small request for Qwen3-235B-A22B-GPTQ-Int4 is \~22-23 t/s** **prompt** *Use HTML to simulate the scenario of a small ball released from the center of a rotating hexagon. Consider the collision between the ball and the hexagon's edges, the gravity acting on the ball, and assume all collisions are perfectly elastic. AS ONE FILE* **max\_model\_len = 65,536, -tp 8, loading time \~12 minutes** |parallel requests |Inference Speed |1x Speed | |:-|:-|:-| |1 (stable)|22.5 t/s|22.5 t/s| |2 (stable)|40 t/s|20 t/s (12% loss)| |4 (request randomly dropped)|51.6 t/s|12.9 t/s (-42% loss)| **max\_model\_len = 65,536, -tp 2 -pp 4, loading time 3 mnutes** |parallel requests |Inference Speed |1x Speed | |:-|:-|:-| |1 (stable)|12.7 t/s|12.7 t/s| |2 (stable)|17.6 t/s|8.8 t/s (30% loss)| |4 (stable)|29.6 t/s|7.4 t/s (-41% loss)| |8 (stable)|48.8 t/s|6.1 t/s (-51% loss)| **max\_model\_len = 65,536, -tp 4 -pp 2, loading time 5 mnutes** |parallel requests |Inference Speed |1x Speed | |:-|:-|:-| |1 (stable)|16.8 t/s|16.8 t/s| |2 (stable)|28.2 t/s|14.1 t/s (-16% loss)| |4 (stable)|39.6 t/s|9.9 t/s (-41% loss)| |8 (stuck after 20% generated)|62 t/s|7.75 t/s (-53% loss)| **BONUS: full context on -tp 8 for qwen3-coder-30b-a3b-fp16** |Amount of requests|Inference Speed|1x Speed| |:-|:-|:-| |1x|45 t/s|45| |2x|81 t/s|40.5 (10% loss)| |4x|152 t/s|38 (16% loss)| |6x|202 t/s|33.6 (25% loss)| |8x|275 t/s|34.3 (23% loss)|
2025-09-11T21:46:47
https://www.reddit.com/r/LocalLLaMA/comments/1nellgc/success_vllm_with_new_docker_build_from_rocm/
djdeniro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nellgc
false
null
t3_1nellgc
/r/LocalLLaMA/comments/1nellgc/success_vllm_with_new_docker_build_from_rocm/
false
false
https://b.thumbs.redditm…uvKRXDlzW_jQ.jpg
6
null
Thoughts on Memory Pooling with Multiple GPUs vs. Going With a Single Big Card
2
Been thinking a lot lately about setups for large models, especially how memory pooling (or fast inter-GPU communication) compares with simply stacking up multiple consumer GPUs that don’t share memory. Even with a monster like the RTX 5090, there are cases where you lose a lot without proper pooling / peer-to-peer. ⸻ What I mean by “pooling memory” & “fast interconnect” • Memory pooling = multiple GPUs acting as if they share one big VRAM pool. • Fast interconnect = NVLink or similar high-speed links that make GPU-to-GPU transfers efficient. • Without it, you’re stuck with PCIe, which is slower and adds latency. ⸻ Why it matters — losses with no pooling Even with a top card like the 5090 (or 4090, 3090, etc.), you hit problems: • Batch size limits → If your workload needs more VRAM than the card has, you’re forced to shard models or shrink batches. • Communication overhead → Without NVLink, GPUs talk over PCIe, which slows down training/inference. • Idle compute units → GPUs sit around waiting for data. • Scaling loss → Instead of 2× with two GPUs, you often see only ~1.6×–1.8×, sometimes worse. ⸻ The trade-offs Single big GPU (e.g. 5090): • Pros: Simple, no interconnect issues, max utilization. • Cons: VRAM ceiling still applies (32 GB), expensive. Multiple GPUs with NVLink / pooling: • Pros: Larger effective memory, good scaling. • Cons: Only on pro/datacenter cards, more cost. Multiple GPUs without pooling (consumer cards): • Pros: Cheaper FLOPs, flexibility. • Cons: Bad scaling, wasted performance, complexity. ⸻ Which GPUs actually support pooling / NVLink Support NVLink / pooling (good): • RTX 3090 / 3090 Ti (2-way NVLink) • RTX A-series / workstation cards (A4500, A5000, A6000, etc.) • Datacenter cards (A100, H100, etc., with NVLink / NVSwitch) No NVLink / no pooling (weak): • RTX 40-series consumer cards (4090, 4080, etc.) • RTX 50-series consumer cards (5090, etc.) • Most older/lower consumer cards (SLI ≠ true pooling) Some people say sharding is the answer but • Sharding = slicing the model across GPUs and paying communication overhead. • On non-pooling GPUs (like 2080, 3090, 4090, 5090), sharding lets you run bigger models, but at the cost of speed, efficiency, and simplicity. If you have something to add please do, if you want to downvote please share benchmarks, research papers or something valid. This is not my opinion this is summarized common knowledge.If you get near linear scalability with 2 consumer cards , share your setup. This is the only thing that prevents me from saving money and going with 2-3 4090s
2025-09-11T21:46:10
https://www.reddit.com/r/LocalLLaMA/comments/1nelky7/thoughts_on_memory_pooling_with_multiple_gpus_vs/
AggravatingGiraffe46
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nelky7
false
null
t3_1nelky7
/r/LocalLLaMA/comments/1nelky7/thoughts_on_memory_pooling_with_multiple_gpus_vs/
false
false
self
2
null
VoltageGPU
1
[removed]
2025-09-11T21:31:16
https://www.reddit.com/r/LocalLLaMA/comments/1nel88s/voltagegpu/
VoltageGPU
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nel88s
false
null
t3_1nel88s
/r/LocalLLaMA/comments/1nel88s/voltagegpu/
false
false
self
1
null
LM Studio: Vulkan runtime causing blue screen of death when unloading models in Windows
2
Has anyone experienced this before? I have never been able to use Vulkan because it'll keep crashing my PC. As far as I understand it's the only way to run AMD + Nvidia GPUs together though, and I am getting a Ryzen 395 128gb machine soon to pair with my 96gb in nvidia gpus.
2025-09-11T21:28:21
https://www.reddit.com/r/LocalLLaMA/comments/1nel5rz/lm_studio_vulkan_runtime_causing_blue_screen_of/
Goldkoron
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nel5rz
false
null
t3_1nel5rz
/r/LocalLLaMA/comments/1nel5rz/lm_studio_vulkan_runtime_causing_blue_screen_of/
false
false
self
2
null
VoltageGPU
1
[removed]
2025-09-11T21:27:57
https://www.reddit.com/r/LocalLLaMA/comments/1nel5fv/voltagegpu/
VoltageGPU
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nel5fv
false
null
t3_1nel5fv
/r/LocalLLaMA/comments/1nel5fv/voltagegpu/
false
false
self
1
null
Hundreds of frontier open-source models in vscode/copilot
22
Hugging Face just released a vscode extension to run Qwen3 Next, Kimi K2, gpt-oss, Aya, GLM 4.5, Deepseek 3.1, Hermes 4 and all the open-source models directly into VSCode & Copilot chat. Open weights means models you can truly own them, so they’ll never get nerfed or taken away from you! [https://marketplace.visualstudio.com/items?itemName=HuggingFace.huggingface-vscode-chat](https://marketplace.visualstudio.com/items?itemName=HuggingFace.huggingface-vscode-chat)
2025-09-11T21:16:59
https://i.redd.it/lltad6smqlof1.png
clem59480
i.redd.it
1970-01-01T00:00:00
0
{}
1nekvzj
false
null
t3_1nekvzj
/r/LocalLLaMA/comments/1nekvzj/hundreds_of_frontier_opensource_models_in/
false
false
default
22
{'enabled': True, 'images': [{'id': 'lltad6smqlof1', 'resolutions': [{'height': 107, 'url': 'https://preview.redd.it/lltad6smqlof1.png?width=108&crop=smart&auto=webp&s=994b7487756cbd31bab56cac00cb071218f4314d', 'width': 108}, {'height': 214, 'url': 'https://preview.redd.it/lltad6smqlof1.png?width=216&crop=smart&auto=webp&s=6d56ee8145156f061bfb7039fb66ca6d2598e4f7', 'width': 216}, {'height': 318, 'url': 'https://preview.redd.it/lltad6smqlof1.png?width=320&crop=smart&auto=webp&s=ac04b3cba5b8bbbe8858a298b57ad837680a329e', 'width': 320}, {'height': 636, 'url': 'https://preview.redd.it/lltad6smqlof1.png?width=640&crop=smart&auto=webp&s=65a48c136969cc003e200d829d5be3ee29686968', 'width': 640}], 'source': {'height': 778, 'url': 'https://preview.redd.it/lltad6smqlof1.png?auto=webp&s=71802a8d4dc2a188be239d90ba144e7a94b152c7', 'width': 782}, 'variants': {}}]}
I Trained an AI to rewrite text like Nietzsche. Turned out pretty funny.
4
I like writing, and I like AI. But **because of AI's writing style, I and many other people have been unwilling to use these text generators for our actual writing,** which is absurd. So today I'm open-sourcing **a proof-of-concept LLM**, trained to write like a specific person from history — the German philosopher, **Friedrich Nietzsche**! # Model link: [https://huggingface.co/Heralax/RewriteLikeMe-FriedrichNietzsche](https://huggingface.co/Heralax/RewriteLikeMe-FriedrichNietzsche) (The model page includes the original LoRA, as well as the merged model files, and those same model files quantized to q8) # Running it **You have options:** * You can take the normal-format LoRA files and run them as normal with your favorite inference backend. Base model == Mistral 7b v0.2. Running LoRAs is not as common as full models these days, so here are some instructions: 1. Download adapter\_config, adapter\_model, chat\_template, config, any anything with "token" in the name 2. Put them all in the same directory 3. Download Mistral 7b v0.2 (.safetensors and its accompanying config files etc., not a quant like .gguf). Put all these in another dir. 4. Use inference software like the text-generation-webui and point it at that directory. It should know what to do. For instance, in textgenwebui/ooba you'll see a selector called "LoRA(s)" next to the model selector, to the right of the Save settings button. First pick the base model, then pick the LoRA to apply to it. * Alternatively, lora files can actually be quantized with llama.cpp -- see `convert_lora_to_gguf.py`. The result + a quantized mistral 7b v0.2 can be run with koboldcpp easily enough. * If you want to use quantized LoRA files, which honestly is ideal because no one wants to run anything in f16, KoboldCPP supports this kind of inference. I have not found many others that do. * Alternatively, you can **take the quantized full model files** (the base model with the LoRA merged onto it) and run them as you would any other local LLM. It's a q8 7b so it should be relatively easy to manage on most hardware. * Or take the merged model files still in .safetensors format, and prepare them in whatever format you like (e.g., exllama, gptq, or just leave them as is for inference and use with vLLM or something) Since you have the model files in pretty much any format you can imagine, **you can use all the wonderful tricks devised by the open source community** to make this thing **ance the way you want it to!** Please let me know if you come across any awesome sampling parameter improvements actually, I haven't iterated too much there. Anyway, by taking one of these routes you ought to be able to start rephrasing AI text to sound like Nietzsche! Since you have the original lora, **you could possibly also do things like do additional training or merge with RP models,** which could, possibly (have not tried it) produce character-specific RP bots. Lots of exciting options! Now for a brief moment I need to talk about the slightly-less-exciting subject of where things will break. This system ain't perfect yet. # Rough Edges One of my goals was to be able to train this model, and future models like it, while using very little text from the original authors. Hunting down input data is annoying after all! I managed to achieve this, but the corners I cut are still a little rough: 1. Expect having to re-roll the occasional response when it goes off the rails. Because I trained on a very small amount of data that was remixed in a bunch of ways, some memorization crept in despite measures to the contrary. 2. This model can **only rephrase AI-written text to sound like a person**. It cannot write the original draft of some text by itself yet. It is a rephraser, not a writer. 3. Finally, to solve the problem where the LLM might veer off topic if the thing it is rephrasing is too long, I recommend breaking longer texts up into chunks of smaller ones. 4. The model will be more adept at rephrasing text more or less in the same area as the original data was written in. This Nietzche model will therefore be more apt at rephrasing critical philosophically-oriented things than it would fiction, say. Feeding very out of domain things to the model will still probably work, it's just that the model has to guess a bit more, and therefore might sound less convincing. Note: the prompt you **must use**, and some good-ish sampling parameters, are provided as well. This model is very overfit on the specific system prompt so don't use a different one. Also, there's a funny anecdote from training I want to share: hilariously, the initial training loss for certain people is MUCH higher than others. Friedrich Nietzsche's training run starts off like a good **1.0 or 0.5 loss higher than someone like Paul Graham.** This is a significant increase! Which makes sense given his unique style. I hope you find this proof of concept interesting, and possibly entertaining! I also hope that the model files are useful, and that they serve as good fodder for experiments if you do that sorta thing as well. The problem of awful LLM writing styles has had a lot of progress made on it over the years due to a lot of people here in this community, but the challenge of cloning specific styles is sometimes underappreciated and underserved. Especially since I need the AI to write like *me* if I'm going to, say, use it to write work emails. This is meant as a first step in that direction. In case you've had to scroll down a lot because of my rambling, here's the model link again [https://huggingface.co/Heralax/RewriteLikeMe-FriedrichNietzsche](https://huggingface.co/Heralax/RewriteLikeMe-FriedrichNietzsche) Thank you for your time, I hope you enjoy the model! Please consider checking it out on Hugging Face :)
2025-09-11T21:11:26
https://www.reddit.com/gallery/1nekr42
Heralax_Tekran
reddit.com
1970-01-01T00:00:00
0
{}
1nekr42
false
null
t3_1nekr42
/r/LocalLLaMA/comments/1nekr42/i_trained_an_ai_to_rewrite_text_like_nietzsche/
false
false
https://b.thumbs.redditm…QCeA0JwNtcwc.jpg
4
null
I Trained an AI to rewrite text like Nietzsche. Turned out pretty funny.
79
I like writing, and I like AI. But **because of AI's writing style, I and many other people have been unwilling to use these text generators for our actual writing,** which is absurd. So today I'm open-sourcing **a proof-of-concept LLM**, trained to write like a specific person from history — the German philosopher, **Friedrich Nietzsche**! # Model link: [https://huggingface.co/Heralax/RewriteLikeMe-FriedrichNietzsche](https://huggingface.co/Heralax/RewriteLikeMe-FriedrichNietzsche) (The model page includes the original LoRA, as well as the merged model files, and those same model files quantized to q8) # Running it **You have options:** * You can take the normal-format LoRA files and run them as normal with your favorite inference backend. Base model == Mistral 7b v0.2. Running LoRAs is not as common as full models these days, so here are some instructions: 1. Download adapter\_config, adapter\_model, chat\_template, config, any anything with "token" in the name 2. Put them all in the same directory 3. Download Mistral 7b v0.2 (.safetensors and its accompanying config files etc., not a quant like .gguf). Put all these in another dir. 4. Use inference software like the text-generation-webui and point it at that directory. It should know what to do. For instance, in textgenwebui/ooba you'll see a selector called "LoRA(s)" next to the model selector, to the right of the Save settings button. First pick the base model, then pick the LoRA to apply to it. * Alternatively, lora files can actually be quantized with llama.cpp -- see `convert_lora_to_gguf.py`. The result + a quantized mistral 7b v0.2 can be run with koboldcpp easily enough. * If you want to use quantized LoRA files, which honestly is ideal because no one wants to run anything in f16, KoboldCPP supports this kind of inference. I have not found many others that do. * Alternatively, you can **take the quantized full model files** (the base model with the LoRA merged onto it) and run them as you would any other local LLM. It's a q8 7b so it should be relatively easy to manage on most hardware. * Or take the merged model files still in .safetensors format, and prepare them in whatever format you like (e.g., exllama, gptq, or just leave them as is for inference and use with vLLM or something) Since you have the model files in pretty much any format you can imagine, **you can use all the wonderful tricks devised by the open source community** to make this thing **ance the way you want it to!** Please let me know if you come across any awesome sampling parameter improvements actually, I haven't iterated too much there. Anyway, by taking one of these routes you ought to be able to start rephrasing AI text to sound like Nietzsche! Since you have the original lora, **you could possibly also do things like do additional training or merge with RP models,** which could, possibly (have not tried it) produce character-specific RP bots. Lots of exciting options! Now for a brief moment I need to talk about the slightly-less-exciting subject of where things will break. This system ain't perfect yet. # Rough Edges One of my goals was to be able to train this model, and future models like it, while using very little text from the original authors. Hunting down input data is annoying after all! I managed to achieve this, but the corners I cut are still a little rough: 1. Expect having to re-roll the occasional response when it goes off the rails. Because I trained on a very small amount of data that was remixed in a bunch of ways, some memorization crept in despite measures to the contrary. 2. This model can **only rephrase AI-written text to sound like a person**. It cannot write the original draft of some text by itself yet. It is a rephraser, not a writer. 3. Finally, to solve the problem where the LLM might veer off topic if the thing it is rephrasing is too long, I recommend breaking longer texts up into chunks of smaller ones. 4. The model will be more adept at rephrasing text more or less in the same area as the original data was written in. This Nietzche model will therefore be more apt at rephrasing critical philosophically-oriented things than it would fiction, say. Feeding very out of domain things to the model will still probably work, it's just that the model has to guess a bit more, and therefore might sound less convincing. Note: the prompt you **must use**, and some good-ish sampling parameters, are provided as well. This model is very overfit on the specific system prompt so don't use a different one. Also, there's a funny anecdote from training I want to share: hilariously, the initial training loss for certain people is MUCH higher than others. Friedrich Nietzsche's training run starts off like a good **1.0 or 0.5 loss higher than someone like Paul Graham.** This is a significant increase! Which makes sense given his unique style. I hope you find this proof of concept interesting, and possibly entertaining! I also hope that the model files are useful, and that they serve as good fodder for experiments if you do that sorta thing as well. The problem of awful LLM writing styles has had a lot of progress made on it over the years due to a lot of people here in this community, but the challenge of cloning specific styles is sometimes underappreciated and underserved. Especially since I need the AI to write like *me* if I'm going to, say, use it to write work emails. This is meant as a first step in that direction. In case you've had to scroll down a lot because of my rambling, here's the model link again [https://huggingface.co/Heralax/RewriteLikeMe-FriedrichNietzsche](https://huggingface.co/Heralax/RewriteLikeMe-FriedrichNietzsche) Thank you for your time, I hope you enjoy the model! Please consider checking it out on Hugging Face :)
2025-09-11T21:11:17
https://www.reddit.com/gallery/1nekqze
Heralax_Tekran
reddit.com
1970-01-01T00:00:00
0
{}
1nekqze
false
null
t3_1nekqze
/r/LocalLLaMA/comments/1nekqze/i_trained_an_ai_to_rewrite_text_like_nietzsche/
false
false
https://b.thumbs.redditm…DJiWbucDMs-I.jpg
79
null
It's genuinely impressive how unwanted AMD cards are; Not even the Chinese (under the US GPU ban) want them!
0
I'm not sure how many here have seen GN's video on the GPU ban in China, but something that stood out is that **nobody wants AMD cards.** **They may cost $200** (MI50) **compared to $4000** (modified 4090)**, and they still don't want the damn things.** Why? Several sources tell Tech Jesus that VRAM is what matters most, since you can out up a card loading a model slowly, but a card with low memory can't run large models at all. Like, I get why consumers like us don't want them (no Windows, and even the Linux drivers suck), but I'm impressed that the Chinese, who are under the pressure of the US' export restrictions, would rather anything other than picking up an AMD card. The video in question, if you haven't seen it: [https://www.youtube.com/watch?v=1H3xQaf7BFI](https://www.youtube.com/watch?v=1H3xQaf7BFI)
2025-09-11T20:54:04
https://www.reddit.com/r/LocalLLaMA/comments/1nekbbe/its_genuinely_impressive_how_unwanted_amd_cards/
HugoCortell
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nekbbe
false
null
t3_1nekbbe
/r/LocalLLaMA/comments/1nekbbe/its_genuinely_impressive_how_unwanted_amd_cards/
false
false
self
0
null
How do you actually test new local models for your own tasks?
6
Beyond leaderboards and toy checks like “how many r’s in strawberries?”, how do you decide a model is worth switching to for your real workload? Would love to see the practical setups, rules of thumb – that help you say this model is good.
2025-09-11T20:29:19
https://www.reddit.com/r/LocalLLaMA/comments/1nejogz/how_do_you_actually_test_new_local_models_for/
Fabulous_Pollution10
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nejogz
false
null
t3_1nejogz
/r/LocalLLaMA/comments/1nejogz/how_do_you_actually_test_new_local_models_for/
false
false
self
6
null
Qwen Next Is A Preview Of Qwen3.5👀
502
After experimenting with Qwen3 Next, it's a very impressive model. It does have problems with sycophancy and coherence- but it's fast, smart and it's long context performance is solid. Awesome stuff from the Tongyi Lab!
2025-09-11T20:26:30
https://i.redd.it/hddap3b9hlof1.png
Few_Painter_5588
i.redd.it
1970-01-01T00:00:00
0
{}
1nejluw
false
null
t3_1nejluw
/r/LocalLLaMA/comments/1nejluw/qwen_next_is_a_preview_of_qwen35/
false
false
default
502
{'enabled': True, 'images': [{'id': 'hddap3b9hlof1', 'resolutions': [{'height': 104, 'url': 'https://preview.redd.it/hddap3b9hlof1.png?width=108&crop=smart&auto=webp&s=5b7ae4ed88024c2be56a04a7b28fc3a902da03c0', 'width': 108}, {'height': 209, 'url': 'https://preview.redd.it/hddap3b9hlof1.png?width=216&crop=smart&auto=webp&s=80b8dc6cffb529ab9963422c5c27e03c3023c71f', 'width': 216}, {'height': 310, 'url': 'https://preview.redd.it/hddap3b9hlof1.png?width=320&crop=smart&auto=webp&s=d4010e4bd842c695fbde9f8cb16af0bee5e5f958', 'width': 320}, {'height': 620, 'url': 'https://preview.redd.it/hddap3b9hlof1.png?width=640&crop=smart&auto=webp&s=80ea94871e4f36ff1a26b9ef506a7c93ef43d580', 'width': 640}, {'height': 930, 'url': 'https://preview.redd.it/hddap3b9hlof1.png?width=960&crop=smart&auto=webp&s=18bc8931e45e77290c609505edb418712d02ae32', 'width': 960}, {'height': 1046, 'url': 'https://preview.redd.it/hddap3b9hlof1.png?width=1080&crop=smart&auto=webp&s=dbef26502e40d4762d9fa464b8c1f51371e6e47b', 'width': 1080}], 'source': {'height': 1140, 'url': 'https://preview.redd.it/hddap3b9hlof1.png?auto=webp&s=e50e24db9b06b9302add132180bf11c42be08486', 'width': 1176}, 'variants': {}}]}
GPT-5, the smartest, fastest, and most useful model yet, with thinking built in. Available for everyone
0
2025-09-11T20:19:16
https://i.redd.it/kmlcvqsgglof1.jpeg
Simonex200
i.redd.it
1970-01-01T00:00:00
0
{}
1nejf78
false
null
t3_1nejf78
/r/LocalLLaMA/comments/1nejf78/gpt5_the_smartest_fastest_and_most_useful_model/
false
false
default
0
{'enabled': True, 'images': [{'id': 'kmlcvqsgglof1', 'resolutions': [{'height': 107, 'url': 'https://preview.redd.it/kmlcvqsgglof1.jpeg?width=108&crop=smart&auto=webp&s=1bd04c2bd0edb6f131a6f7b1907f43c40a594538', 'width': 108}, {'height': 215, 'url': 'https://preview.redd.it/kmlcvqsgglof1.jpeg?width=216&crop=smart&auto=webp&s=87217c868c09a2bef29845c9d44df0a25ec9883d', 'width': 216}, {'height': 319, 'url': 'https://preview.redd.it/kmlcvqsgglof1.jpeg?width=320&crop=smart&auto=webp&s=cd11b300790ef9a773dff0d33d9906af801decba', 'width': 320}, {'height': 639, 'url': 'https://preview.redd.it/kmlcvqsgglof1.jpeg?width=640&crop=smart&auto=webp&s=e25d7fd8baed6f897a1aba049b3a63082326f1c3', 'width': 640}, {'height': 959, 'url': 'https://preview.redd.it/kmlcvqsgglof1.jpeg?width=960&crop=smart&auto=webp&s=ec988a169e3200f08302336378f80eb878a1d14c', 'width': 960}, {'height': 1079, 'url': 'https://preview.redd.it/kmlcvqsgglof1.jpeg?width=1080&crop=smart&auto=webp&s=882576f417d755bb2ef8ab73e1b9b603f3d5f8bd', 'width': 1080}], 'source': {'height': 1337, 'url': 'https://preview.redd.it/kmlcvqsgglof1.jpeg?auto=webp&s=47259db6cbed3d844e031c40825da0a2602fed9c', 'width': 1338}, 'variants': {}}]}
Powering GPUs with an extra power supply
11
I got my hands on some additional V100s. Sadly the PSUs in my workstations cannot fully power more than one at the same time. Instead of running two full blown PC PSUs to power multiple GPUs in one workstation I thought why not buy some PCIe 6+2 cables and use one of my 12 V 600 W power supplies (grounded to the chassis so that it shares ground with the PC PSU) to supply the required \~200 W to each card (75 W come from the PC PSU via the PCI pins). My question is: has anyone here tried something like this? I am a bit hesistant since I am unsure what kind of ripple/instability/voltage fluctuations the cards can handle and how the 12 V supply compares to the 12 V delivered by a "real" PC PSU. I can obviously add a capacitor in parallel to smooth things out, but I would have to know what kind of spikes, dips I have to filter out.
2025-09-11T20:08:48
https://www.reddit.com/r/LocalLLaMA/comments/1nej5b6/powering_gpus_with_an_extra_power_supply/
DevestatingHemorhoid
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nej5b6
false
null
t3_1nej5b6
/r/LocalLLaMA/comments/1nej5b6/powering_gpus_with_an_extra_power_supply/
false
false
self
11
null
RX9070 vs M4 pro 20core GPU speed comparison
3
Just to share a datapoint, I tried open AI oss 20b q4 quantitization. 9070 can easily respond at 103~110 tps, while m4 pro is only 67tps. So 9070 (non-XT) is +50~60% faster! Ofc that’s only when both models can be fully offloaded to GPU.
2025-09-11T20:05:35
https://www.reddit.com/r/LocalLLaMA/comments/1nej2e7/rx9070_vs_m4_pro_20core_gpu_speed_comparison/
Only_Comfortable_224
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nej2e7
false
null
t3_1nej2e7
/r/LocalLLaMA/comments/1nej2e7/rx9070_vs_m4_pro_20core_gpu_speed_comparison/
false
false
self
3
null
Is the QWEN3-A3B-32B still the best general-purpose model for my machine?
8
I only have 8GB VRAM plus 32GB RAM.
2025-09-11T20:03:43
https://www.reddit.com/r/LocalLLaMA/comments/1nej0o5/is_the_qwen3a3b32b_still_the_best_generalpurpose/
9acca9
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nej0o5
false
null
t3_1nej0o5
/r/LocalLLaMA/comments/1nej0o5/is_the_qwen3a3b32b_still_the_best_generalpurpose/
false
false
self
8
null
Python agent framework focused on library integration (not tools)
7
I've been exploring agentic architectures and felt that the tool-calling loop, while powerful, led to unnecessary abstraction between the libraries I wanted to use and the agent. So, I've been building an open-source alternative called [agex](https://ashenfad.github.io/agex/). The core idea is to bypass the tool-layer and give agents direct, sandboxed access to Python libraries. The agent "thinks-in-code" and can compose functions, classes, and methods from the modules you give it. The project is somewhere in-between toy and production-ready, but I'd love feedback from folks interested in kicking the tires. It's closest cousin is Huggingface's smol-agents, but again, with an emphasis on library integration. Some links: * A [video of a NiceGUI integration](https://www.youtube.com/watch?v=-LaY_QBfkf8) where the agent creates UI on the fly * A [notebook](https://ashenfad.github.io/agex/examples/routing/) for routing via OSMnx & Folium * A [small benchmark](https://github.com/ashenfad/agex/blob/main/benchmarks/excel_analysis_bench.py) testing the qwen3 family of models (qwen3-coder works well) * Project repo: [https://github.com/ashenfad/agex](https://github.com/ashenfad/agex) * Repo for NiceGUI demos: [https://github.com/ashenfad/agex-ui](https://github.com/ashenfad/agex-ui) Thanks!
2025-09-11T18:55:27
https://www.reddit.com/r/LocalLLaMA/comments/1neh91h/python_agent_framework_focused_on_library/
Impressive-Glass-523
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1neh91h
false
null
t3_1neh91h
/r/LocalLLaMA/comments/1neh91h/python_agent_framework_focused_on_library/
false
false
self
7
null
Qwen3-Next-80B-A3B
1
There's a thinking and an instruct version, no hybrid models as they said :)
2025-09-11T18:46:52
https://huggingface.co/collections/Qwen/qwen3-next-68c25fd6838e585db8eeea9d
k-en
huggingface.co
1970-01-01T00:00:00
0
{}
1neh0xk
false
null
t3_1neh0xk
/r/LocalLLaMA/comments/1neh0xk/qwen3next80ba3b/
false
false
default
1
null
Qwen3 Next
1
[removed]
2025-09-11T18:46:24
https://i.redd.it/bor6km2wzkof1.jpeg
Impressive_Half_2819
i.redd.it
1970-01-01T00:00:00
0
{}
1neh0ie
false
null
t3_1neh0ie
/r/LocalLLaMA/comments/1neh0ie/qwen3_next/
false
false
default
1
{'enabled': True, 'images': [{'id': 'bor6km2wzkof1', 'resolutions': [{'height': 115, 'url': 'https://preview.redd.it/bor6km2wzkof1.jpeg?width=108&crop=smart&auto=webp&s=ce4d4947ee61a625c04d8b18cf0c57c8b98f9e41', 'width': 108}, {'height': 230, 'url': 'https://preview.redd.it/bor6km2wzkof1.jpeg?width=216&crop=smart&auto=webp&s=f0a86425645effd1c1fd1fef180104fb81042c0f', 'width': 216}, {'height': 341, 'url': 'https://preview.redd.it/bor6km2wzkof1.jpeg?width=320&crop=smart&auto=webp&s=c78bebd1c30850c98b3f0c636ae80fea95b1cde7', 'width': 320}, {'height': 682, 'url': 'https://preview.redd.it/bor6km2wzkof1.jpeg?width=640&crop=smart&auto=webp&s=49abb9fc2752b75382179b47dc5d0b0742ef5ac6', 'width': 640}, {'height': 1023, 'url': 'https://preview.redd.it/bor6km2wzkof1.jpeg?width=960&crop=smart&auto=webp&s=08710a19827f1447214958b2ec8c07d24adcacda', 'width': 960}, {'height': 1151, 'url': 'https://preview.redd.it/bor6km2wzkof1.jpeg?width=1080&crop=smart&auto=webp&s=a210a70b5278b5413c96dd5385fb597bcd5b059b', 'width': 1080}], 'source': {'height': 1600, 'url': 'https://preview.redd.it/bor6km2wzkof1.jpeg?auto=webp&s=c9261a69d410c7be890ff2f805f5c65257a0acb0', 'width': 1501}, 'variants': {}}]}
Qwen3-Next-80B-A3B
1
[removed]
2025-09-11T18:45:00
https://i.redd.it/5h7yt24nzkof1.jpeg
Impressive_Half_2819
i.redd.it
1970-01-01T00:00:00
0
{}
1negz85
false
null
t3_1negz85
/r/LocalLLaMA/comments/1negz85/qwen3next80ba3b/
false
false
default
1
{'enabled': True, 'images': [{'id': '5h7yt24nzkof1', 'resolutions': [{'height': 115, 'url': 'https://preview.redd.it/5h7yt24nzkof1.jpeg?width=108&crop=smart&auto=webp&s=41eead895ad5f019af5122167914f210a7539e6d', 'width': 108}, {'height': 230, 'url': 'https://preview.redd.it/5h7yt24nzkof1.jpeg?width=216&crop=smart&auto=webp&s=8ea263ff84b0ec57e08cb8c7bff8298daf972611', 'width': 216}, {'height': 341, 'url': 'https://preview.redd.it/5h7yt24nzkof1.jpeg?width=320&crop=smart&auto=webp&s=108b3a79e851bec753eee8e85fa97c44006dd6e3', 'width': 320}, {'height': 682, 'url': 'https://preview.redd.it/5h7yt24nzkof1.jpeg?width=640&crop=smart&auto=webp&s=323bdcba6fe089d24bd1803e17e15c3b1ff08a4f', 'width': 640}, {'height': 1023, 'url': 'https://preview.redd.it/5h7yt24nzkof1.jpeg?width=960&crop=smart&auto=webp&s=b0e692883dc70852ddc47d77a29407cc1db8b172', 'width': 960}, {'height': 1151, 'url': 'https://preview.redd.it/5h7yt24nzkof1.jpeg?width=1080&crop=smart&auto=webp&s=c40755d9dfe916deb75920454b162877f547c9fb', 'width': 1080}], 'source': {'height': 1600, 'url': 'https://preview.redd.it/5h7yt24nzkof1.jpeg?auto=webp&s=6a86f652ecb640b4885f6e6dd39ddb83bcd755b9', 'width': 1501}, 'variants': {}}]}
How dangerous is Chinese AI?
0
If you were to use deepseek or qwen and it realized you were coding for any enemy of the CCP do you think it would secretly sabotage the code base? How dangerous is using Chinese AI for people the Chinese consider dangerous?
2025-09-11T18:31:52
https://www.reddit.com/r/LocalLLaMA/comments/1negn07/how_dangerous_is_chinese_ai/
FanFabulous5606
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1negn07
false
null
t3_1negn07
/r/LocalLLaMA/comments/1negn07/how_dangerous_is_chinese_ai/
false
false
self
0
null
Qwen3-Next - a Qwen Collection
23
2025-09-11T18:19:36
https://huggingface.co/collections/Qwen/qwen3-next-68c25fd6838e585db8eeea9d
paf1138
huggingface.co
1970-01-01T00:00:00
0
{}
1negbax
false
null
t3_1negbax
/r/LocalLLaMA/comments/1negbax/qwen3next_a_qwen_collection/
false
false
default
23
null
We'll give GPU time for interesting Open Source Model training projects
9
If you are a research lab wanting to do research on LLMs, or a small startup trying to beat the tech giants with frugal AI models, we want to help. [Kalavai](https://github.com/kalavai-net/kalavai-client) is offering GPU and other resources to interesting projects that want to push the envelope but are struggling to fund computing resources. [Apply here](https://form.typeform.com/to/C1SFzEdK) Feel free to engage with us on our [discord channel](https://discord.gg/YN6ThTJKbM)
2025-09-11T18:16:16
https://www.reddit.com/r/LocalLLaMA/comments/1neg88b/well_give_gpu_time_for_interesting_open_source/
Good-Coconut3907
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1neg88b
false
null
t3_1neg88b
/r/LocalLLaMA/comments/1neg88b/well_give_gpu_time_for_interesting_open_source/
false
false
self
9
{'enabled': False, 'images': [{'id': 'afGj3LDRI8Hk1Gb11Baz1oEzC3m17qyuTsYTbsU2kl4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/afGj3LDRI8Hk1Gb11Baz1oEzC3m17qyuTsYTbsU2kl4.png?width=108&crop=smart&auto=webp&s=9657053c33f9acb3d91d645cd2bc75bcf782b8f3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/afGj3LDRI8Hk1Gb11Baz1oEzC3m17qyuTsYTbsU2kl4.png?width=216&crop=smart&auto=webp&s=b271b6ed7fe846d7382efcf4581004ba888ab11c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/afGj3LDRI8Hk1Gb11Baz1oEzC3m17qyuTsYTbsU2kl4.png?width=320&crop=smart&auto=webp&s=724776d5e8789f1e3048d0222ba4a7cbf95d68e9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/afGj3LDRI8Hk1Gb11Baz1oEzC3m17qyuTsYTbsU2kl4.png?width=640&crop=smart&auto=webp&s=beda6b446e14552c140a2bd972a825e9fd821a6f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/afGj3LDRI8Hk1Gb11Baz1oEzC3m17qyuTsYTbsU2kl4.png?width=960&crop=smart&auto=webp&s=3e3751fefc2eb42f5385af019e8bef9dda6a1cba', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/afGj3LDRI8Hk1Gb11Baz1oEzC3m17qyuTsYTbsU2kl4.png?width=1080&crop=smart&auto=webp&s=779cb8bc3d3162d9941266e2eb220f04caa68921', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/afGj3LDRI8Hk1Gb11Baz1oEzC3m17qyuTsYTbsU2kl4.png?auto=webp&s=f5213dcdb2ee915568e58bd8e73a384e8ecbe26b', 'width': 1200}, 'variants': {}}]}
Alibaba's homegrown chips are now competitive with Nvidia H20
210
2025-09-11T18:05:29
https://www.reuters.com/world/china/alibaba-baidu-begin-using-own-chips-train-ai-models-information-reports-2025-09-11/
tengo_harambe
reuters.com
1970-01-01T00:00:00
0
{}
1nefy3g
false
null
t3_1nefy3g
/r/LocalLLaMA/comments/1nefy3g/alibabas_homegrown_chips_are_now_competitive_with/
false
false
https://external-preview…2e5e1c616515d0f5
210
{'enabled': False, 'images': [{'id': '3GSznXQc2v2aM2IifWcymKhFoVUIu_yfZrGgwEMCPBw', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/3GSznXQc2v2aM2IifWcymKhFoVUIu_yfZrGgwEMCPBw.jpeg?width=108&crop=smart&auto=webp&s=cfb462345e4c83cabf999cf940995fb36a018de5', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/3GSznXQc2v2aM2IifWcymKhFoVUIu_yfZrGgwEMCPBw.jpeg?width=216&crop=smart&auto=webp&s=3990353d9f55b2e07208f09109c25676f0db405c', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/3GSznXQc2v2aM2IifWcymKhFoVUIu_yfZrGgwEMCPBw.jpeg?width=320&crop=smart&auto=webp&s=9745516ebedc47310a3e14aed22f5f46139f7ec9', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/3GSznXQc2v2aM2IifWcymKhFoVUIu_yfZrGgwEMCPBw.jpeg?width=640&crop=smart&auto=webp&s=8a1612f97051237e10ed9d34a74d93f77252d00f', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/3GSznXQc2v2aM2IifWcymKhFoVUIu_yfZrGgwEMCPBw.jpeg?width=960&crop=smart&auto=webp&s=4f8d1da7c3c9c41543541a7a5438e2de44b74ebd', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/3GSznXQc2v2aM2IifWcymKhFoVUIu_yfZrGgwEMCPBw.jpeg?width=1080&crop=smart&auto=webp&s=68aadd8215a45efe0be22b6e993c2188884d55c2', 'width': 1080}], 'source': {'height': 1005, 'url': 'https://external-preview.redd.it/3GSznXQc2v2aM2IifWcymKhFoVUIu_yfZrGgwEMCPBw.jpeg?auto=webp&s=3e4707eb59e149ca43a3fa0909f5d8cd4113ae6e', 'width': 1920}, 'variants': {}}]}
Qwen Next!
30
[https://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Thinking](https://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Thinking) [https://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Instruct](https://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Instruct)
2025-09-11T18:05:17
https://www.reddit.com/gallery/1nefxxb
krzonkalla
reddit.com
1970-01-01T00:00:00
0
{}
1nefxxb
true
null
t3_1nefxxb
/r/LocalLLaMA/comments/1nefxxb/qwen_next/
false
false
https://b.thumbs.redditm…u9ZNabvLUKfo.jpg
30
null
gpt-oss:20b full 131k context bellow 16 Gb vram ?
8
Hi, I am quite surprised to see the full context gpt-oss:20b requiring <16Gb. I am using the latest ollama 0.11.10 in a 3090. This drop of required vram came first when updating ollama from 0.11.06? to the most recent one. The update also boosted the tk/s from ~60 tks to ~110 tks with short context. With the full context it performs at 1000 tks for PP and 40 tks for generation. I havent seen this behaviour with any other model. Do you know about other models that require so little vram at >100k context lenghts ?
2025-09-11T18:04:21
https://www.reddit.com/r/LocalLLaMA/comments/1nefx2l/gptoss20b_full_131k_context_bellow_16_gb_vram/
nexUser78
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nefx2l
false
null
t3_1nefx2l
/r/LocalLLaMA/comments/1nefx2l/gptoss20b_full_131k_context_bellow_16_gb_vram/
false
false
self
8
null
VoltageGPU
1
[removed]
2025-09-11T18:02:04
https://www.reddit.com/r/LocalLLaMA/comments/1nefuw8/voltagegpu/
VoltageGPU
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nefuw8
false
null
t3_1nefuw8
/r/LocalLLaMA/comments/1nefuw8/voltagegpu/
false
false
self
1
null
Qwen/Qwen3-Next-80B-A3B-Instruct
61
2025-09-11T18:00:25
https://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Instruct
Pro-editor-1105
huggingface.co
1970-01-01T00:00:00
0
{}
1neft7s
false
null
t3_1neft7s
/r/LocalLLaMA/comments/1neft7s/qwenqwen3next80ba3binstruct/
false
false
default
61
null
Qwen/Qwen3-Next-80B-A3B-Thinking · Hugging Face
7
2025-09-11T17:58:12
https://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Thinking
Pro-editor-1105
huggingface.co
1970-01-01T00:00:00
0
{}
1nefr1l
false
null
t3_1nefr1l
/r/LocalLLaMA/comments/1nefr1l/qwenqwen3next80ba3bthinking_hugging_face/
false
false
default
7
null
Qwen/Qwen3-Next-80B-A3B-Thinking · Hugging Face
91
2025-09-11T17:55:29
https://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Thinking
Dark_Fire_12
huggingface.co
1970-01-01T00:00:00
0
{}
1nefohm
false
null
t3_1nefohm
/r/LocalLLaMA/comments/1nefohm/qwenqwen3next80ba3bthinking_hugging_face/
false
false
default
91
null
Qwen released Qwen3-Next-80B-A3B — the FUTURE of efficient LLMs is here!
1,023
🚀 Introducing Qwen3-Next-80B-A3B — the FUTURE of efficient LLMs is here! 🔹 80B params, but only 3B activated per token → 10x cheaper training, 10x faster inference than Qwen3-32B.(esp. @ 32K+ context!) 🔹Hybrid Architecture: Gated DeltaNet + Gated Attention → best of speed & recall 🔹 Ultra-sparse MoE: 512 experts, 10 routed + 1 shared 🔹 Multi-Token Prediction → turbo-charged speculative decoding 🔹 Beats Qwen3-32B in perf, rivals Qwen3-235B in reasoning & long-context 🧠 Qwen3-Next-80B-A3B-Instruct approaches our 235B flagship. 🧠 Qwen3-Next-80B-A3B-Thinking outperforms Gemini-2.5-Flash-Thinking. Try it now: chat.qwen.ai Blog: https://qwen.ai/blog?id=4074cca80393150c248e508aa62983f9cb7d27cd&from=research.latest-advancements-list Huggingface: https://huggingface.co/collections/Qwen/qwen3-next-68c25fd6838e585db8eeea9d
2025-09-11T17:53:48
https://www.reddit.com/gallery/1nefmzr
ResearchCrafty1804
reddit.com
1970-01-01T00:00:00
0
{}
1nefmzr
false
null
t3_1nefmzr
/r/LocalLLaMA/comments/1nefmzr/qwen_released_qwen3next80ba3b_the_future_of/
false
false
https://a.thumbs.redditm…O8Y3sUSxIA28.jpg
1,023
null
Local LLM
0
Best open source LLM on hugging face (uncensured) please ?
2025-09-11T17:53:06
https://www.reddit.com/r/LocalLLaMA/comments/1nefmdi/local_llm/
Tobi_inthenight
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nefmdi
false
null
t3_1nefmdi
/r/LocalLLaMA/comments/1nefmdi/local_llm/
false
false
self
0
null
🚀 Introducing Qwen3-Next-80B-A3B — the FUTURE of efficient LLMs is here!
3
https://x.com/Alibaba_Qwen/status/1966197643904000262
2025-09-11T17:51:41
https://www.reddit.com/r/LocalLLaMA/comments/1nefkzw/introducing_qwen3next80ba3b_the_future_of/
sleepy_roger
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nefkzw
false
null
t3_1nefkzw
/r/LocalLLaMA/comments/1nefkzw/introducing_qwen3next80ba3b_the_future_of/
false
false
self
3
null
Per-message MoE routing
1
Why isn't this a thing? It seems like a rather sparse model with lots of expert overlap would lend well to an MoE routing scheme that uses one collection of experts per message. This way you could even justify loading the experts onto GPU for each message since you're only switching once
2025-09-11T17:50:48
https://www.reddit.com/r/LocalLLaMA/comments/1nefk53/permessage_moe_routing/
atineiatte
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nefk53
false
null
t3_1nefk53
/r/LocalLLaMA/comments/1nefk53/permessage_moe_routing/
false
false
self
1
null
Please help me understand if this is a worthwhile problem to pursue.
2
**Problem statement:** I work for a company that has access to a lot of pdf test reports (technical, not medical). They contain the same information and fields but each test lab does it slightly differently (formatting and layout and one test lab even has dual language - English and German). My objective is to reliably extract information from these test reports and add them to a csv or database. The problem is regular regex extraction does not work so well because there are few random characters or extra/missing periods. is there a way to use a local LLM to systematically extract the information? **Constraints:** Must run on an i7 (12th Gen) laptop with 32 GBs of ram and no GPU. I dont need it to be particularly fast but rather just reliable. Can only run on the company laptop and no connection to the internet. I'm not a very good programmer, but understand software to some extent. I've 'vibe coded' some versions that work to some extent but it's not so great. Either it returns the wrong answer or completely misses the field. **Question**: Given that local LLMs need a lot of compute and edge device LLMs may not be up to par. Is this problem statement solvable with current models and technology? What would be a viable approach? I'd appreciate any insight
2025-09-11T17:40:48
https://www.reddit.com/r/LocalLLaMA/comments/1nefah9/please_help_me_understand_if_this_is_a_worthwhile/
reedrick
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nefah9
false
null
t3_1nefah9
/r/LocalLLaMA/comments/1nefah9/please_help_me_understand_if_this_is_a_worthwhile/
false
false
self
2
null
Qwen3-next “technical” blog is up
218
Here: https://qwen.ai/blog?id=4074cca80393150c248e508aa62983f9cb7d27cd&from=research.latest-advancements-list
2025-09-11T17:27:40
https://www.reddit.com/r/LocalLLaMA/comments/1neey2c/qwen3next_technical_blog_is_up/
Alarming-Ad8154
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1neey2c
false
null
t3_1neey2c
/r/LocalLLaMA/comments/1neey2c/qwen3next_technical_blog_is_up/
false
false
self
218
null
Built Reddit like community with AutoBE and AutoView (gpr-4.1-mini and qwen3-235b-a22b)
5
As we promised in our [previous article](https://www.reddit.com/r/LocalLLaMA/comments/1n94n2x/succeeded_to_build_fulllevel_backend_application/), AutoBE has successfully generated more complex backend applications rather than the previous todo application with `qwen3-235b-a22b`. Also, `gpt-4.1-mini` can generate enterprise-level applications without compilation errors. It wasn't easy to optimize AutoBE for `qwen3-235b-a22b`, but whenever the success rate gets higher with that model, it gets us really excited. Generating fully completed backend applications with an open-source AI model and open-source AI chatbot makes us think a lot. Next time (maybe next month?), we'll come back with much more complex use-cases like e-commerce, achieving 100% compilation success rate with the `qwen3-235b-a22b` model. If you want to have the same exciting experience with us, you can freely use both AutoBE and `qwen3-235b-a22b` in our hackathon contest that starts tomorrow. - Github Repository: https://github.com/wrtnlabs/autobe - Hackathon Contest Application: https://autobe.dev/articles/autobe-hackathon-20250912.html - Hackathon User Manual: https://autobe.dev/tutorial/hackathon
2025-09-11T17:16:25
https://v.redd.it/royvm9x7hkof1
jhnam88
/r/LocalLLaMA/comments/1neen71/built_reddit_like_community_with_autobe_and/
1970-01-01T00:00:00
0
{}
1neen71
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/royvm9x7hkof1/DASHPlaylist.mpd?a=1760332597%2COWJiYzg4MjkzNDE5OWQzZmNiODVkYjFlYmZkNjAyNTI2MmI2ZTdlZWExMTZhOTEzMjc2M2QyMTU2MjU2NDEzMw%3D%3D&v=1&f=sd', 'duration': 728, 'fallback_url': 'https://v.redd.it/royvm9x7hkof1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/royvm9x7hkof1/HLSPlaylist.m3u8?a=1760332597%2CYmQyY2ViOGYyZTViM2MzNzdhNmU2MDU4NTU1ZTMxYWE4OGQxNWZmYTY3NmZlNzI2ODY3MmJmNWFmZTk4OTZhOA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/royvm9x7hkof1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1neen71
/r/LocalLLaMA/comments/1neen71/built_reddit_like_community_with_autobe_and/
false
false
default
5
{'enabled': False, 'images': [{'id': 'MnI0ZjE5eDdoa29mMY2ANMuK4FpvnevaCMGXl7PsDUKnDaD2ZalCe1yJ-lhb', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MnI0ZjE5eDdoa29mMY2ANMuK4FpvnevaCMGXl7PsDUKnDaD2ZalCe1yJ-lhb.png?width=108&crop=smart&format=pjpg&auto=webp&s=b00f93be0f10c489b92a4d64dd2aebffd77ec021', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/MnI0ZjE5eDdoa29mMY2ANMuK4FpvnevaCMGXl7PsDUKnDaD2ZalCe1yJ-lhb.png?width=216&crop=smart&format=pjpg&auto=webp&s=2cf6b933c41f7789098b0a53b030861445f9f572', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/MnI0ZjE5eDdoa29mMY2ANMuK4FpvnevaCMGXl7PsDUKnDaD2ZalCe1yJ-lhb.png?width=320&crop=smart&format=pjpg&auto=webp&s=7fd12bd012f25022b682192a115388e9cd0100e3', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/MnI0ZjE5eDdoa29mMY2ANMuK4FpvnevaCMGXl7PsDUKnDaD2ZalCe1yJ-lhb.png?width=640&crop=smart&format=pjpg&auto=webp&s=6a0ac826f89de8f5da6c5ff94cfbfdb53d8d4fca', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/MnI0ZjE5eDdoa29mMY2ANMuK4FpvnevaCMGXl7PsDUKnDaD2ZalCe1yJ-lhb.png?width=960&crop=smart&format=pjpg&auto=webp&s=39f77839131e7c0133c06843e86cfd0eaf0485ed', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/MnI0ZjE5eDdoa29mMY2ANMuK4FpvnevaCMGXl7PsDUKnDaD2ZalCe1yJ-lhb.png?width=1080&crop=smart&format=pjpg&auto=webp&s=6a0387fefc56472b499c70bdcb17ab78c0ad1671', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/MnI0ZjE5eDdoa29mMY2ANMuK4FpvnevaCMGXl7PsDUKnDaD2ZalCe1yJ-lhb.png?format=pjpg&auto=webp&s=cdacfc4a8b94b00aaf01036c756a99e9dacd9edf', 'width': 1920}, 'variants': {}}]}
KV cache f32 - Are there any benefits?
8
The default value for the KV cache in llamacpp is f16. I've noticed that reducing the precision negatively affects the model's ability to remember facts, for example, in conversations or roleplay. Does increasing the precision to f32 have the opposite effect? ​​I recently tested Mistral 3.2 Q8 with a KV cache of f32 and I'm not sure. The model was obviously much slower, and it surprised me interestingly a few times (but whether that was due to f32 or just the random seed—I don't know). I tried to find some tests, but I can't find anything meaningful. Does f32 positively affect the stability/size of the context window?
2025-09-11T17:14:40
https://www.reddit.com/r/LocalLLaMA/comments/1neelit/kv_cache_f32_are_there_any_benefits/
Daniokenon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1neelit
false
null
t3_1neelit
/r/LocalLLaMA/comments/1neelit/kv_cache_f32_are_there_any_benefits/
false
false
self
8
null
Which card should I buy: 3060 12GB or 3070 8GB?
0
In same price I am getting new 3060 and used 3070, which should I get. My main purpose is running local llm but I also do gaming too.
2025-09-11T17:13:49
https://www.reddit.com/r/LocalLLaMA/comments/1neekpx/which_card_should_i_buy_3060_12gb_or_3070_8gb/
ibn_e_khalid
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1neekpx
false
null
t3_1neekpx
/r/LocalLLaMA/comments/1neekpx/which_card_should_i_buy_3060_12gb_or_3070_8gb/
false
false
self
0
null
Open Source K2 Think, 32B, 2000t/s
3
"By delivering over 2,000 tokens per second for Scout – more than 30 times faster than closed models like ChatGPT or Anthropic, Cerebras is helping developers everywhere to move faster, go deeper, and build better than ever before" I tried it and, yep, clocked in at 1900 tokens/s So, this is really quite something? Any caveats?
2025-09-11T17:02:28
https://www.cerebras.ai/k2think
One_5549
cerebras.ai
1970-01-01T00:00:00
0
{}
1nee9t2
false
null
t3_1nee9t2
/r/LocalLLaMA/comments/1nee9t2/open_source_k2_think_32b_2000ts/
false
false
default
3
null
Why don't scientists trust atoms?
0
>!Because they make up everything!!<
2025-09-11T17:01:43
https://www.reddit.com/r/LocalLLaMA/comments/1nee945/why_dont_scientists_trust_atoms/
numinouslymusing
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nee945
false
null
t3_1nee945
/r/LocalLLaMA/comments/1nee945/why_dont_scientists_trust_atoms/
false
false
self
0
null
LLM Latency Leaderboard
1
[removed]
2025-09-11T16:57:24
https://www.reddit.com/r/LocalLLaMA/comments/1nee4pu/llm_latency_leaderboard/
Cupp
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nee4pu
false
null
t3_1nee4pu
/r/LocalLLaMA/comments/1nee4pu/llm_latency_leaderboard/
false
false
self
1
null
LLM Latency Leaderboard
1
[removed]
2025-09-11T16:56:17
[deleted]
1970-01-01T00:00:00
0
{}
1nee3oz
false
null
t3_1nee3oz
/r/LocalLLaMA/comments/1nee3oz/llm_latency_leaderboard/
false
false
default
1
null
Local AI App 2025 Comparison according to chatgpt.
0
Hi LocalLLama. I was playing with chatgpt5 and did a comparison among the best local apps out there right now. I notice in first that is highly biased and inaccurate. Even though is missing information it should be better at getting it from the web. This is also a method to understand how good is chatgpt5 in getting accurate information from the web. It got my attention that it is so inconsiderate with Kobold which in my opinion is feature rich. I had to work it out to point out all the features of HugstonOne and I am not sure about the other apps features. I repeat the information about the other apps maybe be inaccurate and is all according to chatgpt5 pro. It is time to have a contest (as I am open for whatever challenge) so we can establish the winner for 2025 and will be good to do that every year. Below continuation of assessment of chatgpt5 pro. # ★ Rankings (overall strength by category) * **HugstonOne** ★★★★☆ – unmatched on privacy, offline control, context size, coding features; Windows-only + missing gen/audio keep it from 5★. * **LM Studio** ★★★★☆ – polished, multi-platform, great GPU/iGPU; privacy weaker, no tabs/editor. * **Ollama** ★★★★☆ – strong API + ecosystem; privacy weaker, no sessions/tabs, no code tools. * **Open WebUI** ★★★☆☆ – flexible web UI; but backend-dependent, privacy weaker. * **Jan** ★★★☆☆ – clean OSS app, privacy-friendly; fewer pro features, still maturing. * **oobabooga** ★★★★☆ – extremely flexible, many backends; rough edges, privacy weaker. * **KoboldCpp** ★★★☆☆ – lightweight, RP-friendly; narrower scope, fewer pro features. * **AnythingLLM** ★★★☆☆ – strong for RAG/workspaces; heavier stack, less coding focus. * **LocalAI** ★★★☆☆ – API-first freedom; not a desktop app, UX bare. * **PrivateGPT** ★★★☆☆ – simple, private doc-Q\&A; narrow use-case, not general LLM. 📌 **Fair verdict:** * If **privacy + huge context + coding workflow** are #1 → **HugstonOne is top**. * If **ecosystem + multi-platform polish** are #1 → **LM Studio / Ollama** still hold. * No one else right now combines HugstonOne’s **offline guarantees + session/tabs + code preview/editor** in one package.
2025-09-11T16:56:08
https://i.redd.it/3nd59bbfekof1.png
Trilogix
i.redd.it
1970-01-01T00:00:00
0
{}
1nee3jd
false
null
t3_1nee3jd
/r/LocalLLaMA/comments/1nee3jd/local_ai_app_2025_comparison_according_to_chatgpt/
false
false
default
0
{'enabled': True, 'images': [{'id': '3nd59bbfekof1', 'resolutions': [{'height': 88, 'url': 'https://preview.redd.it/3nd59bbfekof1.png?width=108&crop=smart&auto=webp&s=cbf997af7ba71d5bbbdfe30a2bedf74474fa3dcd', 'width': 108}, {'height': 177, 'url': 'https://preview.redd.it/3nd59bbfekof1.png?width=216&crop=smart&auto=webp&s=132b56a0864112a95f6c32759b0ba868e83dd719', 'width': 216}, {'height': 262, 'url': 'https://preview.redd.it/3nd59bbfekof1.png?width=320&crop=smart&auto=webp&s=17909394e178694512a5655df431f83ea902d26f', 'width': 320}, {'height': 525, 'url': 'https://preview.redd.it/3nd59bbfekof1.png?width=640&crop=smart&auto=webp&s=417ca99220863debf58c5db21dd5c114ec57b4d6', 'width': 640}, {'height': 787, 'url': 'https://preview.redd.it/3nd59bbfekof1.png?width=960&crop=smart&auto=webp&s=52b4d84437163a3ccd98046af9b51a55981b0ae7', 'width': 960}], 'source': {'height': 883, 'url': 'https://preview.redd.it/3nd59bbfekof1.png?auto=webp&s=23c51f0626fc369b7396673cac0cf354cdaaaed8', 'width': 1076}, 'variants': {}}]}
LLM Latency Leaderboards
0
Benchmarked every LLM offered from the top providers for some projects I was working on. This was **not run locally (using serverless cloud)** but I thought it was relevant to this subreddit because the **open-source models are way faster than proprietary**, and these results should be applicable locally. Looks like: * **Winner:** groq/allam-2-7b is the fastest available cloud model (\~100ms TTFT) * **Close runner ups:** llama-4-maverick-17b-128e-instruct, glm-4p5-air, kimi-k2-instruct, qwen3-32b * The proprietary models (OpenAI, Anthropic, Google) are embarrassingly slow (>1s) [Full leaderboard here (CC-BY-SA 4.0)](https://llm.orgsoft.org)
2025-09-11T16:54:30
https://www.reddit.com/r/LocalLLaMA/comments/1nee21a/llm_latency_leaderboards/
Cupp
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nee21a
false
null
t3_1nee21a
/r/LocalLLaMA/comments/1nee21a/llm_latency_leaderboards/
false
false
self
0
null
Runtime intelligence in games
5
Any game devs here using AI in games rather than for game development? I’m seeing a lot of AI tools for things like coding and asset generation but not as much exploration of net new game mechanics. AI NPCs being the most obvious but also AI-powered spell systems, world events, generated histories, etc. Wondering what folks here think of the technology and if there are any barriers in your mind for getting games to adopt it. I’ve tried posting in r/gamedev but it’s pretty brutal over there. For me, I’ve been exploring the space for a while and have come to believe small local models are the only viable way for AI to run in-game. Cloud APIs just don’t make sense from an economic and design perspective. And small models are quite capable already if you finetune them on well curated data. I've been finetuning small models for the past few months with decent results. If anyone is interested in trying out the tool I built to finetune models for their game let me know
2025-09-11T16:46:38
https://www.reddit.com/r/LocalLLaMA/comments/1nedulw/runtime_intelligence_in_games/
formicidfighter
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nedulw
false
null
t3_1nedulw
/r/LocalLLaMA/comments/1nedulw/runtime_intelligence_in_games/
false
false
self
5
null
Qwen 3 next
0
2025-09-11T16:42:09
https://i.redd.it/biqw6z1qdkof1.png
Namra_7
i.redd.it
1970-01-01T00:00:00
0
{}
1nedq9a
false
null
t3_1nedq9a
/r/LocalLLaMA/comments/1nedq9a/qwen_3_next/
false
false
default
0
{'enabled': True, 'images': [{'id': 'biqw6z1qdkof1', 'resolutions': [{'height': 90, 'url': 'https://preview.redd.it/biqw6z1qdkof1.png?width=108&crop=smart&auto=webp&s=f23d40a83bc57366e735619ef8b285069a596727', 'width': 108}, {'height': 180, 'url': 'https://preview.redd.it/biqw6z1qdkof1.png?width=216&crop=smart&auto=webp&s=0bd127aea75c74424b2cbd5655d9434bff4f6b10', 'width': 216}, {'height': 266, 'url': 'https://preview.redd.it/biqw6z1qdkof1.png?width=320&crop=smart&auto=webp&s=e56efbd5f0f594170ae253cd9a3128b76b61de5d', 'width': 320}, {'height': 533, 'url': 'https://preview.redd.it/biqw6z1qdkof1.png?width=640&crop=smart&auto=webp&s=e491b419fcbb690077282acd937986f7128779d4', 'width': 640}, {'height': 800, 'url': 'https://preview.redd.it/biqw6z1qdkof1.png?width=960&crop=smart&auto=webp&s=a44f61e5855b3b0fb107e3f1f74825cdb392b2d2', 'width': 960}, {'height': 901, 'url': 'https://preview.redd.it/biqw6z1qdkof1.png?width=1080&crop=smart&auto=webp&s=0280d4cca398839d186cfa744ec3895b0e81789a', 'width': 1080}], 'source': {'height': 901, 'url': 'https://preview.redd.it/biqw6z1qdkof1.png?auto=webp&s=79e7d14d38518db7123d9ebc98a8cfbf17dfa754', 'width': 1080}, 'variants': {}}]}
We just released the world's first 70B intermediate checkpoints. Yes, Apache 2.0. Yes, we're still broke.
1,394
Remember when y'all roasted us about the license? We listened. Just dropped what we think is a world first: **70B model intermediate checkpoints**. Not just the final model - the entire training journey. Previous releases (SmolLM, OLMo) maxed out at <14B. Everything is Apache 2.0 now (no gated access): * 70B, 7B, 1.9B, 0.5B models + all their intermediate checkpoints and base models * First Korean 70B ever (but secretly optimized for English lol) * Actually open-source, not just open-weights BS [https://huggingface.co/trillionlabs/Tri-70B-Intermediate-Checkpoints](https://huggingface.co/trillionlabs/Tri-70B-Intermediate-Checkpoints) We're a 1-year-old startup with pocket change competing against companies with infinite money glitch. Not the best model, but probably the most transparent 70B training ever shared.
2025-09-11T16:42:00
https://www.reddit.com/r/LocalLLaMA/comments/1nedq3i/we_just_released_the_worlds_first_70b/
jshin49
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nedq3i
false
null
t3_1nedq3i
/r/LocalLLaMA/comments/1nedq3i/we_just_released_the_worlds_first_70b/
false
false
self
1,394
{'enabled': False, 'images': [{'id': 'p45gh6qfuoD2e3FvbP4rn2pzQQa7PljWng8yp1-80bw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/p45gh6qfuoD2e3FvbP4rn2pzQQa7PljWng8yp1-80bw.png?width=108&crop=smart&auto=webp&s=f82e12a7100f5525ff92abb48a71ceca82cc9fa4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/p45gh6qfuoD2e3FvbP4rn2pzQQa7PljWng8yp1-80bw.png?width=216&crop=smart&auto=webp&s=f94c6d35e30530f6958ada8e03b0590b71c8bc7b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/p45gh6qfuoD2e3FvbP4rn2pzQQa7PljWng8yp1-80bw.png?width=320&crop=smart&auto=webp&s=f7e3092f9b569bf7f43e736d158bf2d3569efdcf', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/p45gh6qfuoD2e3FvbP4rn2pzQQa7PljWng8yp1-80bw.png?width=640&crop=smart&auto=webp&s=e302d60a37bcd5015e5ac5b9f7f085c75865a940', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/p45gh6qfuoD2e3FvbP4rn2pzQQa7PljWng8yp1-80bw.png?width=960&crop=smart&auto=webp&s=97eb2cdabfc82b025a440470cf666cd781cb1de6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/p45gh6qfuoD2e3FvbP4rn2pzQQa7PljWng8yp1-80bw.png?width=1080&crop=smart&auto=webp&s=d420956279529fdc02f53765b7f757b0211fe62d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/p45gh6qfuoD2e3FvbP4rn2pzQQa7PljWng8yp1-80bw.png?auto=webp&s=43aa36d109941740d1766f2c49e22815660c965e', 'width': 1200}, 'variants': {}}]}
We just released the world's first 70B intermediate checkpoints. Yes, Apache 2.0. Yes, we're still broke.
1
Remember when y'all roasted us about the license? We listened. Just dropped what we think is a world first: **70B model intermediate checkpoints**. Not just the final model - the entire training journey. Previous releases (SmolLM-3, OLMo-2) maxed out at 14B. Everything's Apache 2.0 now: * 70B, 7B, 1.9B, 0.5B models + all their intermediate checkpoints * First Korean 70B ever (but it's secretly optimized for English lol) * Actually open-source, not just open-weights BS Here's them checkpoints: [https://huggingface.co/trillionlabs/Tri-70B-Intermediate-Checkpoints](https://huggingface.co/trillionlabs/Tri-70B-Intermediate-Checkpoints) We're a 1-year-old startup with pocket change competing against companies with infinite money glitch. Not the best model, but probably the most transparent 70B training ever shared.
2025-09-11T16:30:57
https://www.reddit.com/r/LocalLLaMA/comments/1nedfmf/we_just_released_the_worlds_first_70b/
jshin49
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nedfmf
false
null
t3_1nedfmf
/r/LocalLLaMA/comments/1nedfmf/we_just_released_the_worlds_first_70b/
false
false
self
1
{'enabled': False, 'images': [{'id': 'p45gh6qfuoD2e3FvbP4rn2pzQQa7PljWng8yp1-80bw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/p45gh6qfuoD2e3FvbP4rn2pzQQa7PljWng8yp1-80bw.png?width=108&crop=smart&auto=webp&s=f82e12a7100f5525ff92abb48a71ceca82cc9fa4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/p45gh6qfuoD2e3FvbP4rn2pzQQa7PljWng8yp1-80bw.png?width=216&crop=smart&auto=webp&s=f94c6d35e30530f6958ada8e03b0590b71c8bc7b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/p45gh6qfuoD2e3FvbP4rn2pzQQa7PljWng8yp1-80bw.png?width=320&crop=smart&auto=webp&s=f7e3092f9b569bf7f43e736d158bf2d3569efdcf', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/p45gh6qfuoD2e3FvbP4rn2pzQQa7PljWng8yp1-80bw.png?width=640&crop=smart&auto=webp&s=e302d60a37bcd5015e5ac5b9f7f085c75865a940', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/p45gh6qfuoD2e3FvbP4rn2pzQQa7PljWng8yp1-80bw.png?width=960&crop=smart&auto=webp&s=97eb2cdabfc82b025a440470cf666cd781cb1de6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/p45gh6qfuoD2e3FvbP4rn2pzQQa7PljWng8yp1-80bw.png?width=1080&crop=smart&auto=webp&s=d420956279529fdc02f53765b7f757b0211fe62d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/p45gh6qfuoD2e3FvbP4rn2pzQQa7PljWng8yp1-80bw.png?auto=webp&s=43aa36d109941740d1766f2c49e22815660c965e', 'width': 1200}, 'variants': {}}]}
Building RAG systems at enterprise scale (20K+ docs): lessons from 10+ enterprise implementations
314
Been building RAG systems for mid-size enterprise companies in the regulated space (100-1000 employees) for the past year and to be honest, this stuff is way harder than any tutorial makes it seem. Worked with around 10+ clients now - pharma companies, banks, law firms, consulting shops. Thought I'd share what actually matters vs all the basic info you read online. Quick context: most of these companies had 10K-50K+ documents sitting in SharePoint hell or document management systems from 2005. Not clean datasets, not curated knowledge bases - just decades of business documents that somehow need to become searchable. **Document quality detection: the thing nobody talks about** This was honestly the biggest revelation for me. Most tutorials assume your PDFs are perfect. Reality check: enterprise documents are absolute garbage. I had one pharma client with research papers from 1995 that were scanned copies of typewritten pages. OCR barely worked. Mixed in with modern clinical trial reports that are 500+ pages with embedded tables and charts. Try applying the same chunking strategy to both and watch your system return complete nonsense. Spent weeks debugging why certain documents returned terrible results while others worked fine. Finally realized I needed to score document quality before processing: * Clean PDFs (text extraction works perfectly): full hierarchical processing * Decent docs (some OCR artifacts): basic chunking with cleanup * Garbage docs (scanned handwritten notes): simple fixed chunks + manual review flags Built a simple scoring system looking at text extraction quality, OCR artifacts, formatting consistency. Routes documents to different processing pipelines based on score. This single change fixed more retrieval issues than any embedding model upgrade. **Why fixed-size chunking is mostly wrong** Every tutorial: "just chunk everything into 512 tokens with overlap!" Reality: documents have structure. A research paper's methodology section is different from its conclusion. Financial reports have executive summaries vs detailed tables. When you ignore structure, you get chunks that cut off mid-sentence or combine unrelated concepts. Had to build hierarchical chunking that preserves document structure: * Document level (title, authors, date, type) * Section level (Abstract, Methods, Results) * Paragraph level (200-400 tokens) * Sentence level for precision queries The key insight: query complexity should determine retrieval level. Broad questions stay at paragraph level. Precise stuff like "what was the exact dosage in Table 3?" needs sentence-level precision. I use simple keyword detection - words like "exact", "specific", "table" trigger precision mode. If confidence is low, system automatically drills down to more precise chunks. **Metadata architecture matters more than your embedding model** This is where I spent 40% of my development time and it had the highest ROI of anything I built. Most people treat metadata as an afterthought. But enterprise queries are crazy contextual. A pharma researcher asking about "pediatric studies" needs completely different documents than someone asking about "adult populations." Built domain-specific metadata schemas: **For pharma docs:** * Document type (research paper, regulatory doc, clinical trial) * Drug classifications * Patient demographics (pediatric, adult, geriatric) * Regulatory categories (FDA, EMA) * Therapeutic areas (cardiology, oncology) **For financial docs:** * Time periods (Q1 2023, FY 2022) * Financial metrics (revenue, EBITDA) * Business segments * Geographic regions Avoid using LLMs for metadata extraction - they're inconsistent as hell. Simple keyword matching works way better. Query contains "FDA"? Filter for regulatory\_category: "FDA". Mentions "pediatric"? Apply patient population filters. Start with 100-200 core terms per domain, expand based on queries that don't match well. Domain experts are usually happy to help build these lists. **When semantic search fails (spoiler: a lot)** Pure semantic search fails way more than people admit. In specialized domains like pharma and legal, I see 15-20% failure rates, not the 5% everyone assumes. **Main failure modes that drove me crazy:** **Acronym confusion:** "CAR" means "Chimeric Antigen Receptor" in oncology but "Computer Aided Radiology" in imaging papers. Same embedding, completely different meanings. This was a constant headache. **Precise technical queries:** Someone asks "What was the exact dosage in Table 3?" Semantic search finds conceptually similar content but misses the specific table reference. **Cross-reference chains:** Documents reference other documents constantly. Drug A study references Drug B interaction data. Semantic search misses these relationship networks completely. **Solution:** Built hybrid approaches. Graph layer tracks document relationships during processing. After semantic search, system checks if retrieved docs have related documents with better answers. For acronyms, I do context-aware expansion using domain-specific acronym databases. For precise queries, keyword triggers switch to rule-based retrieval for specific data points. # Why I went with open source models (Qwen specifically) Most people assume GPT-4o or o3-mini are always better. But enterprise clients have weird constraints: * **Cost:** API costs explode with 50K+ documents and thousands of daily queries * **Data sovereignty:** Pharma and finance can't send sensitive data to external APIs * **Domain terminology:** General models hallucinate on specialized terms they weren't trained on Qwen QWQ-32B ended up working surprisingly well after domain-specific fine-tuning: * 85% cheaper than GPT-4o for high-volume processing * Everything stays on client infrastructure * Could fine-tune on medical/financial terminology * Consistent response times without API rate limits Fine-tuning approach was straightforward - supervised training with domain Q&A pairs. Created datasets like "What are contraindications for Drug X?" paired with actual FDA guideline answers. Basic supervised fine-tuning worked better than complex stuff like RAFT. Key was having clean training data. **Table processing: the hidden nightmare** Enterprise docs are full of complex tables - financial models, clinical trial data, compliance matrices. Standard RAG either ignores tables or extracts them as unstructured text, losing all the relationships. Tables contain some of the most critical information. Financial analysts need exact numbers from specific quarters. Researchers need dosage info from clinical tables. If you can't handle tabular data, you're missing half the value. **My approach:** * Treat tables as separate entities with their own processing pipeline * Use heuristics for table detection (spacing patterns, grid structures) * For simple tables: convert to CSV. For complex tables: preserve hierarchical relationships in metadata * Dual embedding strategy: embed both structured data AND semantic description For the bank project, financial tables were everywhere. Had to track relationships between summary tables and detailed breakdowns too. **Production infrastructure reality check** Tutorials assume unlimited resources and perfect uptime. Production means concurrent users, GPU memory management, consistent response times, uptime guarantees. Most enterprise clients already had GPU infrastructure sitting around - unused compute or other data science workloads. Made on-premise deployment easier than expected. Typically deploy 2-3 models: * Main generation model (Qwen 32B) for complex queries * Lightweight model for metadata extraction * Specialized embedding model Used quantized versions when possible. Qwen QWQ-32B quantized to 4-bit only needed 24GB VRAM but maintained quality. Could run on single RTX 4090, though A100s better for concurrent users. Biggest challenge isn't model quality - it's preventing resource contention when multiple users hit the system simultaneously. Use semaphores to limit concurrent model calls and proper queue management. # Key lessons that actually matter **1. Document quality detection first:** You cannot process all enterprise docs the same way. Build quality assessment before anything else. **2. Metadata > embeddings:** Poor metadata means poor retrieval regardless of how good your vectors are. Spend the time on domain-specific schemas. **3. Hybrid retrieval is mandatory:** Pure semantic search fails too often in specialized domains. Need rule-based fallbacks and document relationship mapping. **4. Tables are critical:** If you can't handle tabular data properly, you're missing huge chunks of enterprise value. **5. Infrastructure determines success:** Clients care more about reliability than fancy features. Resource management and uptime matter more than model sophistication. **The real talk** Enterprise RAG is way more engineering than ML. Most failures aren't from bad models - they're from underestimating the document processing challenges, metadata complexity, and production infrastructure needs. The demand is honestly crazy right now. Every company with substantial document repositories needs these systems, but most have no idea how complex it gets with real-world documents. Anyway, this stuff is way harder than tutorials make it seem. The edge cases with enterprise documents will make you want to throw your laptop out the window. But when it works, the ROI is pretty impressive - seen teams cut document search from hours to minutes. Posted this in LLMDevs a few days ago and many people found the technical breakdown helpful, so wanted to share here too for the broader AI community! Happy to answer questions if anyone's hitting similar walls with their implementations.
2025-09-11T16:16:40
https://www.reddit.com/r/LocalLLaMA/comments/1ned2ai/building_rag_systems_at_enterprise_scale_20k_docs/
Low_Acanthisitta7686
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ned2ai
false
null
t3_1ned2ai
/r/LocalLLaMA/comments/1ned2ai/building_rag_systems_at_enterprise_scale_20k_docs/
false
false
self
314
null
Looking for individuals who want to work on an AI project
8
I'm in to local llms (go figure) at the moment. Specifically building a system that uses memory (vector db and knowledge graphs) and multi modal inputs to be as human like as possible. Current stack I mess around with is \- Qwen 7b (LLM) \- Moondream (VLM) \- Whisper (STT)/Silero(VAD) \-FAISS (db search) \-MiDAS (vision/depth) \-YOLO (object detection) \-SAM2 (object segmentation) \-kokoro (TTS) \-MxBai (text embed) \-BeIT (img embed) \-vggish (audio embed) Putting this all here because I am looking for a few people interested in building something that basically brings everything together to have a pretty bad ass AI at home. I realize there are frameworks and other tools that already exist that can be used to do this. So I am looking for a group that is also about the fun of just doing it. I was thinking something along the lines of following a brain architecture one person working on audio another on video for example. agree on some 'central nervous' system concept for all of the parts to communicate and then spin up a git repo, do some video meetings once a week and have a go. obv open to other ideas, that's why I am making this post - but the project goal is what's stated above.
2025-09-11T16:02:50
https://www.reddit.com/r/LocalLLaMA/comments/1necpnm/looking_for_individuals_who_want_to_work_on_an_ai/
Strange_Test7665
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1necpnm
false
null
t3_1necpnm
/r/LocalLLaMA/comments/1necpnm/looking_for_individuals_who_want_to_work_on_an_ai/
false
false
self
8
null
Qwen
678
2025-09-11T15:06:37
https://i.redd.it/p5fbgn0owjof1.png
Namra_7
i.redd.it
1970-01-01T00:00:00
0
{}
1neba8b
false
null
t3_1neba8b
/r/LocalLLaMA/comments/1neba8b/qwen/
false
false
default
678
{'enabled': True, 'images': [{'id': 'p5fbgn0owjof1', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/p5fbgn0owjof1.png?width=108&crop=smart&auto=webp&s=17a4214797b249615765cd4d48280c77898dbe57', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/p5fbgn0owjof1.png?width=216&crop=smart&auto=webp&s=1de180175226dd7667455fdfafaa2ce6e7c1d4bf', 'width': 216}, {'height': 427, 'url': 'https://preview.redd.it/p5fbgn0owjof1.png?width=320&crop=smart&auto=webp&s=fdc6fb6c68dd7256067b072e03741245f45c28de', 'width': 320}, {'height': 855, 'url': 'https://preview.redd.it/p5fbgn0owjof1.png?width=640&crop=smart&auto=webp&s=94561db32b1fca11c0250280863739d22d76e841', 'width': 640}, {'height': 1283, 'url': 'https://preview.redd.it/p5fbgn0owjof1.png?width=960&crop=smart&auto=webp&s=60af5f072e824cf91fe01aca4fad6f6080657b45', 'width': 960}, {'height': 1444, 'url': 'https://preview.redd.it/p5fbgn0owjof1.png?width=1080&crop=smart&auto=webp&s=d8dcff569fcb8b9202fb271fa4707da6a0a1c5db', 'width': 1080}], 'source': {'height': 1444, 'url': 'https://preview.redd.it/p5fbgn0owjof1.png?auto=webp&s=7453644f1ce88dc49d9691e18a5000f9ce6ec2c5', 'width': 1080}, 'variants': {}}]}
New VS Code release allows extensions to contribute language models to Chat
49
Extensions can now contribute language models that are used in the Chat view. This is the first step (we have a bunch more work to do). But if you have any feedback let me know (vscode pm here). Docs [https://code.visualstudio.com/api/extension-guides/ai/language-model-chat-provider](https://code.visualstudio.com/api/extension-guides/ai/language-model-chat-provider)
2025-09-11T14:58:58
https://code.visualstudio.com/updates/v1_104
isidor_n
code.visualstudio.com
1970-01-01T00:00:00
0
{}
1neb35p
false
null
t3_1neb35p
/r/LocalLLaMA/comments/1neb35p/new_vs_code_release_allows_extensions_to/
false
false
default
49
{'enabled': False, 'images': [{'id': 'ub1r8snDnE0gcJQ1X4KunDUU8G23q7XAyTmHzNrnIvQ', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/ub1r8snDnE0gcJQ1X4KunDUU8G23q7XAyTmHzNrnIvQ.png?width=108&crop=smart&auto=webp&s=165af749f42228b9f3718ee886eb03f79d7d47d6', 'width': 108}, {'height': 107, 'url': 'https://external-preview.redd.it/ub1r8snDnE0gcJQ1X4KunDUU8G23q7XAyTmHzNrnIvQ.png?width=216&crop=smart&auto=webp&s=3d37b3c8035d180b850a033dc986ead9c5c25b2f', 'width': 216}, {'height': 159, 'url': 'https://external-preview.redd.it/ub1r8snDnE0gcJQ1X4KunDUU8G23q7XAyTmHzNrnIvQ.png?width=320&crop=smart&auto=webp&s=81351906c558e9d0869e647607ebf4aca737756b', 'width': 320}, {'height': 319, 'url': 'https://external-preview.redd.it/ub1r8snDnE0gcJQ1X4KunDUU8G23q7XAyTmHzNrnIvQ.png?width=640&crop=smart&auto=webp&s=b8124d3dd9bed2000f7513f33ea18cc6440b75db', 'width': 640}, {'height': 479, 'url': 'https://external-preview.redd.it/ub1r8snDnE0gcJQ1X4KunDUU8G23q7XAyTmHzNrnIvQ.png?width=960&crop=smart&auto=webp&s=e0068a4265be94021afcffe0588df45ffc34e4a4', 'width': 960}], 'source': {'height': 534, 'url': 'https://external-preview.redd.it/ub1r8snDnE0gcJQ1X4KunDUU8G23q7XAyTmHzNrnIvQ.png?auto=webp&s=98815f39c29829d0a8f12995de9c0294265a8e1d', 'width': 1069}, 'variants': {}}]}
In need of real life community in the space
9
I went down the AI rabbit hole not too long ago and I must say it’s been quite exciting and challenging. I don’t have programming experience, so a lot of things I have explored have been more from a vibe coding standpoint, and I know some of my previous posts have received some pokes due to that. Everyone brings a different lens and I’m not trying to reduce my inability to code. However, my biggest challenge is that in my circle of friends, I’m the most “advanced” and it sucks cos I know I don’t know a lot. I am using this post as a smoke signal to search for a mentor, peer or community that can help in this quest for knowledge and further understanding of this space. This sub is helpful, but it’s not the same as bouncing thoughts, ideas and all in real time. When I started out, I bought the domain - https://www.mindmeetsmodel.com with the goal of documenting my journey and being able to look back and point at what I was able to accomplish. The site was vibe coded by the way. I hope someone who is willing to help a stranger stumbled on this post.
2025-09-11T14:16:51
https://www.reddit.com/r/LocalLLaMA/comments/1ne9zn5/in_need_of_real_life_community_in_the_space/
SolidRemote8316
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ne9zn5
false
null
t3_1ne9zn5
/r/LocalLLaMA/comments/1ne9zn5/in_need_of_real_life_community_in_the_space/
false
false
self
9
null
Searching for local models to translate asian novels
3
Hello\~ Im currently trying to find LLMs that may be able to assist me in translating novels offline. I have tested out lots of models tbh and so far I have gotten the biggest achivement with "nous-hermes-2-yi-34b" as well as "yi-34b-chat". But it still feels a bit unpolished, especially the grammar which is why Im not entirely sure if maybe my parameters are maybe not ideally chosen or there may be better models in order to translate novels. My setup is the following: Ryzen 7 7800x3D RX 7900 XTX 128GB DDR5 RAM Im thinking of getting myself an nvidia graphics card when the next sale hits since I heard that it may work faster than an AMD GPU. [My Settings for LM Studio. Currently Im testing and playing around the temperature](https://preview.redd.it/ckqj802ajjof1.png?width=328&format=png&auto=webp&s=952cfddcea15be5375c959292daeca297fa35929) Would love to get advice in order to achieve my dream to have unlimited novels to read!
2025-09-11T13:51:57
https://www.reddit.com/r/LocalLLaMA/comments/1ne9dxo/searching_for_local_models_to_translate_asian/
moeKyo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ne9dxo
false
null
t3_1ne9dxo
/r/LocalLLaMA/comments/1ne9dxo/searching_for_local_models_to_translate_asian/
false
false
https://b.thumbs.redditm…0LrrrTk-5GbE.jpg
3
null
This is what a 48gb 4090 looks like
2
The heatsink's are solid bricks that would hurt your toes if you dropped it, weighing 2lb 9oz alone. LLM Performance metrics and comparisons (against A6000, A100, stock 4090 and 3090ti) to come.
2025-09-11T13:37:05
https://www.reddit.com/gallery/1ne91bv
BGAbandit
reddit.com
1970-01-01T00:00:00
0
{}
1ne91bv
false
null
t3_1ne91bv
/r/LocalLLaMA/comments/1ne91bv/this_is_what_a_48gb_4090_looks_like/
false
false
https://b.thumbs.redditm…UZoxUAvQqs1g.jpg
2
null
Mac Mini M4 vs. Mac Studio M1 Max
0
Hey everyone, I'm looking for some advice on my first local LLM setup. I've narrowed it down to two options, both available for a little under €1000, and I'm torn. I'm leaning towards these Mac models over an NVIDIA GPU setup primarily for low power consumption, as the machine will be running 24/7 as a media and LLM server. Here are the two options I'm weighing: 1. Brand New Mac mini with M4 chip: 32GB RAM / 256GB SSD 2. Used Mac Studio with M1 Max chip: 32GB RAM / 512GB SSD (in perfect condition) The main consideration for me is the trade-off between the newer M4 architecture's efficiency and the M1 Max's more powerful GPU/SoC. My use case is primarily for text generation, integrating with Home Assistant, running some Abliterated llm and code. I know 64GB of RAM would be ideal, but it adds 50-100% to the price, which is a dealbreaker. I'm hoping 32GB is more than enough for what I need, but please correct me if I'm wrong! Any thoughts or experiences would be hugely appreciated. I'm especially interested in which machine would be the better long-term investment for this specific workload, balancing performance with energy efficiency. Thanks in advance!
2025-09-11T13:08:27
https://www.reddit.com/r/LocalLLaMA/comments/1ne8dfq/mac_mini_m4_vs_mac_studio_m1_max/
SensiSharp
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ne8dfq
false
null
t3_1ne8dfq
/r/LocalLLaMA/comments/1ne8dfq/mac_mini_m4_vs_mac_studio_m1_max/
false
false
self
0
null
CPU-only inference with 4 vs 8 cores
6
Hi. I'm using a remote server for small-model inference (12B or so). Assume the server has 8 cores and 8GB RAM. This gives me an inference speed of more than 10 tokens per second (I don't know how to measure time to first toke, so this is overall). Now, I have a chance to "update" that server to another one with double the RAM, i.e. 16GB, _but half the cores_: 4 cores. Should I take that, as it allows running bigger models? Or the fewer cores will deteriorate my inference speed? Assume my target model architecture is Gemma 3, either 27b Q3, or 12b Q4. Thanks
2025-09-11T13:04:57
https://www.reddit.com/r/LocalLLaMA/comments/1ne8aic/cpuonly_inference_with_4_vs_8_cores/
ihatebeinganonymous
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ne8aic
false
null
t3_1ne8aic
/r/LocalLLaMA/comments/1ne8aic/cpuonly_inference_with_4_vs_8_cores/
false
false
self
6
null
Crowdsourced predictions before the end of round 7
0
There are simulations and then there is the wisdom of the masses. Please give your top 3 picks of players that will finish in the top 2 (in a comment of course). If there are enough comments then I will make a tally and publish the results. You can also add why, but please for easy data collection mention the names first.
2025-09-11T13:01:13
https://www.reddit.com/r/LocalLLaMA/comments/1ne87fh/crowdsourced_predictions_before_the_end_of_round_7/
pier4r
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ne87fh
false
null
t3_1ne87fh
/r/LocalLLaMA/comments/1ne87fh/crowdsourced_predictions_before_the_end_of_round_7/
false
false
self
0
null
Qwen3-Next-80B-A3B-Thinking soon
494
2025-09-11T12:49:27
https://i.redd.it/bo8hhc558jof1.png
jacek2023
i.redd.it
1970-01-01T00:00:00
0
{}
1ne7y69
false
null
t3_1ne7y69
/r/LocalLLaMA/comments/1ne7y69/qwen3next80ba3bthinking_soon/
false
false
default
494
{'enabled': True, 'images': [{'id': 'bo8hhc558jof1', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/bo8hhc558jof1.png?width=108&crop=smart&auto=webp&s=e8190af63e7c2014d5389b6d8cea3e884f7eb0ec', 'width': 108}, {'height': 113, 'url': 'https://preview.redd.it/bo8hhc558jof1.png?width=216&crop=smart&auto=webp&s=2b102141ca5a38ca745c702a6e18e1220190422d', 'width': 216}, {'height': 167, 'url': 'https://preview.redd.it/bo8hhc558jof1.png?width=320&crop=smart&auto=webp&s=8b4d6b8dc3d8d70e343016e87b4a278537555304', 'width': 320}, {'height': 335, 'url': 'https://preview.redd.it/bo8hhc558jof1.png?width=640&crop=smart&auto=webp&s=547def56180e3c7f03468272c1979619111e065e', 'width': 640}, {'height': 502, 'url': 'https://preview.redd.it/bo8hhc558jof1.png?width=960&crop=smart&auto=webp&s=993f1564d1537eaba8da836f019c2109325e7c3b', 'width': 960}, {'height': 565, 'url': 'https://preview.redd.it/bo8hhc558jof1.png?width=1080&crop=smart&auto=webp&s=94dfc202332bb1cf64c9c38bb531ae10f49b7e04', 'width': 1080}], 'source': {'height': 620, 'url': 'https://preview.redd.it/bo8hhc558jof1.png?auto=webp&s=02e7879423d5a7dc2ad3f75845a7853c24830503', 'width': 1184}, 'variants': {}}]}
Collab for a project
1
I’m working on building an **LLM-based summarization agent for large code repositories**, exploring approaches such as hierarchical summarization, representing the repository as a tree structure, and using RAG (Retrieval-Augmented Generation). The idea is inspired by the CODERAG paper, the Lingma Agent paper, and prior work on hierarchical summarization. If you’re interested in collaborating on this, feel free to reach out or DM me
2025-09-11T12:44:55
https://www.reddit.com/r/LocalLLaMA/comments/1ne7ukn/collab_for_a_project/
Intrepid-Match-6296
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ne7ukn
false
null
t3_1ne7ukn
/r/LocalLLaMA/comments/1ne7ukn/collab_for_a_project/
false
false
self
1
null
i want to train a tts model on indian languagues mainly (hinglish and tanglish)
4
which are the open source model available for this task ? please guide ?
2025-09-11T12:43:38
https://www.reddit.com/r/LocalLLaMA/comments/1ne7tg9/i_want_to_train_a_tts_model_on_indian_languagues/
atmanirbhar21
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ne7tg9
false
null
t3_1ne7tg9
/r/LocalLLaMA/comments/1ne7tg9/i_want_to_train_a_tts_model_on_indian_languagues/
false
false
self
4
null
Create 3D graphic images with a real person's face?
0
Hi, can someone suggest how best to do it. I have seen that it is very difficult to get the cartoon character to match a real person's face. Is there a way this is achievable? Thanks.
2025-09-11T12:26:35
https://www.reddit.com/r/LocalLLaMA/comments/1ne7g4s/create_3d_graphic_images_with_a_real_persons_face/
Available-Violinist4
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ne7g4s
false
null
t3_1ne7g4s
/r/LocalLLaMA/comments/1ne7g4s/create_3d_graphic_images_with_a_real_persons_face/
false
false
self
0
null
Qwen Code CLI affected by the debug-js compromise
35
On 2025-09-08 the maintainer of some popular JS libraries was compromised, and new versions of some popular libraries were released with some crypto stealing code. qwen code cli was one of the programs that was updated since then, and windows defender will detect Malgent!MSR trojan in some JS libraries when you start qwen. The payload was for the browser environment of javascript, and I don't know if there is any impact if you run the compromised code in the node.js context. Still, I hope this gets cleaned up soon.
2025-09-11T12:25:10
https://www.reddit.com/r/LocalLLaMA/comments/1ne7f0c/qwen_code_cli_affected_by_the_debugjs_compromise/
mestar12345
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ne7f0c
false
null
t3_1ne7f0c
/r/LocalLLaMA/comments/1ne7f0c/qwen_code_cli_affected_by_the_debugjs_compromise/
false
false
self
35
null
Tricks from OpenAI gpt-oss YOU 🫵 can use with transformers
11
The Hugging Face transformers team seemingly wrote a blogpost on the recent upgrades of transformers, with the intention that the transformers code can be used as a reference for more efficient frameworks like llama.cpp and vLLM. Worth a read I think, e.g. I didn't know that you could load models the GPT OSS models with Flash Attention 3 already in transformers.
2025-09-11T12:18:20
https://huggingface.co/blog/faster-transformers
-Cubie-
huggingface.co
1970-01-01T00:00:00
0
{}
1ne79vy
false
null
t3_1ne79vy
/r/LocalLLaMA/comments/1ne79vy/tricks_from_openai_gptoss_you_can_use_with/
false
false
https://external-preview…14bc1466180c4316
11
{'enabled': False, 'images': [{'id': 'zEY2MJK9CpvvP0JFHFtDvlPSyTxa52rusnRot1qbhGg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zEY2MJK9CpvvP0JFHFtDvlPSyTxa52rusnRot1qbhGg.png?width=108&crop=smart&auto=webp&s=e841a1df57d84df97fc981f6871fa0a8d16ee3b0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zEY2MJK9CpvvP0JFHFtDvlPSyTxa52rusnRot1qbhGg.png?width=216&crop=smart&auto=webp&s=e8f992dbf6e24409bb7938562fe6c3a4ef7acd8b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zEY2MJK9CpvvP0JFHFtDvlPSyTxa52rusnRot1qbhGg.png?width=320&crop=smart&auto=webp&s=a2768b422267be670e5489ff296975f07c4c2d09', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zEY2MJK9CpvvP0JFHFtDvlPSyTxa52rusnRot1qbhGg.png?width=640&crop=smart&auto=webp&s=d61bd0410390e66f70a0fd8308c46836b36e92ac', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zEY2MJK9CpvvP0JFHFtDvlPSyTxa52rusnRot1qbhGg.png?width=960&crop=smart&auto=webp&s=eaf43fdb220904b5d89a3189d275c311a9a78f59', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zEY2MJK9CpvvP0JFHFtDvlPSyTxa52rusnRot1qbhGg.png?width=1080&crop=smart&auto=webp&s=debf9151aca109d2a796a5044446329714a7dc9e', 'width': 1080}], 'source': {'height': 650, 'url': 'https://external-preview.redd.it/zEY2MJK9CpvvP0JFHFtDvlPSyTxa52rusnRot1qbhGg.png?auto=webp&s=3933fb23666a517d51c747743338be9e9146788f', 'width': 1300}, 'variants': {}}]}
Viability of dual GPU RTX 5090 and RTX pro 6000 Max Q
5
Current build: Motherboard: ProArt x870e Creator WIFI PSU: Seasonic Titanium 1300W GPU: Rog Astral 5090 RAM: 192GB DDR5 6000MTS Purpose: AI video generation and running LLMs Current max wattage: 780W Idle: 100W Thinking of upgrading to dual GPU by purchasing a pro 6000 maxQ (300W) placing 5090 below and 6000 above. Both blackwell architectures, but becomes PCIe x8/x8. I would rather go for this path than change to a workstation which would be more costly, if possible. Is this build viable? What are the problems that I might encounter here? OR another option: wait for 5080 Super 24GB but combined VRAM would only be 56GB compared to 128GB. Comments and suggestions appreciated.
2025-09-11T12:11:26
https://www.reddit.com/r/LocalLLaMA/comments/1ne74p4/viability_of_dual_gpu_rtx_5090_and_rtx_pro_6000/
Dry_Mortgage_4646
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ne74p4
false
null
t3_1ne74p4
/r/LocalLLaMA/comments/1ne74p4/viability_of_dual_gpu_rtx_5090_and_rtx_pro_6000/
false
false
self
5
null
GPT-OSS:120B Benchmark on MacStudio M3 Ultra 512GB
0
When life permits, I've been trying to provide benchmarks for running local (private) LLMs on a Mac Studio M3 Ultra. I've also been looking for ways to make them a little more fun without being intrusively so. The benchmark isn’t scientific; there are plenty of those. I wanted something that would let me see how it performs at specific lengths.
2025-09-11T12:05:22
https://www.youtube.com/watch?v=HsKqIB93YaY
SlingingBits
youtube.com
1970-01-01T00:00:00
0
{}
1ne707j
false
{'oembed': {'author_name': 'Slinging Bits', 'author_url': 'https://www.youtube.com/@SlingingBits', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/HsKqIB93YaY?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="GPT-OSS:120B Benchmark on MacStudio M3 Ultra 512GB"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/HsKqIB93YaY/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'GPT-OSS:120B Benchmark on MacStudio M3 Ultra 512GB', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1ne707j
/r/LocalLLaMA/comments/1ne707j/gptoss120b_benchmark_on_macstudio_m3_ultra_512gb/
false
false
default
0
{'enabled': False, 'images': [{'id': 'ZG4bdltfqAzFGjN9F-FfgE2zk7kRQJfgDvCIprY4uH8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/ZG4bdltfqAzFGjN9F-FfgE2zk7kRQJfgDvCIprY4uH8.jpeg?width=108&crop=smart&auto=webp&s=90f51de735ff184063c9fe58288187da0d357286', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/ZG4bdltfqAzFGjN9F-FfgE2zk7kRQJfgDvCIprY4uH8.jpeg?width=216&crop=smart&auto=webp&s=49f9d80e429259bdc65f956a44202ea7f6081e30', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/ZG4bdltfqAzFGjN9F-FfgE2zk7kRQJfgDvCIprY4uH8.jpeg?width=320&crop=smart&auto=webp&s=4c622653eb980d127817be149754a29502d3c287', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/ZG4bdltfqAzFGjN9F-FfgE2zk7kRQJfgDvCIprY4uH8.jpeg?auto=webp&s=ab23235fecbcb4bfad73fdf02ebf5d878445bd2d', 'width': 480}, 'variants': {}}]}
TTS local install DeepSpeed problem
1
Anyone knows on how to actually install DeepSpeed when installing indextts on Windows? This is the instructions per their github page - >**Available Extra Features:** >`--extra deepspeed`: Adds DeepSpeed support (may speed up inference on some systems). >Important >**Important (Windows):** The DeepSpeed library may be difficult to install for some Windows users. You can skip it by removing the `--all-extras` flag. If you want any of the other extra features above, you can manually add their specific feature flags instead. >**Important (Linux/Windows):** If you see an error about CUDA during the installation, please ensure that you have installed NVIDIA's [CUDA Toolkit](https://developer.nvidia.com/cuda-toolkit) version **12.8** (or newer) on your system. I need DeepSpeed but multiple errors everything. I have 12.8 cuda and all necessary things but still. Anyone knows workaround?
2025-09-11T12:01:21
https://www.reddit.com/r/LocalLLaMA/comments/1ne6x4r/tts_local_install_deepspeed_problem/
Dragonacious
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ne6x4r
false
null
t3_1ne6x4r
/r/LocalLLaMA/comments/1ne6x4r/tts_local_install_deepspeed_problem/
false
false
self
1
null
My goal was to make local AI "plug-and-play" for everyone. So I launched PLYIA, a suite of simple GUIs (Win/Linux) that require zero coding.
3
>
2025-09-11T11:49:44
https://i.redd.it/zx6qn5k1wiof1.jpeg
plyia
i.redd.it
1970-01-01T00:00:00
0
{}
1ne6opg
false
null
t3_1ne6opg
/r/LocalLLaMA/comments/1ne6opg/my_goal_was_to_make_local_ai_plugandplay_for/
false
false
default
3
{'enabled': True, 'images': [{'id': 'zx6qn5k1wiof1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/zx6qn5k1wiof1.jpeg?width=108&crop=smart&auto=webp&s=58c03d1ea0909c0560fb6fd167a293727e980a5c', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/zx6qn5k1wiof1.jpeg?width=216&crop=smart&auto=webp&s=0ce0c3dd124d67db3477308d330b5285c41aa32e', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/zx6qn5k1wiof1.jpeg?width=320&crop=smart&auto=webp&s=7c835088b12987fcd550e4cab4da7a2a1dd815a6', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/zx6qn5k1wiof1.jpeg?width=640&crop=smart&auto=webp&s=7e3dd66e17698f32eec2c58c3e0d9b5ebbd7013d', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/zx6qn5k1wiof1.jpeg?width=960&crop=smart&auto=webp&s=f4ff9476349b0aa5b00a2d399ab2bd5e39d0c8b5', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/zx6qn5k1wiof1.jpeg?width=1080&crop=smart&auto=webp&s=6a3c2ffad28711ceae206733b4b65a1314b86fe9', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/zx6qn5k1wiof1.jpeg?auto=webp&s=c09924f47011adf7e995688225bfa68a8c8f1a0c', 'width': 1920}, 'variants': {}}]}
Recommendations needed for tree removal
0
I have an oak tree in my backyard and want your suggestions on what to do with it. I am planning get it removed as it is damaging the fence and approaching the deck and patio making it vulnerable to damage. please see pictures.
2025-09-11T11:04:46
https://www.reddit.com/r/LocalLLaMA/comments/1ne5uan/recommendations_needed_for_tree_removal/
Status_General_9644
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ne5uan
false
null
t3_1ne5uan
/r/LocalLLaMA/comments/1ne5uan/recommendations_needed_for_tree_removal/
false
false
self
0
null