| title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7 values | id stringlengths 7 7 | locked bool 2 classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2 classes | stickied bool 2 classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Struggling with OpenRouter sessions, tried something different | 0 | Been running some experiments with LLaMA models through OpenRouter, and honestly, the stateless setup is kind of brutal. Having to resend everything with each call makes sense from a routing perspective, but as a dev, it creates a ton of overhead. I’ve already hacked together a small memory layer just to keep context, and it still feels clunky.
Out of curiosity, I tried Backboard.io. It says “waitlist-only,” but I got in fast, so maybe they’re onboarding quietly. What stood out is the stateful sessions: it actually remembers context without me having to do all the duct-tape logic. That makes iterating with local models much smoother, since I can focus on the interaction rather than rebuilding memory every time.
Has anyone else here looked into alternatives, or are you just sticking with OpenRouter + your own memory patchwork?
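For reference, this is roughly the shape of the memory layer I mean - a minimal sketch, assuming the `openai` SDK pointed at OpenRouter (the model slug is just an example):

```python
# Minimal sketch of the memory layer: keep the running history client-side
# and resend it on every call. Assumes an OPENROUTER_API_KEY env var.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

history = [{"role": "system", "content": "You are a helpful assistant."}]

def chat(user_msg: str) -> str:
    history.append({"role": "user", "content": user_msg})
    resp = client.chat.completions.create(
        model="meta-llama/llama-3.1-8b-instruct",  # example slug
        messages=history,  # the whole context goes over the wire every call
    )
    reply = resp.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply
```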
| 2025-09-05T15:30:21 | https://www.reddit.com/r/LocalLLaMA/comments/1n98e0q/struggling_with_openrouter_sessions_tried/ | Any-Marionberry4035 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n98e0q | false | null | t3_1n98e0q | /r/LocalLLaMA/comments/1n98e0q/struggling_with_openrouter_sessions_tried/ | false | false | self | 0 | null |
Seems new model qwen 3 max preview is already available on qwen chat | 54 | 2025-09-05T15:28:15 | Independent-Wind4462 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n98c25 | false | null | t3_1n98c25 | /r/LocalLLaMA/comments/1n98c25/seems_new_model_qwen_3_max_preview_is_already/ | false | false | default | 54 | {'enabled': True, 'images': [{'id': 'nzfh1xg27dnf1', 'resolutions': [{'height': 113, 'url': 'https://preview.redd.it/nzfh1xg27dnf1.png?width=108&crop=smart&auto=webp&s=36d6529eb2513db84f60f21f7d3ed6a913365bfd', 'width': 108}, {'height': 226, 'url': 'https://preview.redd.it/nzfh1xg27dnf1.png?width=216&crop=smart&auto=webp&s=768698215b691ac2ef8d89630739c1d852e95a17', 'width': 216}, {'height': 334, 'url': 'https://preview.redd.it/nzfh1xg27dnf1.png?width=320&crop=smart&auto=webp&s=c96f953f08dbe5a9a96642c49d72aed5ba672e61', 'width': 320}, {'height': 669, 'url': 'https://preview.redd.it/nzfh1xg27dnf1.png?width=640&crop=smart&auto=webp&s=73f4d1143429da1ba0af0e95c543b1866fd87af5', 'width': 640}, {'height': 1004, 'url': 'https://preview.redd.it/nzfh1xg27dnf1.png?width=960&crop=smart&auto=webp&s=ee944ae9a897d90da4d66b002c85f72270b75547', 'width': 960}, {'height': 1130, 'url': 'https://preview.redd.it/nzfh1xg27dnf1.png?width=1080&crop=smart&auto=webp&s=226fdae0ec3cc9e7768be0d1b43e342f62b9c5ee', 'width': 1080}], 'source': {'height': 1130, 'url': 'https://preview.redd.it/nzfh1xg27dnf1.png?auto=webp&s=d2fc5ad8881508473fe071664098c41ac1b2ca2a', 'width': 1080}, 'variants': {}}]} | ||
Vision models for signatures | 2 | Been testing Gemma, LLaVA and Qwen to see how well they detect signatures in an image, but results have been very inconsistent - any recommendations for vision models for this purpose?
| 2025-09-05T15:21:50 | https://www.reddit.com/r/LocalLLaMA/comments/1n9864w/vision_models_for_signatures/ | 2BucChuck | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n9864w | false | null | t3_1n9864w | /r/LocalLLaMA/comments/1n9864w/vision_models_for_signatures/ | false | false | self | 2 | null |
Kwai-Klear/Klear-46B-A2.5B-Instruct: Sparse-MoE LLM (46B total / only 2.5B active) | 95 | 2025-09-05T15:16:49 | https://huggingface.co/Kwai-Klear/Klear-46B-A2.5B-Instruct | paf1138 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1n981di | false | null | t3_1n981di | /r/LocalLLaMA/comments/1n981di/kwaiklearklear46ba25binstruct_sparsemoe_llm_46b/ | false | false | default | 95 | {'enabled': False, 'images': [{'id': 'YCjYCLowWoUZPOtPjfRsNwF5BBEIscgMQg1iK3Ht-1Q', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/YCjYCLowWoUZPOtPjfRsNwF5BBEIscgMQg1iK3Ht-1Q.png?width=108&crop=smart&auto=webp&s=40904243c9783ee9ffeb083457ecb6eb0a2d63ec', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/YCjYCLowWoUZPOtPjfRsNwF5BBEIscgMQg1iK3Ht-1Q.png?width=216&crop=smart&auto=webp&s=165c48dcc471468f26d99a3fa50142dac793441a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/YCjYCLowWoUZPOtPjfRsNwF5BBEIscgMQg1iK3Ht-1Q.png?width=320&crop=smart&auto=webp&s=010d4d63f828a7f5649b0b576784e7f4252baf89', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/YCjYCLowWoUZPOtPjfRsNwF5BBEIscgMQg1iK3Ht-1Q.png?width=640&crop=smart&auto=webp&s=b95159fb1ff226ff1812f1b78e632fd38eaf6fc9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/YCjYCLowWoUZPOtPjfRsNwF5BBEIscgMQg1iK3Ht-1Q.png?width=960&crop=smart&auto=webp&s=4a62875741454c2da5537539222904ac0514f57d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/YCjYCLowWoUZPOtPjfRsNwF5BBEIscgMQg1iK3Ht-1Q.png?width=1080&crop=smart&auto=webp&s=aee934ee6a08c9ffe82febbf2574804dcf1db8f3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/YCjYCLowWoUZPOtPjfRsNwF5BBEIscgMQg1iK3Ht-1Q.png?auto=webp&s=fc5cbb43a04e4b1027a25235a4d538566c27fa9e', 'width': 1200}, 'variants': {}}]} | |
Qwen3 Max just dropped on OpenRouter - hope they actually open-weight/source it this time 🙏 | 1 | Qwen3-Max is an updated release built on the Qwen3 series, offering major improvements in reasoning, instruction following, multilingual support, and long-tail knowledge coverage compared to the January 2025 version. It delivers higher accuracy in math, coding, logic, and science tasks, follows complex instructions in Chinese and English more reliably, reduces hallucinations, and produces higher-quality responses for open-ended Q&A, writing, and conversation. The model supports over 100 languages with stronger translation and commonsense reasoning, and is optimized for retrieval-augmented generation (RAG) and tool calling, though it does not include a dedicated “thinking” mode. | 2025-09-05T15:15:56 | notrdm | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n980jp | false | null | t3_1n980jp | /r/LocalLLaMA/comments/1n980jp/qwen3_max_just_dropped_on_openrouter_hope_they/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': '550e3q9d4dnf1', 'resolutions': [{'height': 33, 'url': 'https://preview.redd.it/550e3q9d4dnf1.png?width=108&crop=smart&auto=webp&s=1e957bb76cea7fa3e8cf8695f784d076af717b15', 'width': 108}, {'height': 67, 'url': 'https://preview.redd.it/550e3q9d4dnf1.png?width=216&crop=smart&auto=webp&s=def8af8f540787169318580856a6470f2aaa2fc3', 'width': 216}, {'height': 99, 'url': 'https://preview.redd.it/550e3q9d4dnf1.png?width=320&crop=smart&auto=webp&s=cad5ed213ecc2d97bee3f808dc351e4ea6b87401', 'width': 320}, {'height': 199, 'url': 'https://preview.redd.it/550e3q9d4dnf1.png?width=640&crop=smart&auto=webp&s=313925ef97e3142691f6e9e58f51da724aac229d', 'width': 640}, {'height': 299, 'url': 'https://preview.redd.it/550e3q9d4dnf1.png?width=960&crop=smart&auto=webp&s=3c183f2f670d321e13108044decc8aa4d8eb7bde', 'width': 960}, {'height': 336, 'url': 'https://preview.redd.it/550e3q9d4dnf1.png?width=1080&crop=smart&auto=webp&s=8e59ef95ca7c15479c587f1acd745c129391013c', 'width': 1080}], 'source': {'height': 394, 'url': 'https://preview.redd.it/550e3q9d4dnf1.png?auto=webp&s=268439cf10f64a802b1488afa66489cb49a0650b', 'width': 1263}, 'variants': {}}]} | |
Best model for speech-to-text transcription that keeps filler words? | 7 | Hey everyone, I want to perform speech-to-text transcription that keeps filler words (um, ah, so, etc.), which signal the speaker's confidence. Is there any type of model that can help me? I tried WhisperX, but the results are not favorable. This is very important for me, as I'm writing a research paper. | 2025-09-05T15:01:11 | https://www.reddit.com/r/LocalLLaMA/comments/1n97mov/best_model_for_speech_to_text_transcription_for/ | Similar-Camp9685 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n97mov | false | null | t3_1n97mov | /r/LocalLLaMA/comments/1n97mov/best_model_for_speech_to_text_transcription_for/ | false | false | self | 7 | null |
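One trick worth trying before giving up on Whisper-family models: seed `initial_prompt` with filler-heavy text so decoding is biased toward verbatim output. A minimal sketch with the openai-whisper package (the prompt text and model size are assumptions to tune; results vary a lot by audio):

```python
# Sketch: bias Whisper toward verbatim output (fillers kept) via initial_prompt.
# Assumes `pip install openai-whisper` and an audio file on disk.
import whisper

model = whisper.load_model("medium")
result = model.transcribe(
    "audio.wav",
    # A filler-heavy prompt nudges the decoder toward keeping disfluencies
    # instead of cleaning them up.
    initial_prompt="So, um, yeah, I was like, uh, you know, it's, hmm...",
    condition_on_previous_text=False,  # reduces drift on long recordings
)
print(result["text"])
```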
Best model for speech to text Transcription? | 1 | [removed] | 2025-09-05T14:58:38 | https://www.reddit.com/r/LocalLLaMA/comments/1n97kd0/best_model_for_speech_to_text_transcription/ | Organic_Comparison83 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n97kd0 | false | null | t3_1n97kd0 | /r/LocalLLaMA/comments/1n97kd0/best_model_for_speech_to_text_transcription/ | false | false | self | 1 | null |
Best model for speech to text Transcription? | 1 | [removed] | 2025-09-05T14:57:31 | https://www.reddit.com/r/LocalLLaMA/comments/1n97jet/best_model_for_speech_to_text_transcription/ | Organic_Comparison83 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n97jet | false | null | t3_1n97jet | /r/LocalLLaMA/comments/1n97jet/best_model_for_speech_to_text_transcription/ | false | false | self | 1 | null |
PC for local LLM inference/GenAI development | 2 | Hi to all.
I am planning to buy a PC for local LLM inference and GenAI app development. I want it to be able to run 32B models (and maybe 70B for some testing), and I would like to know what you think about the following PC build. Any suggestions to improve performance and budget are welcome! (A quick sizing sketch follows the parts list below.)
CPU: AMD Ryzen 7 9800X3D 4.7/5.2GHz 494,9€
Motherboard: GIGABYTE X870 AORUS ELITE WIFI7 ICE 272€
RAM: Corsair Vengeance DDR5 6600MHz 64GB 2x32GB CL32 305,95€
Tower: Forgeon Arcanite ARGB Mesh Tower ATX White 109,99€
Liquid cooler: Tempest Liquid Cooler 360 Kit White 68,99€
Power supply: Corsair RM1200x SHIFT White Series 1200W 80 Plus Gold Modular 214,90€
Graphics card: MSI GeForce RTX 5090 VENTUS 3X OC 32GB GDDR7 Reflex 2 RTX AI DLSS4 2499€
Drive 1: Samsung 990 EVO Plus 1TB SSD 7150MB/s NVMe PCIe 5.0 x2 NVMe 2.0 NAND 78,99€
Drive 2: Samsung 990 EVO Plus 2TB SSD 7250MB/s NVMe PCIe 5.0 x2 NVMe 2.0 NAND 127,99€ | 2025-09-05T14:54:47 | https://www.reddit.com/r/LocalLLaMA/comments/1n97gvf/pc_for_local_llm_inferencegenai_development/ | JMarinG | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n97gvf | false | null | t3_1n97gvf | /r/LocalLLaMA/comments/1n97gvf/pc_for_local_llm_inferencegenai_development/ | false | false | self | 2 | null |
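A quick sizing sketch for the 32B/70B question above (rule-of-thumb numbers, not exact GGUF file sizes):

```python
# Rule-of-thumb sizing for quantized models; real GGUF files vary a little.
def approx_weights_gb(params_billions: float, bits_per_weight: float) -> float:
    return params_billions * bits_per_weight / 8

for params in (32, 70):
    for bpw in (4.5, 5.5):  # roughly Q4_K_M / Q5_K_M territory
        print(f"{params}B @ ~{bpw} bpw = ~{approx_weights_gb(params, bpw):.0f} GB weights")
# 32B @ ~4.5 bpw = ~18 GB -> fits in the 5090's 32 GB with room for KV cache.
# 70B @ ~4.5 bpw = ~39 GB -> needs CPU offload into the 64 GB of system RAM.
```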
Why is Gemini not able to navigate Google and GitHub? | 0 | [https://g.co/gemini/share/df2836e5f352](https://g.co/gemini/share/df2836e5f352)
This is pro by the way | 2025-09-05T14:49:13 | https://www.reddit.com/r/LocalLLaMA/comments/1n97br5/why_is_gemini_not_able_to_navigate_google_and/ | Superb_Intention2783 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n97br5 | false | null | t3_1n97br5 | /r/LocalLLaMA/comments/1n97br5/why_is_gemini_not_able_to_navigate_google_and/ | false | false | self | 0 | null |
Qwen 3 max | 461 | It's out
[https://openrouter.ai/qwen/qwen3-max](https://openrouter.ai/qwen/qwen3-max)
| 2025-09-05T14:42:28 | https://www.reddit.com/r/LocalLLaMA/comments/1n975er/qwen_3_max/ | LeatherRub7248 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n975er | false | null | t3_1n975er | /r/LocalLLaMA/comments/1n975er/qwen_3_max/ | false | false | self | 461 | {'enabled': False, 'images': [{'id': '9f9JRaQTq2uR5GC3copbxq5McLsZhYSzNHSbhHCgcmg', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/9f9JRaQTq2uR5GC3copbxq5McLsZhYSzNHSbhHCgcmg.png?width=108&crop=smart&auto=webp&s=76a9004950afd1af7949a07c52b71de14c0aaa71', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/9f9JRaQTq2uR5GC3copbxq5McLsZhYSzNHSbhHCgcmg.png?width=216&crop=smart&auto=webp&s=0921b178ad66a2bfee66cb9eb8da0807a1abd068', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/9f9JRaQTq2uR5GC3copbxq5McLsZhYSzNHSbhHCgcmg.png?width=320&crop=smart&auto=webp&s=9e14a0946da9919bc9f87dac6bf7eb39edb5c35a', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/9f9JRaQTq2uR5GC3copbxq5McLsZhYSzNHSbhHCgcmg.png?width=640&crop=smart&auto=webp&s=1f89b1589b444db5310feea66a3e0335c0591fac', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/9f9JRaQTq2uR5GC3copbxq5McLsZhYSzNHSbhHCgcmg.png?width=960&crop=smart&auto=webp&s=b0acc5f2e430c71520ec6dceadb61b9595049e25', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/9f9JRaQTq2uR5GC3copbxq5McLsZhYSzNHSbhHCgcmg.png?width=1080&crop=smart&auto=webp&s=8fe32e31d537e1e970985f1de285c1c62541dd08', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/9f9JRaQTq2uR5GC3copbxq5McLsZhYSzNHSbhHCgcmg.png?auto=webp&s=4dbf0557d063fe8ea64c0d772f3f879679b5bf64', 'width': 1200}, 'variants': {}}]} |
Qwen3-Max is this coming to HuggingFace??? | 2 | [Qwen3-Max on openrouter](https://preview.redd.it/5e3775kaycnf1.png?width=2434&format=png&auto=webp&s=ea0812c009525c431cf3890a118a61aecf1cbe69)
Just saw this on OpenRouter. Is this going to be an API-only release??? | 2025-09-05T14:39:49 | https://www.reddit.com/r/LocalLLaMA/comments/1n972vp/qwen3max_is_this_coming_to_huggingface/ | True_Requirement_891 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n972vp | false | null | t3_1n972vp | /r/LocalLLaMA/comments/1n972vp/qwen3max_is_this_coming_to_huggingface/ | false | false | 2 | null |
Qwen3-Max is this coming to HuggingFace??? | 1 | 2025-09-05T14:36:25 | https://www.reddit.com/r/LocalLLaMA/comments/1n96zr6/qwen3max_is_this_coming_to_huggingface/ | True_Requirement_891 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n96zr6 | false | null | t3_1n96zr6 | /r/LocalLLaMA/comments/1n96zr6/qwen3max_is_this_coming_to_huggingface/ | false | false | 1 | null | ||
fine-tune Gemma 3 270M | 3 | Hi guys,
I’ve been watching quite a few tutorials on how to fine-tune Gemma 3 270M, but I’m still struggling to fully understand the process.
First of all, regarding the dataset for training: I know how to upload the dataset file to Colab, but I don’t quite understand how to set up the script so that it actually trains on my dataset (for example, like in those chess dataset tutorials).
After that, I’d like to know: once the training is complete, how can I download the trained model? Where exactly is it stored?
And one last thing: after downloading the file — which I believe ends up being a GGUF — how can I integrate it into Ollama and make it work with Gemma 3 270M?
I know my explanation might sound a bit confusing, but that’s because I’m still trying to wrap my head around the whole process and may not be expressing it perfectly yet.
Could any of you help me out with this? Thanks a lot in advance! | 2025-09-05T14:32:27 | https://www.reddit.com/r/LocalLLaMA/comments/1n96w3r/finetune_gemma_3_270m/ | Mundane_Cell8608 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n96w3r | false | null | t3_1n96w3r | /r/LocalLLaMA/comments/1n96w3r/finetune_gemma_3_270m/ | false | false | self | 3 | null |
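Not a full answer, but the pipeline those questions describe looks roughly like this hedged sketch (TRL/PEFT APIs move quickly, so argument names may differ by version; dataset path and hyperparameters are placeholders):

```python
# Hedged sketch of the fine-tuning step (TRL + PEFT). Dataset path, chat
# format, and hyperparameters are placeholders to adapt.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("json", data_files="my_data.jsonl", split="train")

trainer = SFTTrainer(
    model="google/gemma-3-270m-it",
    train_dataset=dataset,
    args=SFTConfig(output_dir="gemma-270m-ft", num_train_epochs=3),
    peft_config=LoraConfig(r=16, lora_alpha=32, target_modules="all-linear"),
)
trainer.train()
trainer.save_model("gemma-270m-ft")  # this folder is what you download from Colab
```

From there, llama.cpp's `convert_hf_to_gguf.py` script converts the saved model folder (merge the LoRA adapter into the base weights first) into a GGUF, and a one-line Ollama `Modelfile` (`FROM ./gemma-270m-ft.gguf`) plus `ollama create my-gemma -f Modelfile` makes it runnable locally.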
If you have OpenRouter, I'd like your attention for an experiment | 0 | So I know this may sound like promotion, but it's really an open-source project with a new idea, and it would suck if people don't give it a try just because it's a new spin.
So I built an OpenRouter frontend, but it's not like any other: you can build custom "sunes", which are kinda like custom GPTs but with a new twist.
You are able to add CUSTOM HTML to them (including <script> tags), so a custom sune can be ANYthing - for example, I built an image format converter as a sune. There is a custom sune that even helps you build sunes if you give it the whole context of the codebase, and I made that easy with another custom sune that can fetch code from GitHub and insert it into the chat. There is also an Android APK, because I built this whole thing from my phone, so it lets me do development from mobile. I now do everything from it - I even have a sune that embeds an editor and lets me commit to GitHub. Please check out the GitHub repo and screenshots; I'd love for some people to share their custom sunes via email or GitHub issues.
https://github.com/multipleof4/sune | 2025-09-05T14:28:16 | https://www.reddit.com/r/LocalLLaMA/comments/1n96s50/if_you_have_openrouter_id_like_your_attention_in/ | Round_Ad_5832 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n96s50 | false | null | t3_1n96s50 | /r/LocalLLaMA/comments/1n96s50/if_you_have_openrouter_id_like_your_attention_in/ | false | false | self | 0 | null |
Qwen3 latest and most powerful language model | 7 | I have used their language model where I thought I would use the 235B model | 2025-09-05T14:26:39 | darkpigvirus | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n96qo2 | false | null | t3_1n96qo2 | /r/LocalLLaMA/comments/1n96qo2/qwen3_latest_and_most_powerful_language_model/ | false | false | default | 7 | {'enabled': True, 'images': [{'id': 'i7xrtvq0wcnf1', 'resolutions': [{'height': 34, 'url': 'https://preview.redd.it/i7xrtvq0wcnf1.png?width=108&crop=smart&auto=webp&s=c39eba84ec5b75235f1a256d40272ac0d1cbede5', 'width': 108}, {'height': 68, 'url': 'https://preview.redd.it/i7xrtvq0wcnf1.png?width=216&crop=smart&auto=webp&s=6545e89e4397573675f51b11cfc44d6ae01ac92f', 'width': 216}, {'height': 101, 'url': 'https://preview.redd.it/i7xrtvq0wcnf1.png?width=320&crop=smart&auto=webp&s=e6e035d6b7242276cf8d98c30d663c4feab4da1f', 'width': 320}, {'height': 203, 'url': 'https://preview.redd.it/i7xrtvq0wcnf1.png?width=640&crop=smart&auto=webp&s=871f82fa38be95c88ab12751259f94354138d04c', 'width': 640}, {'height': 305, 'url': 'https://preview.redd.it/i7xrtvq0wcnf1.png?width=960&crop=smart&auto=webp&s=8a488d6a8508251380d7e31f771a462f740c90b4', 'width': 960}, {'height': 343, 'url': 'https://preview.redd.it/i7xrtvq0wcnf1.png?width=1080&crop=smart&auto=webp&s=44119e0eca9917a569bfc146bc5eca86192c0526', 'width': 1080}], 'source': {'height': 527, 'url': 'https://preview.redd.it/i7xrtvq0wcnf1.png?auto=webp&s=7e4ca7c085fa08e1bc3057bdd462c4219d6071ce', 'width': 1656}, 'variants': {}}]} | |
Anyone using Cline/Aider/similar coding agents as components in larger agentic workflows? | 2 | I'm curious if anyone has experimented with using cline or other coding agents with local models within larger, more complex agentic systems rather than just standalone tools.
For example, imagine a workflow where:
* Agent A does some analysis and determines code needs to be written
* Agent A hands off to Cline/Aider to actually implement, test and maybe deploy the code
* Agent A gets the results back and continues with the next steps (using the generated code)
Or even more complex scenarios where you might have multiple specialized coding agents (one for frontend, one for backend, etc.) all coordinated by a higher-level orchestrator. Are there models or tools that work well as a coding agent exposed through an API? | 2025-09-05T14:23:35 | https://www.reddit.com/r/LocalLLaMA/comments/1n96nt1/anyone_using_clineaidersimilar_coding_agents_as/ | Watchguyraffle1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n96nt1 | false | null | t3_1n96nt1 | /r/LocalLLaMA/comments/1n96nt1/anyone_using_clineaidersimilar_coding_agents_as/ | false | false | self | 2 | null |
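One low-tech wiring for the hand-off described above is to treat the coding agent's CLI as a tool the orchestrator shells out to. A sketch using Aider's `--message` flag (the flag exists, though exact names vary across Aider versions; the hand-off protocol around it is an assumption):

```python
# Sketch: Agent A delegates an implementation task to Aider and reads back
# the result. Assumes `aider` is on PATH and the repo is the working directory.
import subprocess

def delegate_to_aider(task: str, files: list[str]) -> str:
    proc = subprocess.run(
        ["aider", "--yes", "--message", task, *files],
        capture_output=True, text=True, check=False,
    )
    return proc.stdout  # the orchestrator parses this and/or inspects the git diff

result = delegate_to_aider(
    "Add input validation to parse_config and a unit test for it.",
    ["config.py", "tests/test_config.py"],
)
```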
Why didn't you use Optane for running LLMs locally? | 0 | 2025-09-05T13:49:03 | https://www.reddit.com/gallery/1n95scl | ImportantOwl2939 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1n95scl | false | null | t3_1n95scl | /r/LocalLLaMA/comments/1n95scl/why_you_didnt_use_optane_for_running_llms_locally/ | false | false | 0 | null |
Best way to use Virtual try on in NanoBanana? | 0 | I tried virtual try-on by creating an image like the one below, so the placement would be precise:
https://preview.redd.it/znyoiya6ocnf1.png?width=304&format=png&auto=webp&s=757acc7f8e77cb90eb79b1fb4ce7c0d059fe9bf2
Result on ChatGPT:
[It did a pretty good job with the dress fit but failed to preserve the rest](https://preview.redd.it/zmox3zpaocnf1.png?width=390&format=png&auto=webp&s=ef871a2f461233a0c916a0ae6ddedd452aaac75e)
When I tried to do the same in Google Nano Banana, it sometimes fails (replacing only half of the outfit).
Is there a better way to use try-on in Nano Banana? Thanks. | 2025-09-05T13:45:38 | https://www.reddit.com/r/LocalLLaMA/comments/1n95pdk/best_way_to_use_virtual_try_on_in_nanobanana/ | Short-Reaction7195 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n95pdk | false | null | t3_1n95pdk | /r/LocalLLaMA/comments/1n95pdk/best_way_to_use_virtual_try_on_in_nanobanana/ | false | false | 0 | null |
Unsloth just released their GGUF of Kimi-K2-Instruct-0905! | 154 | 2025-09-05T13:34:21 | https://huggingface.co/unsloth/Kimi-K2-Instruct-0905-GGUF | TheAndyGeorge | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1n95fl4 | false | null | t3_1n95fl4 | /r/LocalLLaMA/comments/1n95fl4/unsloth_just_released_their_gguf_of/ | false | false | 154 | {'enabled': False, 'images': [{'id': 'u42y4pGiiWpLArGTxtLnpU7XIOrkkmzZ5xAid1ozch8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/u42y4pGiiWpLArGTxtLnpU7XIOrkkmzZ5xAid1ozch8.png?width=108&crop=smart&auto=webp&s=f1632573a388c6a4bb8804f6c3a6417f6a0448c3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/u42y4pGiiWpLArGTxtLnpU7XIOrkkmzZ5xAid1ozch8.png?width=216&crop=smart&auto=webp&s=bc96d2c92bdf08fa856a1cfe81eecc2d5d746d2e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/u42y4pGiiWpLArGTxtLnpU7XIOrkkmzZ5xAid1ozch8.png?width=320&crop=smart&auto=webp&s=4b6bfaa86dfa962e0d5e6aa0b2d0d2166d5314e0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/u42y4pGiiWpLArGTxtLnpU7XIOrkkmzZ5xAid1ozch8.png?width=640&crop=smart&auto=webp&s=f7d4e7a1b9c3b96563747fc8517c620156b1d622', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/u42y4pGiiWpLArGTxtLnpU7XIOrkkmzZ5xAid1ozch8.png?width=960&crop=smart&auto=webp&s=4c4fa5e75c298f31c36bf41eed43627332d98fe3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/u42y4pGiiWpLArGTxtLnpU7XIOrkkmzZ5xAid1ozch8.png?width=1080&crop=smart&auto=webp&s=117603b10edccdfe9d9a4c66f796d37cd6f676ce', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/u42y4pGiiWpLArGTxtLnpU7XIOrkkmzZ5xAid1ozch8.png?auto=webp&s=c0104607347b21f437d688de9b55ad533fdf5060', 'width': 1200}, 'variants': {}}]} | ||
Looking for SME practitioners for a 45–60 min expert interview (Master’s thesis on selecting & implementing LLMs in SMEs) | 2 | Hi everyone! I’m **Eric Lohr**, a Master’s student in Economics at **Leibniz University Hannover**.
For my thesis, I’m researching:
**How small and medium-sized enterprises (SMEs) select and introduce Large Language Models (LLMs) into their business processes - with the goal of building a practical implementation framework for SMEs.**
I’m looking to interview **practitioners** who have evaluated or rolled out LLMs (e.g., ChatGPT/ChatGPT Enterprise, Microsoft 365 Copilot, Azure OpenAI, Claude, Mistral, etc.) in an SME context (ideally **<250 employees**, but up to \~500 is fine).
**What we’ll talk about (high level):**
* Selection & evaluation (build/buy, vendor choice, data/security requirements)
* Pilot design → adoption → production rollout
* Change management, enablement, prompt guidelines
* Governance, compliance, and risk controls
* Metrics & ROI (what worked, what didn’t), lessons learned
**Logistics:**
* **45–60 min** video call (Zoom/Teams), scheduled at your convenience
* **Anonymized & confidential**; recording only with your consent
* You’ll receive a **summary of findings** after completion of my study
**If you’re interested:**
Please **DM me** with your **role**, **company size**, **industry/country**, and **1–2 lines** on your LLM use case(s). Happy to share a brief interview guide up front.
Thanks a lot for supporting academic research and helping create actionable guidance for SMEs! 🙌 | 2025-09-05T13:27:43 | https://www.reddit.com/r/LocalLLaMA/comments/1n959yr/looking_for_sme_practitioners_for_a_4560_min/ | prettymofukkalo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n959yr | false | null | t3_1n959yr | /r/LocalLLaMA/comments/1n959yr/looking_for_sme_practitioners_for_a_4560_min/ | false | false | self | 2 | null |
Inference optimizations on ROCm? | 11 | What kind of optimizations are you guys using for inference on ROCm, either on vLLM or SGLang?
For an 8B model (16-bit) on a rented MI300X, I'm getting 80 tps, and then throughput drops to 10 tps when I run 5 concurrent connections. This is with a max model length of 20,000 on vLLM.
In general, on the ROCm platform, are there certain flags or environment variables that work for you? I always feel like the docs are out of date. | 2025-09-05T13:22:27 | https://www.reddit.com/r/LocalLLaMA/comments/1n955g6/inference_optimizations_on_rocm/ | smirkishere | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n955g6 | false | null | t3_1n955g6 | /r/LocalLLaMA/comments/1n955g6/inference_optimizations_on_rocm/ | false | false | self | 11 | null |
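For what it's worth, a hedged sketch of the knobs usually worth sweeping for concurrent throughput before blaming the hardware (the env var is a real vLLM ROCm toggle to A/B test, not a guaranteed win; values are starting points):

```python
# Sketch: settings worth sweeping on ROCm for batched throughput.
import os
os.environ["VLLM_USE_TRITON_FLASH_ATTN"] = "0"  # try both "0" and "1" on ROCm

from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",  # stand-in for your 8B model
    max_model_len=20000,
    gpu_memory_utilization=0.90,  # more KV-cache headroom helps batching
    max_num_seqs=64,              # let continuous batching actually batch
)
outputs = llm.generate(["Hello"] * 5, SamplingParams(max_tokens=128))
```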
EmbeddingGemma + SQLite-vec for fully offline RAG system | 11 | 2025-09-05T13:16:45 | https://github.com/philschmid/gemini-samples/blob/main/scripts/embeddinggemma-sqlite-ollama.py | philschmid | github.com | 1970-01-01T00:00:00 | 0 | {} | 1n950lo | false | null | t3_1n950lo | /r/LocalLLaMA/comments/1n950lo/embeddinggemma_sqlitevec_for_fully_offline_rag/ | false | false | default | 11 | {'enabled': False, 'images': [{'id': 'pLXWn-yhXGLivWj-Q4jqYQjRejtbMR60x9BaCmVNEGs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pLXWn-yhXGLivWj-Q4jqYQjRejtbMR60x9BaCmVNEGs.png?width=108&crop=smart&auto=webp&s=5d502422f23bfd0311084a8ea9326b10d061faf1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/pLXWn-yhXGLivWj-Q4jqYQjRejtbMR60x9BaCmVNEGs.png?width=216&crop=smart&auto=webp&s=8ad1ffbd5c637079a3a5946b8ed736d5e3dde46d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/pLXWn-yhXGLivWj-Q4jqYQjRejtbMR60x9BaCmVNEGs.png?width=320&crop=smart&auto=webp&s=cfdfa6a28ee230f844ee02fc585e7841052f66d4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/pLXWn-yhXGLivWj-Q4jqYQjRejtbMR60x9BaCmVNEGs.png?width=640&crop=smart&auto=webp&s=73229a8e145b44703da25f1ed30f4ee98181eadf', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/pLXWn-yhXGLivWj-Q4jqYQjRejtbMR60x9BaCmVNEGs.png?width=960&crop=smart&auto=webp&s=61485137d6235818186b8b0ac9191a6815859b3b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/pLXWn-yhXGLivWj-Q4jqYQjRejtbMR60x9BaCmVNEGs.png?width=1080&crop=smart&auto=webp&s=a7ee0299fcffef6810a4afc47a1f35e6fe51b05d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/pLXWn-yhXGLivWj-Q4jqYQjRejtbMR60x9BaCmVNEGs.png?auto=webp&s=06b3d847365003c397ccf2bc4ac89bbb0fb27d39', 'width': 1200}, 'variants': {}}]} | |
Looking to buy a 2nd laptop | 4 | Hey, I'm on a tight budget and looking to buy a laptop. Will this laptop handle local LLMs:
HP EliteBook Workstation
Intel Core i7-14700HX (4.4GHz) processor, 5TB SSD, 8GB RAM (expandable to 32GB), and a NVIDIA GeForce RTX 4070M 6GB GPU | 2025-09-05T13:04:53 | https://www.reddit.com/r/LocalLLaMA/comments/1n94qsm/looking_to_buy_a_2nd_laptop/ | SilverRegion9394 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n94qsm | false | null | t3_1n94qsm | /r/LocalLLaMA/comments/1n94qsm/looking_to_buy_a_2nd_laptop/ | false | false | self | 4 | null |
Succeeded in building a full-level backend application with "qwen3-235b-a22b" in AutoBE | 36 | https://github.com/samchon/autobe-example-todo-qwen3-235b-a22b
Although what I've built with `qwen3-235b-a22b` (2507) is just a simple backend application composed of 10 API functions and 37 DTO schemas, this marks the first time I've successfully generated a full-level backend application without any compilation errors.
I'm continuously testing larger backend applications while enhancing AutoBE (an open-source project for building full-level backend applications using AI-friendly compilers) system prompts and its AI-friendly compilers. I believe it may be possible to generate more complex backend applications like a Reddit-style community (with around 200 API functions) by next month.
> I also tried the `qwen3-30b-a3b` model, but it struggles with defining DTO types. However, one amazing thing is that its requirement analysis report and database design were quite professional. Since it's a smaller model, I won't invest much effort in it, but I was surprised by the quality of its requirements definition and DB design.
Currently, AutoBE requires about 150 million tokens using `gpt-4.1` to create an Amazon like shopping mall-level backend application, which is very expensive (approximately $450). In addition to RAG tuning, using local LLM models like `qwen3-235b-a22b` could be a viable alternative.
The results from `qwen3-235b-a22b` were so interesting and promising that our AutoBE hackathon, originally planned to support only `gpt-4.1` and `gpt-4.1-mini`, urgently added the `qwen3-235b-a22b` model to the contest. If you're interested in building full-level backend applications with AI and local LLMs like qwen3, we'd love to have you join our hackathon and share this exciting experience.
We will test as many local LLMs as possible with AutoBE and report our findings to this channel whenever we discover promising results. Furthermore, whenever we find a model that excels at backend coding, we will regularly host hackathons to share experiences and collect diverse case studies.
- Hackathon Contest: https://autobe.dev/docs/hackathon/
- Github Repository: https://github.com/wrtnlabs/autobe | 2025-09-05T13:00:38 | jhnam88 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n94n2x | false | null | t3_1n94n2x | /r/LocalLLaMA/comments/1n94n2x/succeeded_to_build_fulllevel_backend_application/ | false | false | default | 36 | {'enabled': True, 'images': [{'id': 'bya05sjkgcnf1', 'resolutions': [{'height': 116, 'url': 'https://preview.redd.it/bya05sjkgcnf1.png?width=108&crop=smart&auto=webp&s=7aca0966fee1bdf64fc65f3a161ff55dddc03a2a', 'width': 108}, {'height': 233, 'url': 'https://preview.redd.it/bya05sjkgcnf1.png?width=216&crop=smart&auto=webp&s=be677e69125c0da3015a2e38e87d9298df41a7bf', 'width': 216}, {'height': 346, 'url': 'https://preview.redd.it/bya05sjkgcnf1.png?width=320&crop=smart&auto=webp&s=cec55a8295cb6e75a23643f4026e5d53356c9c75', 'width': 320}], 'source': {'height': 652, 'url': 'https://preview.redd.it/bya05sjkgcnf1.png?auto=webp&s=9f7ad8aa1d83bf1bb08747872cf51c13659a5b95', 'width': 603}, 'variants': {}}]} | |
AISlop | General AI Agent with small models | 1 | Hi :D
Built a small C# console app called **AI Slop** – it’s an AI agent that manages your local file system using natural language. Inspired by the project "Manus AI"
It runs fully local with **Ollama** and works well with models like **qwen3-coder**.
* Natural language → file + folder operations (create, read, modify, navigate, etc.)
* Transparent “thought process” before each action
* Extensible C# toolset for adding new capabilities
* Uses a simple think → act → feedback loop
Example:
Task: create a project folder "hello-world" with app.py that prints "Hello from AI Slop!"
The agent will reason through the task, create the folder, navigate into it, build the file, and even test it if asked to.
The agent and app are still in development, but I could produce a good example even with a small model like qwen3-4b.
Repo: [cride9/AISlop](https://github.com/cride9/AISlop)
Example workflow + output: [EXAMPLE\_OUTPUT.md](https://github.com/cride9/AISlop/blob/master/EXAMPLE_OUTPUT.md) [EXAMPLE\_WORKFLOW.md](https://github.com/cride9/AISlop/blob/master/EXAMPLE_WORKFLOW.md)
**Examples are made with the model:** "*qwen3:4b-instruct-2507-q8\_0*" with ollama using 32k context | 2025-09-05T12:59:10 | https://www.reddit.com/r/LocalLLaMA/comments/1n94lqx/aislop_general_ai_agent_with_small_models/ | cride20 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n94lqx | false | null | t3_1n94lqx | /r/LocalLLaMA/comments/1n94lqx/aislop_general_ai_agent_with_small_models/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'nEq1XnxzzcM6jsr2eRXiknLJVHcRjfKM-5vtI6sI4M0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/nEq1XnxzzcM6jsr2eRXiknLJVHcRjfKM-5vtI6sI4M0.png?width=108&crop=smart&auto=webp&s=2c19e90d31d684a2061b01982cc88b0eb76bb453', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/nEq1XnxzzcM6jsr2eRXiknLJVHcRjfKM-5vtI6sI4M0.png?width=216&crop=smart&auto=webp&s=677c64bdfe54614e5edcbd9c9e6866ee63a647ff', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/nEq1XnxzzcM6jsr2eRXiknLJVHcRjfKM-5vtI6sI4M0.png?width=320&crop=smart&auto=webp&s=e25b15d672dcb67c101989cc2c052d154e3de75a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/nEq1XnxzzcM6jsr2eRXiknLJVHcRjfKM-5vtI6sI4M0.png?width=640&crop=smart&auto=webp&s=5329d0e4a793c21b9fa5f2f3c23175fff938fbb9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/nEq1XnxzzcM6jsr2eRXiknLJVHcRjfKM-5vtI6sI4M0.png?width=960&crop=smart&auto=webp&s=284c4e2e163d6fb8bb2fc05a92cd614792e12d75', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/nEq1XnxzzcM6jsr2eRXiknLJVHcRjfKM-5vtI6sI4M0.png?width=1080&crop=smart&auto=webp&s=485f992b1a950dcdde1073ba9ac072f269331f6a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/nEq1XnxzzcM6jsr2eRXiknLJVHcRjfKM-5vtI6sI4M0.png?auto=webp&s=d9a48ac16fb77c5e364433ae2299b2cecd9b2a51', 'width': 1200}, 'variants': {}}]} |
Current SOTA text-to-text LLM? | 5 | What is the best model I can run on my 4090 for non-coding tasks? What quantized models can you recommend for 24GB VRAM? | 2025-09-05T12:57:36 | https://www.reddit.com/r/LocalLLaMA/comments/1n94ke8/current_sota_text_to_text_llm/ | 1GewinnerTwitch | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n94ke8 | false | null | t3_1n94ke8 | /r/LocalLLaMA/comments/1n94ke8/current_sota_text_to_text_llm/ | false | false | self | 5 | null |
Succeeded in generating a full-level backend application with "qwen3-235b-a22b" (in AutoBE) | 1 | https://github.com/samchon/autobe-example-todo-qwen3-235b-a22b
Although what I've built with `qwen3-235b-a22b` (2507) is just a simple backend application composed of 10 API functions and 37 DTO schemas, this marks the first time I've successfully generated a full-level backend application without any compilation errors.
I'm continuously testing larger backend applications while enhancing AutoBE (an open-source project for building full-level backend applications using AI-friendly compilers) system prompts and its AI-friendly compilers. I believe it may be possible to generate more complex backend applications like a Reddit-style community (with around 200 API functions) by next month.
> I also tried the `qwen3-30b-a3b` model, but it struggles with defining DTO types. However, one amazing thing is that its requirement analysis report and database design were quite professional. Since it's a smaller model, I won't invest much effort in it, but I was surprised by the quality of its requirements definition and DB design.
Currently, AutoBE requires about 150 million tokens using `gpt-4.1` to create an Amazon like shopping mall-level backend application, which is very expensive (approximately $450). In addition to RAG tuning, using local LLM models like `qwen3-235b-a22b` could be a viable alternative.
The results from `qwen3-235b-a22b` were so interesting and promising that our AutoBE hackathon, originally planned to support only `gpt-4.1` and `gpt-4.1-mini`, urgently added the `qwen3-235b-a22b` model to the contest. If you're interested in building full-level backend applications with AI and local LLMs like qwen3, we'd love to have you join our hackathon and share this exciting experience.
We will test as many local LLMs as possible with AutoBE and report our findings to this channel whenever we discover promising results. Furthermore, whenever we find a model that excels at backend coding, we will regularly host hackathons to share experiences and collect diverse case studies.
- Hackathon Contest: https://autobe.dev/docs/hackathon/
- Github Repository: https://github.com/wrtnlabs/autobe | 2025-09-05T12:57:34 | https://www.reddit.com/r/LocalLLaMA/comments/1n94kcv/succeeded_to_generate_fulllevevl_backend/ | jhnam88 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n94kcv | false | null | t3_1n94kcv | /r/LocalLLaMA/comments/1n94kcv/succeeded_to_generate_fulllevevl_backend/ | false | false | self | 1 | null |
I've used both (mainly for data analysis, writing, and vibe coding games), and I must say Kimi 0905 is superior to claude 4 sonnet, and even opus in some ways as well. | 1 | Title | 2025-09-05T12:50:50 | https://www.reddit.com/r/LocalLLaMA/comments/1n94f25/ive_used_both_mainly_for_data_analysis_writing/ | Longjumping_Spot5843 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n94f25 | false | null | t3_1n94f25 | /r/LocalLLaMA/comments/1n94f25/ive_used_both_mainly_for_data_analysis_writing/ | false | false | self | 1 | null |
Looking for resources and a team for AGI | 0 | Hi guys, I am looking for resources and other material related to AGI, to learn from and implement. | 2025-09-05T12:50:38 | https://www.reddit.com/r/LocalLLaMA/comments/1n94ewb/looking_for_ressoruces_and_team_for_agi/ | LahmeriMohamed | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n94ewb | false | null | t3_1n94ewb | /r/LocalLLaMA/comments/1n94ewb/looking_for_ressoruces_and_team_for_agi/ | false | false | self | 0 | null |
The hypocrisy! | 1 | 2025-09-05T12:21:25 | Severin_Suveren | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n93rha | false | null | t3_1n93rha | /r/LocalLLaMA/comments/1n93rha/the_hypocrisy/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'npbtq0gp9cnf1', 'resolutions': [{'height': 69, 'url': 'https://preview.redd.it/npbtq0gp9cnf1.png?width=108&crop=smart&auto=webp&s=219e52d58080ac5af9ac119988fb3bd0d639001f', 'width': 108}, {'height': 138, 'url': 'https://preview.redd.it/npbtq0gp9cnf1.png?width=216&crop=smart&auto=webp&s=e022bc7e9ea282dd8a3d43e2c54f94e57bd7ba85', 'width': 216}, {'height': 205, 'url': 'https://preview.redd.it/npbtq0gp9cnf1.png?width=320&crop=smart&auto=webp&s=286295cefbd8b2f0d33047ffa7c5d9d66207509d', 'width': 320}, {'height': 411, 'url': 'https://preview.redd.it/npbtq0gp9cnf1.png?width=640&crop=smart&auto=webp&s=2d114f7b8ff04ea0400dcd26cd05d24e3fe1c2b9', 'width': 640}], 'source': {'height': 565, 'url': 'https://preview.redd.it/npbtq0gp9cnf1.png?auto=webp&s=b9c1a505f01535d4824a0554d2b9de1852677db1', 'width': 878}, 'variants': {}}]} | ||
Any good TTS and voice cloning right now? | 6 | Is there actually any good TTS and voice cloner that supports longer text at once?
Other than chatterbox, is there anything better? | 2025-09-05T12:10:36 | https://www.reddit.com/r/LocalLLaMA/comments/1n93j12/any_good_tts_and_voice_cloning_right_now/ | Dragonacious | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n93j12 | false | null | t3_1n93j12 | /r/LocalLLaMA/comments/1n93j12/any_good_tts_and_voice_cloning_right_now/ | false | false | self | 6 | null |
With these specs, can I really get a local LLM? If so, help me with something | 1 | I am planning to run a local LLM, since ChatGPT 5 is being forced on free users like me with very low limits, as if to indirectly kick us out. First, here are my specs:
Processor 11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80GHz (2.80 GHz)
Installed RAM 16.0 GB (15.8 GB usable)
System type 64-bit operating system, x64-based processor
No GPU
With this spec, how many B (parameters, I guess - I am new to local LLMs) would be the best fit? I could ask an AI this too, but I want some real-world input. | 2025-09-05T11:58:36 | https://www.reddit.com/r/LocalLLaMA/comments/1n939sz/with_this_specs_can_i_really_able_to_get_local/ | Mohmedh_K_A | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n939sz | false | null | t3_1n939sz | /r/LocalLLaMA/comments/1n939sz/with_this_specs_can_i_really_able_to_get_local/ | false | false | self | 1 | null |
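For a CPU-only 16 GB machine, the usual rule of thumb is that a 4-bit quant needs roughly (parameters × 4.5 bits / 8) GB of RAM plus context cache, which puts ~7-8B models in the comfortable range. A minimal sketch with llama-cpp-python (the model file name is an example):

```python
# Sketch: quick sizing check plus loading a 4-bit 7B model CPU-only.
# Assumes `pip install llama-cpp-python` and a downloaded GGUF file.
from llama_cpp import Llama

# ~7B params * ~4.5 bits / 8 = ~4 GB of weights, which fits in 16 GB RAM;
# a 13-14B Q4 (~8 GB) is about the practical ceiling on this machine.
llm = Llama(model_path="qwen2.5-7b-instruct-q4_k_m.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}]
)
print(out["choices"][0]["message"]["content"])
```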
🤖 Free Study Tool with Notes, Flashcards & AI Chatbot using Llama | 1 | [removed] | 2025-09-05T11:41:03 | https://www.reddit.com/r/LocalLLaMA/comments/1n92xbu/free_study_tool_with_notes_flashcards_ai_chatbot/ | lambo_lover_dev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n92xbu | false | null | t3_1n92xbu | /r/LocalLLaMA/comments/1n92xbu/free_study_tool_with_notes_flashcards_ai_chatbot/ | false | false | self | 1 | null |
Advice for fine-tuning a model to change two behaviors, subtly? | 0 | How do you change a subtle behavior of a model by fine-tuning?
Situation
A model I'm using has two quirks: 1) it keeps providing citations when I press it to quote sources, and when it does start citing, it produces hallucinated sources; 2) it keeps thinking that a concept is X when that concept is actually Y.
Otherwise the model is perfect. Today, after my first fine-tune with 400 rows of data, the model completely broke and became low-IQ. Its responses also became super brief, matching the style of the fine-tune dataset.
Because I just need to shape the 2 small behaviors above, is there any advice for me?
Should I make my dataset even smaller, focus on these 2 points only, and then lower the LR? | 2025-09-05T11:38:45 | https://www.reddit.com/r/LocalLLaMA/comments/1n92vp3/advice_for_fine_tuning_of_model_to_change_two/ | rockybaby2025 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n92vp3 | false | null | t3_1n92vp3 | /r/LocalLLaMA/comments/1n92vp3/advice_for_fine_tuning_of_model_to_change_two/ | false | false | self | 0 | null |
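For behavior-shaping rather than knowledge injection, the usual advice is exactly what the question suggests: a small, narrowly targeted dataset, a low-rank LoRA on few modules, and a much lower learning rate. A hedged sketch of that kind of config (values are starting points, not a recipe):

```python
# Sketch: gentle LoRA setup for nudging two behaviors without lobotomizing
# the model. Pair it with a dataset covering only the behaviors you want
# changed, written in the model's normal (verbose) style.
from peft import LoraConfig
from trl import SFTConfig

peft_config = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # touch less of the network
)
args = SFTConfig(
    output_dir="behavior-fix",
    learning_rate=5e-6,              # far lower than typical SFT rates
    num_train_epochs=1,              # one pass; more epochs = more drift
    per_device_train_batch_size=2,
)
```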
LiquidGEMM: Seems interesting | 8 | **LiquidGEMM: Hardware-Efficient W4A8 GEMM Kernel for High-Performance LLM Serving**
[https://arxiv.org/abs/2509.01229](https://arxiv.org/abs/2509.01229)
| 2025-09-05T11:28:15 | https://www.reddit.com/r/LocalLLaMA/comments/1n92ofw/liquidgemm_seems_interesting/ | Chance-Studio-8242 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n92ofw | false | null | t3_1n92ofw | /r/LocalLLaMA/comments/1n92ofw/liquidgemm_seems_interesting/ | false | false | self | 8 | null |
List of open models released or updated this week on this sub, just in case you missed one. | 324 | A quick list of model updates and new releases mentioned in posts on LocalLLaMA this week. I wanted to include links to the posts/models, but that didn't go through.
* **Kimi K2-0905** – new release from Moonshot AI
* **Wayfarer 2 12B & Nova 70B** – open-sourced narrative roleplay models from AI Dungeon
* **EmbeddingGemma (300M)** – Google’s compact multilingual embedding model
* **Apertus** – new open multilingual LLM from ETH Zürich (40%+ non-English training data)
* **WEBGEN-4B** – web design generation model trained on 100k synthetic samples
* **Lille (130M)** – a truly open-source small language model (trained fully from scratch)
* **Hunyuan-MT-7B & Hunyuan-MT-Chimera-7B** – Tencent’s new translation & ensemble models
* **GPT-OSS-120B** – benchmark updates
* **Beens-MiniMax (103M MoE)** – scratch-built, SFT + LoRA experiments | 2025-09-05T11:21:44 | https://www.reddit.com/r/LocalLLaMA/comments/1n92jy2/list_of_open_models_released_or_updated_this_week/ | aifeed-fyi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n92jy2 | false | null | t3_1n92jy2 | /r/LocalLLaMA/comments/1n92jy2/list_of_open_models_released_or_updated_this_week/ | false | false | self | 324 | null |
I am making a deep research tool for myself, needing more advice | 6 | Hi guys,
As I mentioned in the title, I am making a deep research tool that produces reports in the style of a scientific paper.
I'm not sure if it's suitable to post here, but let me give it a try, since everyone here has energy for anything AI-related.
Instead of Langchain, I am using Semantic Kernel, and I can basically create a PDF file now.
I posted the same content on C# Corner, but I think people there just didn't care about it.
This is a recent research report my tool produced for the request **"Comparison of pgvector search for embedded data with vector\_l2\_ops, vector\_cosine\_ops and vector\_ip\_ops"**: [Google drive link](https://drive.google.com/file/d/1-C6FqhsrHVdnwGOHQsP2Tiq4WItUtdQi/view?usp=drive_link)
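For readers who don't live in Postgres: those three names are pgvector operator classes - same stored vectors, different distance functions at index and query time. A minimal sketch of what the comparison covers (psycopg2 plus table and column names are assumptions):

```python
# Sketch: the three pgvector distance op-classes being compared.
import psycopg2

conn = psycopg2.connect("dbname=research")
cur = conn.cursor()
# One index per distance metric (HNSW shown; ivfflat takes the same op-classes):
cur.execute("CREATE INDEX ON items USING hnsw (embedding vector_l2_ops);")
cur.execute("CREATE INDEX ON items USING hnsw (embedding vector_cosine_ops);")
cur.execute("CREATE INDEX ON items USING hnsw (embedding vector_ip_ops);")
conn.commit()
# Matching query operators: <-> L2, <=> cosine distance, <#> inner product.
cur.execute(
    "SELECT id FROM items ORDER BY embedding <=> %s::vector LIMIT 5;",
    ("[0.1, 0.2, 0.3]",),
)
print(cur.fetchall())
```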
The cost is around $0.10 for embedding and LLM reasoning with GPT-5-mini.
The content is currently good from my point of view, but the sections don't link together very well, and the writing tone is not clear enough.
Please put yourself in the shoes of a reader or researcher: what would you expect from a deep research tool? | 2025-09-05T11:21:22 | https://www.reddit.com/r/LocalLLaMA/comments/1n92jp0/i_am_making_a_deep_research_tool_for_myself/ | Vozer_bros | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n92jp0 | false | null | t3_1n92jp0 | /r/LocalLLaMA/comments/1n92jp0/i_am_making_a_deep_research_tool_for_myself/ | false | false | self | 6 | null |
List of models released or updated this week on this subreddit, in case you missed any .. | 1 | [removed] | 2025-09-05T11:17:53 | https://www.reddit.com/r/LocalLLaMA/comments/1n92hb9/list_of_models_released_or_updated_this_week_on/ | aifeed-fyi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n92hb9 | false | null | t3_1n92hb9 | /r/LocalLLaMA/comments/1n92hb9/list_of_models_released_or_updated_this_week_on/ | false | false | self | 1 | null |
Lawyers vs Python — building my own legal AI during divorce in Japan | 1 | [removed] | 2025-09-05T11:01:33 | https://www.reddit.com/r/LocalLLaMA/comments/1n9262r/lawyers_vs_python_building_my_own_legal_ai_during/ | IntelligentHope9866 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n9262r | false | null | t3_1n9262r | /r/LocalLLaMA/comments/1n9262r/lawyers_vs_python_building_my_own_legal_ai_during/ | false | false | self | 1 | null |
[AutoBE Hackathon] AI chatbot generating Backend Application with AI Compilers ($6,400 Prize Pool) | 2 | **Full details**: https://autobe.dev/docs/hackathon
Wrtn Technologies is hosting the 1st AutoBE Hackathon to answer one burning question: Can AI truly make backend applications?
## What is AutoBE?
https://github.com/wrtnlabs/autobe
AutoBE is an AI-powered no-code platform that generates complete backend applications through natural language conversations.
It follows a 5-stage waterfall process (Requirements → Database Schema → API Design → Test Code → Implementation) with real-time compiler validation at each step.
The result? Backend applications built with TypeScript, NestJS, and Prisma that actually compile and run. Here is the example backend applications generated by AutoBE:
1. Discussion Board: https://github.com/wrtnlabs/autobe-example-bbs
2. To Do List: https://github.com/wrtnlabs/autobe-example-todo
3. Reddit Community: https://github.com/wrtnlabs/autobe-example-reddit
4. E-Commerce: https://github.com/wrtnlabs/autobe-example-shopping
- Requirements Analysis: [Report](https://github.com/wrtnlabs/autobe-example-shopping/tree/main/docs/analysis)
- Database Design: [Entity Relationship Diagram](https://github.com/wrtnlabs/autobe-example-shopping/tree/main/docs/ERD.md) / [Prisma Schema](https://github.com/wrtnlabs/autobe-example-shopping/tree/main/prisma/schema)
- API Design: [API Controllers](https://github.com/wrtnlabs/autobe-example-shopping/tree/main/src/controllers) / [DTO Structures](https://github.com/wrtnlabs/autobe-example-shopping/tree/main/src/api/structures)
- E2E Test Functions: [`test/features/api`](https://github.com/wrtnlabs/autobe-example-shopping/tree/main/test/features/api)
- API Impelementations: [`src/providers`](https://github.com/wrtnlabs/autobe-example-shopping/tree/main/src/providers)
- AI Review: [AI_REVIEW.md](https://github.com/wrtnlabs/autobe-example-shopping/tree/main/AI_REVIEW.md)
## Hackathon Details
- When: September 12-14, 2025 (64 hours)
- Prize Pool: $6,400 total
- Grand Prize: $2,000
- Excellence Award: $1,000
- Participation Prize: $50 for all qualified participants
- Participants: Limited to 70 people (first-come, first-served)
- Registration: Now through September 10, 2025
- Sign up: https://forms.gle/8meMGEgKHTiQTrCT7
## What You'll Do
Generate 2 backend applications using different AI models:
- `openai/gpt-4.1-mini`: Good for small-to-medium apps, but sometimes struggles with compilation errors
- `openai/gpt-4.1`: Achieves 100% build success rate for enterprise-grade applications
Bonus for LocalLLaMa enthusiasts: Try the optional `qwen3-235b-a22b` model to compare open-source vs commercial AI performance!
## Your Mission
- Use AutoBE to generate backend applications
- Write detailed technical reviews for each generated app
- Evaluate code quality, architecture, maintainability
- Share honest feedback about AI's capabilities and limitations
## Why Join?
- Free access to premium AI models (normally $300+ per app generation)
- Real impact: Your feedback will shape AutoBE's development
- Test the future: See if AI can truly match your backend skills
- Network with other experienced backend developers
## Requirements
- 1+ years of backend development experience
- English proficiency (all interactions with AI are in English)
- Ability to evaluate code quality and architecture
## The Twist
This isn't about AI writing perfect code - it's about understanding where AI excels and where human expertise remains irreplaceable. We want your honest, professional evaluation of AI-generated production code.
Ready to test if AI can replace some of your job? Register now and help us build the future of backend development! | 2025-09-05T10:29:50 | jhnam88 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n91lep | false | null | t3_1n91lep | /r/LocalLLaMA/comments/1n91lep/autobe_hackathon_ai_chatbot_generating_backend/ | false | false | 2 | {'enabled': True, 'images': [{'id': 'FcxN9PxqIbWQJdwVODhFtj09Gvy27a0z-0cP28KXCzI', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/qo8r24enpbnf1.png?width=108&crop=smart&auto=webp&s=60fd4189a0509cf2204d0a89c6abffbdf33977db', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/qo8r24enpbnf1.png?width=216&crop=smart&auto=webp&s=a16f37b5383ec54e6fb61652c89b5922d9781a11', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/qo8r24enpbnf1.png?width=320&crop=smart&auto=webp&s=b0625adc4fdad4f00b2824450270db3edc74f858', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/qo8r24enpbnf1.png?width=640&crop=smart&auto=webp&s=ab0d1e77c18e6673d87c4922d9b1660450867f64', 'width': 640}, {'height': 1280, 'url': 'https://preview.redd.it/qo8r24enpbnf1.png?width=960&crop=smart&auto=webp&s=14eea669a58aa0115644a4b0fa766a22ca27dcb5', 'width': 960}, {'height': 1440, 'url': 'https://preview.redd.it/qo8r24enpbnf1.png?width=1080&crop=smart&auto=webp&s=5ce4e0ff40ffbcf211c6afe9f13659fc460cf658', 'width': 1080}], 'source': {'height': 1440, 'url': 'https://preview.redd.it/qo8r24enpbnf1.png?auto=webp&s=037a50544551f0d41e8455acc1f445fb8eecc5d2', 'width': 1080}, 'variants': {}}]} | ||
Testing World Knowledge; and What Reasoning Does To It (regarding airliners, specifically) | 56 | More info in top comment. | 2025-09-05T10:28:23 | airbus_a360_when | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n91kiu | false | null | t3_1n91kiu | /r/LocalLLaMA/comments/1n91kiu/testing_world_knowledge_and_what_reasoning_does/ | false | false | 56 | {'enabled': True, 'images': [{'id': 'J0QtabmSlRqyGxohRvrkO653VatAJrfPgCr-t9vIf30', 'resolutions': [{'height': 67, 'url': 'https://preview.redd.it/6h652ykfpbnf1.png?width=108&crop=smart&auto=webp&s=9189c41efb5ebc7ff29ccf608b02873528521e28', 'width': 108}, {'height': 135, 'url': 'https://preview.redd.it/6h652ykfpbnf1.png?width=216&crop=smart&auto=webp&s=934e1d7ddfcf5c330b9fb2e052c83efb59ba5d55', 'width': 216}, {'height': 200, 'url': 'https://preview.redd.it/6h652ykfpbnf1.png?width=320&crop=smart&auto=webp&s=d8e517ccfdf6087b1b6378a3547e9f72bba7d7bb', 'width': 320}, {'height': 400, 'url': 'https://preview.redd.it/6h652ykfpbnf1.png?width=640&crop=smart&auto=webp&s=f741dacac81b4d1cd00ec2cbc6dfe4b990652691', 'width': 640}, {'height': 600, 'url': 'https://preview.redd.it/6h652ykfpbnf1.png?width=960&crop=smart&auto=webp&s=49d143150088180ca2cfdb47cdd39aee1cf10559', 'width': 960}, {'height': 675, 'url': 'https://preview.redd.it/6h652ykfpbnf1.png?width=1080&crop=smart&auto=webp&s=f11aa498e732447b0f20d5c20f28fe1fb5d9d7bc', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://preview.redd.it/6h652ykfpbnf1.png?auto=webp&s=09681a8f3513b3508c15cf1292e1874e7022ae27', 'width': 1920}, 'variants': {}}]} | ||
[AutoBE Hackathon] AI chatbot generating Backend Application with AI Compilers ($6,400 Prize Pool) | 1 | Wrtn Technologies is hosting the 1st AutoBE Hackathon to answer one burning question: Can AI truly make backend applications?
## What is AutoBE?
AutoBE is an AI-powered no-code platform that generates complete backend applications through natural language conversations.
It follows a 5-stage waterfall process (Requirements → Database Schema → API Design → Test Code → Implementation) with real-time compiler validation at each step.
The result? Backend applications built with TypeScript, NestJS, and Prisma that actually compile and run.
## Hackathon Details
- When: September 12-14, 2025 (64 hours)
- Prize Pool: $6,400 total
- Grand Prize: $2,000
- Excellence Award: $1,000
- Participation Prize: $50 for all qualified participants
- Participants: Limited to 70 people (first-come, first-served)
- Registration: Now through September 10, 2025
- Sign up: https://forms.gle/8meMGEgKHTiQTrCT7
## What You'll Do
Generate 2 backend applications using different AI models:
- `openai/gpt-4.1-mini`: Good for small-to-medium apps, but sometimes struggles with compilation errors
- `openai/gpt-4.1`: Achieves 100% build success rate for enterprise-grade applications
Bonus for LocalLLaMa enthusiasts: Try the optional `qwen3-235b-a22b` model to compare open-source vs commercial AI performance!
## Your Mission
- Use AutoBE to generate backend applications
- Write detailed technical reviews for each generated app
- Evaluate code quality, architecture, maintainability
- Share honest feedback about AI's capabilities and limitations
## Why Join?
- Free access to premium AI models (normally $300+ per app generation)
- Real impact: Your feedback will shape AutoBE's development
- Test the future: See if AI can truly match your backend skills
- Network with other experienced backend developers
## Requirements
- 1+ years of backend development experience
- English proficiency (all interactions with AI are in English)
- Ability to evaluate code quality and architecture
## The Twist
This isn't about AI writing perfect code - it's about understanding where AI excels and where human expertise remains irreplaceable. We want your honest, professional evaluation of AI-generated production code.
Ready to test if AI can replace some of your job? Register now and help us build the future of backend development!
Full details: https://autobe.dev/docs/hackathon | 2025-09-05T10:22:01 | https://autobe.dev/docs/hackathon/ | jhnam88 | autobe.dev | 1970-01-01T00:00:00 | 0 | {} | 1n91gl3 | false | null | t3_1n91gl3 | /r/LocalLLaMA/comments/1n91gl3/autobe_hackathon_ai_chatbot_generating_backend/ | false | false | default | 1 | null |
This is not funny...this is simply 1000000% correct | 1,888 | 2025-09-05T10:17:47 | theundertakeer | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n91dyg | false | null | t3_1n91dyg | /r/LocalLLaMA/comments/1n91dyg/this_is_not_funnythis_is_simply_1000000_correct/ | false | false | 1,888 | {'enabled': True, 'images': [{'id': 'cjYu20hrSUUAtXbHzE7O924h-Msg34HZIRU1eLK-dDA', 'resolutions': [{'height': 126, 'url': 'https://preview.redd.it/i3ej77konbnf1.jpeg?width=108&crop=smart&auto=webp&s=2b5ba2a015a9d1236f28f118324b6ad150504ffd', 'width': 108}, {'height': 252, 'url': 'https://preview.redd.it/i3ej77konbnf1.jpeg?width=216&crop=smart&auto=webp&s=eae5a210c5890a9299ddc75abe492602c53b661e', 'width': 216}, {'height': 373, 'url': 'https://preview.redd.it/i3ej77konbnf1.jpeg?width=320&crop=smart&auto=webp&s=2785a4734426b948b6f81e4e80fa917333d7f303', 'width': 320}, {'height': 746, 'url': 'https://preview.redd.it/i3ej77konbnf1.jpeg?width=640&crop=smart&auto=webp&s=f560fe73aec9b14c9c5ceec57df1580374cbc8fb', 'width': 640}], 'source': {'height': 1091, 'url': 'https://preview.redd.it/i3ej77konbnf1.jpeg?auto=webp&s=60e5c59389610730cac0b7c1ae8070dff7b68110', 'width': 935}, 'variants': {}}]} | |||
What is the best inference model you have tried at 64gb VRAM and 128gb VRAM? | 4 | I'm using the model to ingest and understand large amounts of technical data. I want it to make well-reasoned decisions quickly.
I've been testing with 32gb VRAM up to this point, but I'm migrating to new servers and want to upgrade the model.
Eager to hear impressions from the community. | 2025-09-05T10:00:45 | https://www.reddit.com/r/LocalLLaMA/comments/1n913iq/what_is_the_best_inference_model_you_have_tried/ | seoulsrvr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n913iq | false | null | t3_1n913iq | /r/LocalLLaMA/comments/1n913iq/what_is_the_best_inference_model_you_have_tried/ | false | false | self | 4 | null |
Where is theBloke? | 95 | Haven’t seen any posts related to this legend in a while. Where is he? Is he okay? | 2025-09-05T09:55:49 | https://www.reddit.com/r/LocalLLaMA/comments/1n910t9/where_is_thebloke/ | holistic-engine | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n910t9 | false | null | t3_1n910t9 | /r/LocalLLaMA/comments/1n910t9/where_is_thebloke/ | false | false | self | 95 | null |
why no more 50B or 70B open models? | 1 | [removed] | 2025-09-05T09:53:40 | https://www.reddit.com/r/LocalLLaMA/comments/1n90zlt/why_no_more_50b_or_70b_open_models/ | One_Archer_577 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n90zlt | false | null | t3_1n90zlt | /r/LocalLLaMA/comments/1n90zlt/why_no_more_50b_or_70b_open_models/ | false | false | self | 1 | null |
please share displaymodeselector-tool for Linux | 0 | subj. I'm not going to beg for a "developer account" to get access to downloads. | 2025-09-05T09:49:06 | https://www.reddit.com/r/LocalLLaMA/comments/1n90wyu/please_share_displaymodeselectortool_for_linux/ | MelodicRecognition7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n90wyu | false | null | t3_1n90wyu | /r/LocalLLaMA/comments/1n90wyu/please_share_displaymodeselectortool_for_linux/ | false | false | self | 0 | null |
I am working on a local transcription and summarization solution for our medical clinic | 4 | I am a medical doctor who has been using LLMs for writing medical reports (I delete PII beforehand), but I still feel uncomfortable providing sensitive information to closed-source models. Therefore, I have been working with local models for data security and control.
My boss asked me to develop a solution for our department. Here are the details of my current setup:
* **Server**: GPU server from a European hoster (first month free)
* **Specs**: 4 vCPUs, 26 GB RAM, 16 GB RTX A4000
* **Application**:
* Whisper Turbo for capturing audio from consultations and department meetings
* Gemma3:12b for summarization, using ollama as the inference engine
* **Models Tested**: gpt-oss 20b (very slow), Gemma3:27b (also slow). I got the fastest results with Gemma3:12b
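For anyone curious, the transcribe-then-summarize flow above is only a few lines. A minimal sketch, assuming the `openai-whisper` package and Ollama's default REST endpoint on port 11434 (the file name and prompt are placeholders):

import requests
import whisper  # pip install openai-whisper ("turbo" needs a recent release)

# 1) transcribe the consultation recording
stt = whisper.load_model("turbo")
transcript = stt.transcribe("consultation.wav")["text"]

# 2) summarize with Gemma3:12b via Ollama's REST API
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "gemma3:12b",
        "prompt": f"Summarize this medical consultation:\n\n{transcript}",
        "stream": False,
    },
    timeout=600,
)
print(resp.json()["response"])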
If it’s successful, we aim to extend this service first to our department (10 doctors) and later to the clinic (up to 100 users, including secretaries and other doctors). My boss mentioned the possibility of extending it to our clinic chain, which has a total of 8 clinics.
The server costs about **$250 USD** per month, and there are other providers starting at **$350 USD** per month with better GPUs, CPUs, and more RAM.
* What’s the best setup to handle 10 and later 100 users?
* Does it make sense to own the hardware, or is it more convenient to rent it?
* Have any of you faced challenges with similar setups? What solutions worked for you?
* I’ve read that vLLM is more performance-focused. Does changing the engine provide better results?
Thanks for reading and your feedback!
Martin
P.S.: Ollama takes up 9.5GB of GPU and 60% of memory, Whisper 5.6GB and 27% (based on nvtop info)
| 2025-09-05T09:47:41 | https://www.reddit.com/r/LocalLLaMA/comments/1n90w6p/i_am_working_on_a_local_transcription_and/ | Glittering_Way_303 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n90w6p | false | null | t3_1n90w6p | /r/LocalLLaMA/comments/1n90w6p/i_am_working_on_a_local_transcription_and/ | false | false | self | 4 | null |
VibeVoice API and integrated backend | 5 | VibeVoice API and integrated backend
This is a single Docker Image with VibeVoice packaged and ready to work, and an API layer to wire it into your application.
[https://hub.docker.com/r/eworkerinc/vibevoice](https://hub.docker.com/r/eworkerinc/vibevoice)
This image is the backend for E-Worker Soundstage (our UI implementation for VibeVoice), but it can be used by any other application.
The API is as simple as this:
cat > body.json <<'JSON'
{
"model": "vibevoice-1.5b",
"script": "Speaker 1: Hello there!\nSpeaker 2: Hi! Great to meet you.",
"speakers": [ { "voiceName": "Alice" }, { "voiceName": "Carter" } ],
"overrides": {
"guidance": { "inference_steps": 28, "cfg_scale": 4.5 }
}
}
JSON
JOB_ID=$(curl -s -X POST http://localhost:8745/v1/voice/jobs \
-H "Content-Type: application/json" -H "X-API-Key: $KEY" \
--data-binary @body.json | jq -r .job_id)
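# the job runs asynchronously; fetch the result once generation finishes
# (assumption: if the job is still in progress, the result endpoint may not
#  contain audio yet, so you may need to poll it)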
curl -s "http://localhost:8745/v1/voice/jobs/$JOB_ID/result" -H "X-API-Key: $KEY" \
| jq -r .audio_wav_base64 | base64 --decode > out.wav | 2025-09-05T09:33:44 | https://www.reddit.com/r/LocalLLaMA/comments/1n90o31/vibevoice_api_and_integrated_backend/ | Working-Magician-823 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n90o31 | false | null | t3_1n90o31 | /r/LocalLLaMA/comments/1n90o31/vibevoice_api_and_integrated_backend/ | false | false | self | 5 | null |
Even Kimi k2 0905 can't solve 5.9-5.11=? | 0 | 2025-09-05T09:25:23 | https://ibb.co/DH9TYW6X | JeffreySons_90 | ibb.co | 1970-01-01T00:00:00 | 0 | {} | 1n90ja6 | false | null | t3_1n90ja6 | /r/LocalLLaMA/comments/1n90ja6/even_kimi_k2_0905_cant_solve_59511/ | false | false | default | 0 | null | |
Practical walkthrough: Exploring Environments Hub | 1 | 📝 [https://huggingface.co/blog/anakin87/environments-hub](https://huggingface.co/blog/anakin87/environments-hub)
Hey, have you checked out Prime Intellect's **Environments Hub**?
It's a space where people share RL environments: tasks you can use to train LLMs with RL (GRPO-style) or evaluate Agents.
Environments are software packages that define a task: data, harness and scoring rules.
Basically everything you need to run evaluations or train a model on a task.
\---
**I explored the Environments Hub and wrote a walkthrough**
* RL + LLMs basics
* Environments Hub navigation
* Evaluating models/Agents
* GRPO Training a tiny model on an alphabetical sort task
Take a look!
📝 [https://huggingface.co/blog/anakin87/environments-hub](https://huggingface.co/blog/anakin87/environments-hub) | 2025-09-05T09:07:47 | anakin_87 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n9098e | false | null | t3_1n9098e | /r/LocalLLaMA/comments/1n9098e/practical_walkthrough_exploring_environments_hub/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'kH3Qea96VkdawkFil91rXbWG8tMTSwWMuJNgxnyK0Vs', 'resolutions': [{'height': 94, 'url': 'https://preview.redd.it/5de3pxrnabnf1.png?width=108&crop=smart&auto=webp&s=c643e90893425e8709edb0231358094a53ed2f6a', 'width': 108}, {'height': 188, 'url': 'https://preview.redd.it/5de3pxrnabnf1.png?width=216&crop=smart&auto=webp&s=a899734f9a6cfc74d0cdda08b6633f10b37889ac', 'width': 216}, {'height': 279, 'url': 'https://preview.redd.it/5de3pxrnabnf1.png?width=320&crop=smart&auto=webp&s=124f8052f3613473e28336584873dc0edc2bc4d8', 'width': 320}, {'height': 559, 'url': 'https://preview.redd.it/5de3pxrnabnf1.png?width=640&crop=smart&auto=webp&s=8d95e72c0cdbbeb20620f388aa66956f9cef8c1a', 'width': 640}, {'height': 838, 'url': 'https://preview.redd.it/5de3pxrnabnf1.png?width=960&crop=smart&auto=webp&s=46f7ad3048d35e2f92efebf4f5aa910cb47a0fa2', 'width': 960}, {'height': 943, 'url': 'https://preview.redd.it/5de3pxrnabnf1.png?width=1080&crop=smart&auto=webp&s=5e76a5c06fa90e485770861378157b0e05abd241', 'width': 1080}], 'source': {'height': 1462, 'url': 'https://preview.redd.it/5de3pxrnabnf1.png?auto=webp&s=3e9f3c14eead985f84c58165520e2e1eb7ef5887', 'width': 1673}, 'variants': {}}]} | ||
What do you call a rabbit hole when a bigger creature moves in | 0 | [Conscious φ Singularity - Living Lattice Animation | Claude | Claude](https://claude.ai/public/artifacts/f618f706-a9d8-4e98-bfd2-6cbbc158f179)
[Mapping 0 Data - An Architecture Forged Through Experience](https://m0d.ai/) | 2025-09-05T09:02:53 | https://www.reddit.com/r/LocalLLaMA/comments/1n906eg/what_do_you_call_a_rabbit_hole_when_a_bigger/ | Jean_M_Naard | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n906eg | false | null | t3_1n906eg | /r/LocalLLaMA/comments/1n906eg/what_do_you_call_a_rabbit_hole_when_a_bigger/ | false | false | self | 0 | null |
Title: Is Anthropic’s new restriction really about national security, or just protecting market share? | 0 | I’m confused by Anthropic’s latest blog post:
>
Is this really about national security, or is it also about corporate self-interest?
* A lot of models coming out of Chinese labs are open-source or released with open weights (DeepSeek-R1, Qwen series), which has clearly accelerated accessibility and democratization of AI. That makes me wonder if Anthropic’s move is less about “safety” and more about limiting potential competitors.
* On OpenRouter’s leaderboard, Qwen and DeepSeek are climbing fast, and I’ve seen posts about people experimenting with proxy layers to indirectly call third-party models from within Claude Code. Could this policy be a way for Anthropic to justify blocking that kind of access—protecting its market share and pricing power, especially in coding assistants?
Given Dario Amodei’s past comments on export controls and national security, and Anthropic’s recent consumer terms update (“users must now choose whether to allow training on their data; if they opt in, data may be retained for up to five years”), I can’t help but feel the company is drifting from its founding ethos. Under the banner of “safety and compliance,” it looks like they’re moving toward a more rigid and closed path.
Curious what others here think: do you see this primarily as a national security measure, or a competitive/economic strategy?
full post and pics: [https://x.com/LuozhuZhang/status/1963884496966889669](https://x.com/LuozhuZhang/status/1963884496966889669) | 2025-09-05T08:56:26 | LuozhuZhang | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n902s8 | false | null | t3_1n902s8 | /r/LocalLLaMA/comments/1n902s8/title_is_anthropics_new_restriction_really_about/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'z5l2dbv19bnf1', 'resolutions': [{'height': 92, 'url': 'https://preview.redd.it/z5l2dbv19bnf1.png?width=108&crop=smart&auto=webp&s=a35c34bf97924f9d9eabf19eb20e748f8b5a21f7', 'width': 108}, {'height': 185, 'url': 'https://preview.redd.it/z5l2dbv19bnf1.png?width=216&crop=smart&auto=webp&s=5fbbf4e27a0db864bb759745125103d25154263b', 'width': 216}, {'height': 274, 'url': 'https://preview.redd.it/z5l2dbv19bnf1.png?width=320&crop=smart&auto=webp&s=acf2d6a6e909e99efb2084382a4d3f7927a40fd7', 'width': 320}, {'height': 549, 'url': 'https://preview.redd.it/z5l2dbv19bnf1.png?width=640&crop=smart&auto=webp&s=c42b0524e4b8c5116792a1ab0a643c0a1c46274b', 'width': 640}, {'height': 824, 'url': 'https://preview.redd.it/z5l2dbv19bnf1.png?width=960&crop=smart&auto=webp&s=011f59089e5b14cc8815d8bc0ec0a2f6335b66be', 'width': 960}, {'height': 927, 'url': 'https://preview.redd.it/z5l2dbv19bnf1.png?width=1080&crop=smart&auto=webp&s=026f7219aefc0d6c38a7001be80cc485dd3984a3', 'width': 1080}], 'source': {'height': 1072, 'url': 'https://preview.redd.it/z5l2dbv19bnf1.png?auto=webp&s=f588927773f706769abcdcf197367bab76a03604', 'width': 1248}, 'variants': {}}]} | |
Title: Is Anthropic’s new restriction really about national security, or just protecting market share? | 1 | I’m confused by Anthropic’s latest blog post:
>
Is this really about national security, or is it also about corporate self-interest?
* A lot of models coming out of Chinese labs are open-source or released with open weights (DeepSeek-R1, Qwen series), which has clearly accelerated accessibility and democratization of AI. That makes me wonder if Anthropic’s move is less about “safety” and more about limiting potential competitors.
* On OpenRouter’s leaderboard, Qwen and DeepSeek are climbing fast, and I’ve seen posts about people experimenting with proxy layers to indirectly call third-party models from within Claude Code. Could this policy be a way for Anthropic to justify blocking that kind of access—protecting its market share and pricing power, especially in coding assistants?
Given Dario Amodei’s past comments on export controls and national security, and Anthropic’s recent consumer terms update (“users must now choose whether to allow training on their data; if they opt in, data may be retained for up to five years”), I can’t help but feel the company is drifting from its founding ethos. Under the banner of “safety and compliance,” it looks like they’re moving toward a more rigid and closed path.
Curious what others here think: do you see this primarily as a national security measure, or a competitive/economic strategy? | 2025-09-05T08:55:07 | https://www.reddit.com/r/LocalLLaMA/comments/1n9021j/title_is_anthropics_new_restriction_really_about/ | LuozhuZhang | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n9021j | false | null | t3_1n9021j | /r/LocalLLaMA/comments/1n9021j/title_is_anthropics_new_restriction_really_about/ | false | false | self | 1 | null |
Is Anthropic’s new restriction really about national security, or just protecting market share? | 1 | [deleted] | 2025-09-05T08:54:32 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1n901pp | false | null | t3_1n901pp | /r/LocalLLaMA/comments/1n901pp/is_anthropics_new_restriction_really_about/ | false | false | default | 1 | null | ||
VibeVoice RIP? Not with this Community!!! | 85 | **VibeVoice Large is back!** No thanks to Microsoft though, still silence on their end.
This is in response to u/Fabix84 post [here](https://www.reddit.com/r/LocalLLaMA/comments/1n7zk45/vibevoice_rip_what_do_you_think/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button), who has done great work on providing VibeVoice support for ComfyUI.
In an odd series of events, Microsoft pulled the repo and any trace of the Large VibeVoice models on all platforms. No comments, nothing. The 1.5B is now part of the official HF Transformers library, but Large (7B) is only available through various mirrors provided by the community.
Oddly enough, I only see a marginal difference between the two, with the 1.5B being incredibly good for single- and multi-speaker use. I have my space back up and running here if interested. I'll run it on an L4 until I can move it over to Modal for inference. The 120-second time limit for ZeroGPU makes it a bit unusable for clips over 1-2 minutes.
Microsoft specifically states in the model card that they did not clean the training audio, which is why you get music artifacts. This can be pretty cool, but I found it's so unpredictable that it can cause artifacts or noise to persist throughout the entire generation. I've found you're better off just adding a sound effect after generation so that you can control it. This model is really meant for long-form multi-speaker conversation, which I think it does well at. I did test various other voices with mixed results.
For the small difference in quality, I would personally just use the 1.5B. I use my space to generate "conferences" to test other STT models with transcription and captions. I am excited for the pending streaming model they have noted... though I won't keep my hopes up too much.
For those interested in it or just need to reference the larger model here is my space, though there are many good ones still running.
[Conference Generator VibeVoice](https://huggingface.co/spaces/ACloudCenter/Conference-Generator-VibeVoice) | 2025-09-05T08:27:25 | Cipher_Lock_20 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n8zn89 | false | null | t3_1n8zn89 | /r/LocalLLaMA/comments/1n8zn89/vibevoice_rip_not_with_this_community/ | false | false | 85 | {'enabled': True, 'images': [{'id': 'hgVKiVl-l2WO8E9tp6PNRoWxLwcUDtLFMo18RU4WKZA', 'resolutions': [{'height': 192, 'url': 'https://preview.redd.it/vnwua2k0zanf1.png?width=108&crop=smart&auto=webp&s=fe1d0194f96e5f3ef7d962396a054030dbb06c55', 'width': 108}, {'height': 384, 'url': 'https://preview.redd.it/vnwua2k0zanf1.png?width=216&crop=smart&auto=webp&s=4db74a8d027bf2893a8f5644ec5d083a016f1e31', 'width': 216}, {'height': 568, 'url': 'https://preview.redd.it/vnwua2k0zanf1.png?width=320&crop=smart&auto=webp&s=afdc5675965252dbcc5559448f4b27848fed4c5a', 'width': 320}, {'height': 1137, 'url': 'https://preview.redd.it/vnwua2k0zanf1.png?width=640&crop=smart&auto=webp&s=560d3e188bc6c42ff9082af9f34312ed7bbf964e', 'width': 640}], 'source': {'height': 1200, 'url': 'https://preview.redd.it/vnwua2k0zanf1.png?auto=webp&s=4881b687e4131afa046854af05e26f9f46840b3e', 'width': 675}, 'variants': {}}]} | ||
Where can I download VibeVoice-Large (9B) now that Microsoft deleted it? | 3 | Hi all,
I’m trying to get **VibeVoice-Large (the \~9B parameter version)** running locally. I know Microsoft deleted it from GitHub and HuggingFace, but I’ve seen that some people are still running it.
👉 My goals:
* Download the **exact model weights** for VibeVoice-Large (not 1.5B, I want the biggest one).
* Run it either in its **original WebUI (Gradio)** or just directly from the command line.
* I **don’t want ComfyUI or wrappers**, just the plain WebUI or CLI method.
Does anyone know where I can still download the 9B model files (maybe ModelScope or a mirror), and if there’s a repo that still has the WebUI code intact?
Thanks in advance 🙏 | 2025-09-05T08:18:31 | https://www.reddit.com/r/LocalLLaMA/comments/1n8zi98/where_can_i_download_vibevoicelarge_9b_now_that/ | Forsaken-Turnip-6664 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n8zi98 | false | null | t3_1n8zi98 | /r/LocalLLaMA/comments/1n8zi98/where_can_i_download_vibevoicelarge_9b_now_that/ | false | false | self | 3 | null |
Top 10 Vector Databases for RAG Applications | 0 | 2025-09-05T07:59:24 | https://www.blog.qualitypointtech.com/2025/09/top-10-vector-databases-for-rag.html | qptbook | blog.qualitypointtech.com | 1970-01-01T00:00:00 | 0 | {} | 1n8z804 | false | null | t3_1n8z804 | /r/LocalLLaMA/comments/1n8z804/top_10_vector_databases_for_rag_applications/ | false | false | 0 | {'enabled': False, 'images': [{'id': '0LprDXgkEyi9SAorwIHQLLxvMJiPt-bKxepfY8-J8P8', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/0LprDXgkEyi9SAorwIHQLLxvMJiPt-bKxepfY8-J8P8.jpeg?width=108&crop=smart&auto=webp&s=1b37215c771cbffd09dc7b9dbf159889fbd350c9', 'width': 108}, {'height': 143, 'url': 'https://external-preview.redd.it/0LprDXgkEyi9SAorwIHQLLxvMJiPt-bKxepfY8-J8P8.jpeg?width=216&crop=smart&auto=webp&s=b23882b5d73cfad8785bb49e779bac88a7522214', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/0LprDXgkEyi9SAorwIHQLLxvMJiPt-bKxepfY8-J8P8.jpeg?width=320&crop=smart&auto=webp&s=1412843bd924af63224cba4f4984c910f9634180', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/0LprDXgkEyi9SAorwIHQLLxvMJiPt-bKxepfY8-J8P8.jpeg?width=640&crop=smart&auto=webp&s=3ed8b9b3bc9571486f9ac92db778ff1352bf4f9d', 'width': 640}], 'source': {'height': 533, 'url': 'https://external-preview.redd.it/0LprDXgkEyi9SAorwIHQLLxvMJiPt-bKxepfY8-J8P8.jpeg?auto=webp&s=c0162b2b629a424d1196856f9afbf368ca23a433', 'width': 800}, 'variants': {}}]} | ||
Tried Examsprint with Llama-based Q&A — surprisingly handy! | 1 | [removed] | 2025-09-05T07:46:23 | bad_user_dev | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n8z12b | false | null | t3_1n8z12b | /r/LocalLLaMA/comments/1n8z12b/tried_examsprint_with_llamabased_qa_surprisingly/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': '0j73tu7owanf1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/0j73tu7owanf1.jpeg?width=108&crop=smart&auto=webp&s=e84e2fa73073bd6ee7f9cff42bc15d56984b2924', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/0j73tu7owanf1.jpeg?width=216&crop=smart&auto=webp&s=beb96bdf6e089cbbf6e1bfc7abfe74095a13d881', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/0j73tu7owanf1.jpeg?width=320&crop=smart&auto=webp&s=fd156747473f1cc96671b44423ce6e6930998753', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/0j73tu7owanf1.jpeg?width=640&crop=smart&auto=webp&s=6fedd6ce1efb15e9578c630493e5baccba0ec04c', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/0j73tu7owanf1.jpeg?width=960&crop=smart&auto=webp&s=33cfdbba993af0537265e5abcc81b9c670f70012', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/0j73tu7owanf1.jpeg?width=1080&crop=smart&auto=webp&s=dc0d7cad198d926c3a0742465cfda4ce8e722ec8', 'width': 1080}], 'source': {'height': 2400, 'url': 'https://preview.redd.it/0j73tu7owanf1.jpeg?auto=webp&s=f5579eaaaa66394f14af5020567cd87afa227452', 'width': 1080}, 'variants': {}}]} | |
Agentic AI feels like a new teammate in dev work. Anyone else seeing this? | 0 | I have been trying some of these new agentic AI tools that don’t just suggest code but actually plan, write, and test parts of it on their own.
What stood out to me is how it changes the way our team works. Junior devs are not stuck on boilerplate anymore; they review what the AI writes. Seniors spend more time guiding and fixing instead of coding every line themselves.
Honestly, it feels like we added a new teammate who works super fast but sometimes makes odd mistakes.
Do you think this is where software development is heading with us acting more like reviewers and architects than coders? Or is this just hype that will fade out? | 2025-09-05T07:42:55 | https://www.reddit.com/r/LocalLLaMA/comments/1n8yz7s/agentic_ai_feels_like_a_new_teammate_in_dev_work/ | Alone_Course_2660 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n8yz7s | false | null | t3_1n8yz7s | /r/LocalLLaMA/comments/1n8yz7s/agentic_ai_feels_like_a_new_teammate_in_dev_work/ | false | false | self | 0 | null |
Claude code level local llm | 3 | Hey guys I have been a local llm guy to the bone, love the stuff, I mean my system has 144gb of vram with 3x 48gb pro GPUs. However, when using clause and claude code recently at the $200 level, I notice I have not seen anything like it yet with local action,
I would be more than willing to aim to upgrade my system, but I need to know:
A) Is there anything at Claude/Claude Code level among current releases?
B) Will there be in the future?
And C) while we're at it, the same question for ChatGPT Agent.
If it were not for these three things, I would be doing everything locally...
For PHILOSOPHY, which one wins: GPT-5 or Gemini 2.5 Pro? | 0 | I like to ask LLMs philosophical questions or questions about history of philosophy. Which one is better for you right now? I used to think Gemini was smarter, but GPT-5 made me doubt because it gives lengthier, wiser answers. | 2025-09-05T07:31:24 | Upbeat-Impact-6617 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n8yt9n | false | null | t3_1n8yt9n | /r/LocalLLaMA/comments/1n8yt9n/for_philosophy_which_one_wins_gpt5_or_gemini_25/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '1hj8eozztanf1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/1hj8eozztanf1.jpeg?width=108&crop=smart&auto=webp&s=7ed4835a37d4a1bc4ed14326212dc0dbfc20e331', 'width': 108}, {'height': 120, 'url': 'https://preview.redd.it/1hj8eozztanf1.jpeg?width=216&crop=smart&auto=webp&s=b02c5b2071dbdb3de24fad98caf4a91ce10dcfcf', 'width': 216}], 'source': {'height': 168, 'url': 'https://preview.redd.it/1hj8eozztanf1.jpeg?auto=webp&s=3e14462ffbe7b83fd8e09b6a40f54b3c4c75ca61', 'width': 300}, 'variants': {}}]} | |
Could English be making LLMs more expensive to train? | 0 | What if part of the reason bilingual models like DeepSeek (trained on Chinese + English) are cheaper to train than English-heavy models like GPT is because English itself is just harder for models to learn efficiently?
Here’s what I mean, and I’m curious if anyone has studied this directly:
English is irregular. Spelling/pronunciation don’t line up (“though,” “tough,” “through”). Idioms like “spill the beans” are context-only. This adds noise for a model to decode.
Token inefficiency. In English, long words often get split into multiple subword tokens (“unbelievable” → un / believ / able), while Chinese characters often carry full semantic meaning and stay as single tokens. Fewer tokens = less compute.
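A quick way to sanity-check the token-count claim yourself. A minimal sketch using tiktoken's cl100k_base encoding (the tokenizer choice and example sentences are mine; counts vary a lot between tokenizers):

import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")

english = "The weather is unbelievable today."
chinese = "今天的天气难以置信。"  # rough equivalent of the same sentence

# print token counts side by side; short English words are often one token
# each, while a Chinese character may be one token or several BPE bytes,
# so the comparison can cut either way depending on the tokenizer
print(len(enc.encode(english)), enc.encode(english))
print(len(enc.encode(chinese)), enc.encode(chinese))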
Semantic ambiguity. English words have tons of meanings; “set” has over 400 definitions. That likely adds more training overhead.
Messy internet data. English corpora (Reddit, Twitter, forums) are massive but chaotic. Some Chinese models might be trained on more curated or uniform sources, which might be easier for an LLM to digest.
So maybe it’s not just about hardware, model architecture, or training tricks, maybe the language itself influences how expensive training becomes?
Not claiming to be an expert, just curious. Would love to hear thoughts from anyone working on multilingual LLMs or tokenization. | 2025-09-05T07:13:34 | https://www.reddit.com/r/LocalLLaMA/comments/1n8yjid/could_english_be_making_llms_more_expensive_to/ | Puzzled-Ad-1939 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n8yjid | false | null | t3_1n8yjid | /r/LocalLLaMA/comments/1n8yjid/could_english_be_making_llms_more_expensive_to/ | false | false | self | 0 | null |
Just had a thought, could English be hurting LLM training efficiency? | 1 | [removed] | 2025-09-05T07:11:00 | https://www.reddit.com/r/LocalLLaMA/comments/1n8yi66/just_had_a_thought_could_english_be_hurting_llm/ | NotoAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n8yi66 | false | null | t3_1n8yi66 | /r/LocalLLaMA/comments/1n8yi66/just_had_a_thought_could_english_be_hurting_llm/ | false | false | self | 1 | null |
Why does Qwen have trouble understanding online sources? | 1 | Qwen struggles to understand online articles, even when dates are right there. Sometimes the article implies the date from its context. For example:
>*President Trump on* ***Friday*** *filed a libel lawsuit...*
[Source - CBS News](https://www.cbsnews.com/news/trump-lawsuit-wall-street-journal-jeffrey-epstein-birthday-letter/) \- Published on July 19, 2025. Lawsuit filed **July 18, 2025**
It seems like Qwen relies heavily on its training data rather than outside information, such as the search tool. When Qwen thinks, it gets close but then loses it. Qwen isn't the only open-source model that has this problem with search; I've noticed that GPT-OSS 120b provides the dates and sources correctly through its searches. I'm curious why Qwen and some other open-source models struggle with this.
| 2025-09-05T07:08:55 | https://www.reddit.com/gallery/1n8yh0x | Fresh_Sun_1017 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1n8yh0x | false | null | t3_1n8yh0x | /r/LocalLLaMA/comments/1n8yh0x/why_does_qwen_have_trouble_understanding_online/ | false | false | 1 | null | |
Anyone tried Kimi-K2-Instruct-0905 | 51 | 2025-09-05T06:11:26 | https://www.reddit.com/r/LocalLLaMA/comments/1n8xjyz/anyone_tried_kimik2instruct0905/ | Trilogix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n8xjyz | false | null | t3_1n8xjyz | /r/LocalLLaMA/comments/1n8xjyz/anyone_tried_kimik2instruct0905/ | false | false | 51 | {'enabled': False, 'images': [{'id': 'mXYZrSpWdi_J0TO3KhxDDA-lUjaP-Q9xljCeZ69GF1k', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/mXYZrSpWdi_J0TO3KhxDDA-lUjaP-Q9xljCeZ69GF1k.png?width=108&crop=smart&auto=webp&s=7fbcd9f99cdbff4d896ea5c9cf494898c818d37b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/mXYZrSpWdi_J0TO3KhxDDA-lUjaP-Q9xljCeZ69GF1k.png?width=216&crop=smart&auto=webp&s=b3867a115d87d89539da9a96510c5d74edfc1f6d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/mXYZrSpWdi_J0TO3KhxDDA-lUjaP-Q9xljCeZ69GF1k.png?width=320&crop=smart&auto=webp&s=dd8c03a6905408b75db64d43f6ceeb0c2080d769', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/mXYZrSpWdi_J0TO3KhxDDA-lUjaP-Q9xljCeZ69GF1k.png?width=640&crop=smart&auto=webp&s=9e333f9a0c4615a8127b7ccd9a8cb99907d4fd0c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/mXYZrSpWdi_J0TO3KhxDDA-lUjaP-Q9xljCeZ69GF1k.png?width=960&crop=smart&auto=webp&s=ed572f042597bb1a43cee493e3afd139f6de510b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/mXYZrSpWdi_J0TO3KhxDDA-lUjaP-Q9xljCeZ69GF1k.png?width=1080&crop=smart&auto=webp&s=50618159da0b0ec42ff49603aeec17a5ce85543d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/mXYZrSpWdi_J0TO3KhxDDA-lUjaP-Q9xljCeZ69GF1k.png?auto=webp&s=804ecf6aebc7e05acb25cf13c697fe45cccee370', 'width': 1200}, 'variants': {}}]} | ||
Any good resources on model architectures like Nano Banana (gemini), or image+text models? | 2 | I’ve been trying to wrap my head around how some of these newer models are built, like Nano banana, or any image generation models that can take both text and image as input. I’m curious about the actual architecture behind them, how they’re designed, what components they use, and how they manage to combine multiple modalities.
Does anyone know of good resources (articles, blogs, or even YouTube videos) that explain these types of models? | 2025-09-05T05:40:19 | https://www.reddit.com/r/LocalLLaMA/comments/1n8x1gv/any_good_resources_on_model_architectures_like/ | DataScientia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n8x1gv | false | null | t3_1n8x1gv | /r/LocalLLaMA/comments/1n8x1gv/any_good_resources_on_model_architectures_like/ | false | false | self | 2 | null |
I've made some fun demos using the new kimi-k2-0905 | 169 | They were all created with a single-pass, AI-generated prompt using both claude-code and kimi-k2-0905. | 2025-09-05T05:35:25 | https://v.redd.it/wavkswkz7anf1 | Dr_Karminski | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n8wyla | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/wavkswkz7anf1/DASHPlaylist.mpd?a=1759642544%2CYjY0MjY5MWE0Yzc1YzBiYjUyMGY0ZGRkMTEyMzdlNDEzYWFjYTU2OTc1MTlkNTg4MTRiNzJmZWZhMDE3MzNmMg%3D%3D&v=1&f=sd', 'duration': 29, 'fallback_url': 'https://v.redd.it/wavkswkz7anf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/wavkswkz7anf1/HLSPlaylist.m3u8?a=1759642544%2CMTRhNTIxYzlhZWE5NGFhNDAwMmI4MTY4MGFhNDA5OGFkNDc1YmVhY2Q5NzBiOWM4MjhhNDYwODY5OWQxMjM3Mw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/wavkswkz7anf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1n8wyla | /r/LocalLLaMA/comments/1n8wyla/ive_made_some_fun_demos_using_the_new_kimik20905/ | false | false | 169 | {'enabled': False, 'images': [{'id': 'aGZ4NjJ2a3o3YW5mMQ6wDUEz-v_Nzg5h_KpwfXxI3dQiiTxqUDt15pQk26OB', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/aGZ4NjJ2a3o3YW5mMQ6wDUEz-v_Nzg5h_KpwfXxI3dQiiTxqUDt15pQk26OB.png?width=108&crop=smart&format=pjpg&auto=webp&s=32645d021c077ab6d245e131aa95332db1bdcfcb', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/aGZ4NjJ2a3o3YW5mMQ6wDUEz-v_Nzg5h_KpwfXxI3dQiiTxqUDt15pQk26OB.png?width=216&crop=smart&format=pjpg&auto=webp&s=224279359a4c9c92cac6badcda893cda9b61f359', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/aGZ4NjJ2a3o3YW5mMQ6wDUEz-v_Nzg5h_KpwfXxI3dQiiTxqUDt15pQk26OB.png?width=320&crop=smart&format=pjpg&auto=webp&s=9526ae960562cd90af56da2f6368170a58a0e3c7', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/aGZ4NjJ2a3o3YW5mMQ6wDUEz-v_Nzg5h_KpwfXxI3dQiiTxqUDt15pQk26OB.png?width=640&crop=smart&format=pjpg&auto=webp&s=4971acb8364c81606d3f4eb2d5ab03b3f0fd064a', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/aGZ4NjJ2a3o3YW5mMQ6wDUEz-v_Nzg5h_KpwfXxI3dQiiTxqUDt15pQk26OB.png?width=960&crop=smart&format=pjpg&auto=webp&s=5e3dea6f42610ab8d9cdd9cdfa5e56ae847e42d1', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/aGZ4NjJ2a3o3YW5mMQ6wDUEz-v_Nzg5h_KpwfXxI3dQiiTxqUDt15pQk26OB.png?width=1080&crop=smart&format=pjpg&auto=webp&s=61a4d41fdedcde88238825d38be08db9bf2ef78f', 'width': 1080}], 'source': {'height': 1440, 'url': 'https://external-preview.redd.it/aGZ4NjJ2a3o3YW5mMQ6wDUEz-v_Nzg5h_KpwfXxI3dQiiTxqUDt15pQk26OB.png?format=pjpg&auto=webp&s=e1a55132d73005c7ee3e3a45f5bdd46152c61646', 'width': 2560}, 'variants': {}}]} | |
Open WebUI mock up - tacky or cool? | 4 | Vibe coded this, taking major inspiration from Grok’s ui. I would be very happy to see this every day in my chats. Open WebUI team, any thoughts? | 2025-09-05T05:04:38 | https://v.redd.it/4uborq3s3anf1 | 3VITAERC | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n8wfrq | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/4uborq3s3anf1/DASHPlaylist.mpd?a=1759640694%2CODE4OGU1MWYyMzU3MTE1NmJiZDEyNmRlNjVkNTE4N2RmNTM1NjBiMzA4ZWE4MTdkNmYyMTNhODU1Zjk5NzM1Ng%3D%3D&v=1&f=sd', 'duration': 21, 'fallback_url': 'https://v.redd.it/4uborq3s3anf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/4uborq3s3anf1/HLSPlaylist.m3u8?a=1759640694%2CZDkyZThkYWRkOWYzN2JkMTk0ODQxZmQ2YjliZjk0M2ViOTFkZmJiMjNjNDMzMjY1ZjA0NjY2MmFkNzFjZTdhZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/4uborq3s3anf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1n8wfrq | /r/LocalLLaMA/comments/1n8wfrq/open_webui_mock_up_tacky_or_cool/ | false | false | 4 | {'enabled': False, 'images': [{'id': 'YzAzanhzZXIzYW5mMe30UDaPjnOrwC11UuFdT4jEYN-aYkBgfpq_eW4X6lzV', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/YzAzanhzZXIzYW5mMe30UDaPjnOrwC11UuFdT4jEYN-aYkBgfpq_eW4X6lzV.png?width=108&crop=smart&format=pjpg&auto=webp&s=54997666532757ace24633c5932cfac2a0941267', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/YzAzanhzZXIzYW5mMe30UDaPjnOrwC11UuFdT4jEYN-aYkBgfpq_eW4X6lzV.png?width=216&crop=smart&format=pjpg&auto=webp&s=f2d4763b2dc178430d2d40afc04c3157d6dacafa', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/YzAzanhzZXIzYW5mMe30UDaPjnOrwC11UuFdT4jEYN-aYkBgfpq_eW4X6lzV.png?width=320&crop=smart&format=pjpg&auto=webp&s=dc63b5e04c6d246b629a4a3436005eda5200ba4f', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/YzAzanhzZXIzYW5mMe30UDaPjnOrwC11UuFdT4jEYN-aYkBgfpq_eW4X6lzV.png?width=640&crop=smart&format=pjpg&auto=webp&s=432781ac55fdadd2fdd44a06f9597f89d33dd262', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/YzAzanhzZXIzYW5mMe30UDaPjnOrwC11UuFdT4jEYN-aYkBgfpq_eW4X6lzV.png?width=960&crop=smart&format=pjpg&auto=webp&s=7d968bd0347170251db7a23bc658891bdaaa35f6', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/YzAzanhzZXIzYW5mMe30UDaPjnOrwC11UuFdT4jEYN-aYkBgfpq_eW4X6lzV.png?width=1080&crop=smart&format=pjpg&auto=webp&s=4e0520ff9c65e5ca6fd2fc2849d979f39226202b', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/YzAzanhzZXIzYW5mMe30UDaPjnOrwC11UuFdT4jEYN-aYkBgfpq_eW4X6lzV.png?format=pjpg&auto=webp&s=0cb88001baefc6137f94bcb8b2ca70aac4507dec', 'width': 1920}, 'variants': {}}]} | |
Is there any way to create consistent illustrations or comics from a story script? If not, any advice on how to achieve this myself? | 0 | Wondering if there’s any way or tool out to turn a story script into a bunch of consistent illustrations or comic panels, like keeping the same characters and style across the whole thing. If no readymade solution exists, I’d really appreciate any tips or ideas on how to create something like this myself. | 2025-09-05T04:13:41 | https://www.reddit.com/r/LocalLLaMA/comments/1n8vixb/is_there_any_way_to_create_consistent/ | kaamalvn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n8vixb | false | null | t3_1n8vixb | /r/LocalLLaMA/comments/1n8vixb/is_there_any_way_to_create_consistent/ | false | false | self | 0 | null |
Has anyone successfully fine-tuned Deepseek V3? | 0 | My most recent attempt was 8xH200 with LLaMA Factory, and LoRA training would OOM even at toy context lengths (512)
I'm willing to rent 8xB200 or whatever it takes, but it felt like the issues I was running into were more broken support than expected OOMs. | 2025-09-05T03:34:10 | https://www.reddit.com/r/LocalLLaMA/comments/1n8urp3/has_anyone_successfully_finetuned_deepseek_v3/ | SpiritualWindow3855 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n8urp3 | false | null | t3_1n8urp3 | /r/LocalLLaMA/comments/1n8urp3/has_anyone_successfully_finetuned_deepseek_v3/ | false | false | self | 0 | null |
Question about how to totally remove llm | 0 | So on my old PC I had my AI setup. I transferred over my SSDs, so it transferred with them. But my problem is I don't remember my passwords and such to add more models. How can I delete everything and reinstall from scratch? I have done some of the deleting but am not sure I got it all. Any help? I can't reformat my SSDs; that is not an option. | 2025-09-05T03:26:23 | https://www.reddit.com/r/LocalLLaMA/comments/1n8umdr/question_about_how_to_totally_remove_llm/ | Mountain-Suit7304 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n8umdr | false | null | t3_1n8umdr | /r/LocalLLaMA/comments/1n8umdr/question_about_how_to_totally_remove_llm/ | false | false | self | 0 | null |
Kimi-K2-Instruct-0905 Released! | 810 | 2025-09-05T03:15:27 | Dr_Karminski | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n8ues8 | false | null | t3_1n8ues8 | /r/LocalLLaMA/comments/1n8ues8/kimik2instruct0905_released/ | false | false | default | 810 | {'enabled': True, 'images': [{'id': '6jq7r55ak9nf1', 'resolutions': [{'height': 149, 'url': 'https://preview.redd.it/6jq7r55ak9nf1.png?width=108&crop=smart&auto=webp&s=71629269e8466020071f38ee56f529c87dc8a350', 'width': 108}, {'height': 298, 'url': 'https://preview.redd.it/6jq7r55ak9nf1.png?width=216&crop=smart&auto=webp&s=090773e48ff07039ce2a2d39399dfc6eda68f1d1', 'width': 216}, {'height': 442, 'url': 'https://preview.redd.it/6jq7r55ak9nf1.png?width=320&crop=smart&auto=webp&s=c894852a19fd14ae6ac2507cf332039bba851636', 'width': 320}, {'height': 884, 'url': 'https://preview.redd.it/6jq7r55ak9nf1.png?width=640&crop=smart&auto=webp&s=0a5eec08b8c7bedbb50e39a668de98e599c3a0b6', 'width': 640}, {'height': 1326, 'url': 'https://preview.redd.it/6jq7r55ak9nf1.png?width=960&crop=smart&auto=webp&s=5b0acaf5560e28b94eb371e03540b9cd9da5cbd0', 'width': 960}, {'height': 1492, 'url': 'https://preview.redd.it/6jq7r55ak9nf1.png?width=1080&crop=smart&auto=webp&s=34d626bf6c3c9609097dc78dc2494ca8ebc02f0d', 'width': 1080}], 'source': {'height': 2095, 'url': 'https://preview.redd.it/6jq7r55ak9nf1.png?auto=webp&s=a2446650106bbdc06306a75a1f03a03ff9e85cc2', 'width': 1516}, 'variants': {}}]} | ||
Introducing EmbeddingGemma: The Best-in-Class Open Model for On-Device Embeddings- Google Developers Blog | 7 | 2025-09-05T02:53:03 | https://developers.googleblog.com/en/introducing-embeddinggemma/ | richardanaya | developers.googleblog.com | 1970-01-01T00:00:00 | 0 | {} | 1n8tym3 | false | null | t3_1n8tym3 | /r/LocalLLaMA/comments/1n8tym3/introducing_embeddinggemma_the_bestinclass_open/ | false | false | default | 7 | {'enabled': False, 'images': [{'id': 'YVJV2K5BxWqUcGq19dxvYXP4zu_HpvAC56XyJ41zblc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YVJV2K5BxWqUcGq19dxvYXP4zu_HpvAC56XyJ41zblc.jpeg?width=108&crop=smart&auto=webp&s=aac0a0c3f6b3c1e28c89c34d9af96c8e738a347d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/YVJV2K5BxWqUcGq19dxvYXP4zu_HpvAC56XyJ41zblc.jpeg?width=216&crop=smart&auto=webp&s=4e9f3fd913246deeff3ab93c70d5489069b20421', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/YVJV2K5BxWqUcGq19dxvYXP4zu_HpvAC56XyJ41zblc.jpeg?width=320&crop=smart&auto=webp&s=55682c264a21917d361af860a70c028364b8fa6b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/YVJV2K5BxWqUcGq19dxvYXP4zu_HpvAC56XyJ41zblc.jpeg?width=640&crop=smart&auto=webp&s=25f438fe039f96afc22af657623fb3d5cc7ef067', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/YVJV2K5BxWqUcGq19dxvYXP4zu_HpvAC56XyJ41zblc.jpeg?width=960&crop=smart&auto=webp&s=cd0f612812e23db5bd9a9e5a99caab2671a62d3d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/YVJV2K5BxWqUcGq19dxvYXP4zu_HpvAC56XyJ41zblc.jpeg?width=1080&crop=smart&auto=webp&s=8a98351617faedb0e90f9c68a012e549711679f2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/YVJV2K5BxWqUcGq19dxvYXP4zu_HpvAC56XyJ41zblc.jpeg?auto=webp&s=7aad633d9398ba7ecc2b5795adc7f9eb09f9d59e', 'width': 1200}, 'variants': {}}]} | |
Local AI made a local conversational LLM work on the iPhones! Not even Apple did this yet | 0 | I've been looking for this for quite a while. It's a little primitive, can't read numbers yet, but it's already useful for me, long drives without internet etc... | 2025-09-05T02:23:50 | https://twitter.com/adrgrondin/status/1963658863401689333 | Vast-Piano2940 | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 1n8tcyu | false | null | t3_1n8tcyu | /r/LocalLLaMA/comments/1n8tcyu/local_ai_made_a_local_conversational_llm_work_on/ | false | false | default | 0 | null |
Is there any fork of openwebui that has an installer for windows? | 2 |
Is there a version of openwebui with an installer, for command-illiterate people? | 2025-09-05T01:41:59 | https://www.reddit.com/r/LocalLLaMA/comments/1n8shex/is_there_any_fork_of_openwebui_that_has_an/ | FatFigFresh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n8shex | false | null | t3_1n8shex | /r/LocalLLaMA/comments/1n8shex/is_there_any_fork_of_openwebui_that_has_an/ | false | false | self | 2 | null |
The AI/LLM race is absolutely insane | 208 | Just look at the past 3 months. We’ve had so many ups and downs in various areas of the field: the research, the business side, the consumer side, etc.
Now look at the past 6 months: Qwen Coder, GLM models, new Grok models, then recently Nano Banana, with GPT-5 before it, then they dropped an improved Codex. Meanwhile, across the board, independent services are providing API access to models too heavy to be hosted locally. Every day a new AI deal is being made. Where is this all even heading? Are we just waiting to watch the bubble blow up? Or are LLMs just going to be another thing before the next thing?
Companies are pouring billions upon billions into this whole race.
Every other day something new drops: a new model, new techniques, new ways of increasing tps, etc.
We’re really witnessing something crazy.
What part of this whole picture are you in? Trying to make a business out of it? Personal usage?
| 2025-09-05T01:16:59 | https://www.reddit.com/r/LocalLLaMA/comments/1n8rytq/th_aillm_race_is_absolutely_insane/ | No-Underscore_s | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n8rytq | false | null | t3_1n8rytq | /r/LocalLLaMA/comments/1n8rytq/th_aillm_race_is_absolutely_insane/ | false | false | self | 208 | null |
Finetuning on Message Between Me and Friend | 1 | Hey all, I want to fine-tune a model on some chat history between me and a friend so I can generate conversation responses between the two of us. I initially went with a vanilla model and finetuned gemma-2-9b-it with meh results. Would I get deeper, more unfiltered convos with a jailbroken model? I was worried it might be harder to finetune, with fewer resources to set up. I am a cost-sensitive cloud user.
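On the data-prep side, here is a minimal sketch of turning one exchange from a chat export into model-ready training text with a Hugging Face chat template (the messages are placeholders; `apply_chat_template` inserts each model's turn markers so you don't have to guess the format):

from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")

# one exchange from the exported chat history (placeholder content)
messages = [
    {"role": "user", "content": "you up for climbing this weekend?"},
    {"role": "assistant", "content": "yeah, saturday morning works for me"},
]

# renders the exchange with the model's own special tokens / turn markers
text = tok.apply_chat_template(messages, tokenize=False)
print(text)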
Conversely, would I have a better experience finetuning with a different base model? I tried to use Gemma 3 but struggled with ensuring the requirements all matched for my training; for some reason I kept running into issues. It's also annoying how each model has its own finetuning chat template, and I'm not sure which is which. | 2025-09-05T01:11:09 | https://www.reddit.com/r/LocalLLaMA/comments/1n8rug0/finetuning_on_message_between_me_and_friend/ | AdLeather8620 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n8rug0 | false | null | t3_1n8rug0 | /r/LocalLLaMA/comments/1n8rug0/finetuning_on_message_between_me_and_friend/ | false | false | self | 1 | null |
Has anyone tried building a multi-MoE architecture where the model converges, then diverges, then reconverges, with more than one level of routing? Let's say each expert has multiple other experts inside it | 0 | Is this something that already exists in research, or has anyone experimented with this type of MoE-inside-MoE? | 2025-09-05T00:53:22 | https://www.reddit.com/r/LocalLLaMA/comments/1n8rgwc/has_anyone_tried_building_a_multimoe_architecture/ | somthing_tn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n8rgwc | false | null | t3_1n8rgwc | /r/LocalLLaMA/comments/1n8rgwc/has_anyone_tried_building_a_multimoe_architecture/ | false | false | self | 0 | null |
new stealth model carrot 🥕, works well for coding | 56 | 2025-09-05T00:38:24 | Illustrious_Row_9971 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n8r5fj | false | null | t3_1n8r5fj | /r/LocalLLaMA/comments/1n8r5fj/new_stealth_model_carrot_works_well_for_coding/ | false | false | 56 | {'enabled': True, 'images': [{'id': 'zlRSXmelO53EvmGxo1RBTrPE76O5JWeJEJ5MrdKcWvM', 'resolutions': [{'height': 107, 'url': 'https://preview.redd.it/v1itltb5s8nf1.png?width=108&crop=smart&auto=webp&s=c9da37f94cd8275c98062be8949e5066ee2bf490', 'width': 108}, {'height': 214, 'url': 'https://preview.redd.it/v1itltb5s8nf1.png?width=216&crop=smart&auto=webp&s=2314499de81d58626b0178b14b9159abb9cd39b0', 'width': 216}, {'height': 318, 'url': 'https://preview.redd.it/v1itltb5s8nf1.png?width=320&crop=smart&auto=webp&s=3639da2b61929d42eac270b57210abfdf1024334', 'width': 320}, {'height': 636, 'url': 'https://preview.redd.it/v1itltb5s8nf1.png?width=640&crop=smart&auto=webp&s=f91cc956af85e049c81bcbf8619265b81e1b1d7d', 'width': 640}, {'height': 955, 'url': 'https://preview.redd.it/v1itltb5s8nf1.png?width=960&crop=smart&auto=webp&s=4f3d8fc7936e40ea8720f8d5b9a45d92c54f7ef9', 'width': 960}, {'height': 1074, 'url': 'https://preview.redd.it/v1itltb5s8nf1.png?width=1080&crop=smart&auto=webp&s=ba537220b9460e921514a76518a1e03cf1ac1589', 'width': 1080}], 'source': {'height': 1198, 'url': 'https://preview.redd.it/v1itltb5s8nf1.png?auto=webp&s=8ec73ddf0e49114cd0485765677b59e37d8a746e', 'width': 1204}, 'variants': {}}]} | |||
old mining rig vulkan llama.cpp optimization | 1 | hello everyone!!
So I have a couple of old RX 580s I’ve used for ETH mining, and I was wondering if they would be useful for local inference.
I tried endless llama.cpp options, building with ROCm and Vulkan, and came to the conclusion that Vulkan is best suited for my setup, since my motherboard doesn’t support the atomic operations necessary for ROCm to run more than one GPU.
I managed to pull off some nice speeds with Qwen-30B, but I still feel like there’s a lot of room for improvement, since a recent small change in llama.cpp’s code bumped prompt processing from 30 tps to 180 tps (the change in question was related to mul_mat_id subgroup allocation).
I’m wondering if there are optimizations that can be done on a case-by-case basis in order to push for greater pp/tg (prompt processing / token generation) speeds.
I don’t know how to read Vulkan debug logs, how shaders work, or what the limitations of the system are and how they could theoretically be pushed through llama.cpp code optimizations specifically tailored for parallel rx580s.
I’m looking for someone who can help me!
Any pointers would be greatly appreciated! Thanks in advance! | 2025-09-04T23:35:15 | https://www.reddit.com/r/LocalLLaMA/comments/1n8prq2/old_mining_rig_vulkan_llamacpp_optimization/ | MidasCapital | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n8prq2 | false | null | t3_1n8prq2 | /r/LocalLLaMA/comments/1n8prq2/old_mining_rig_vulkan_llamacpp_optimization/ | false | false | self | 1 | null |
Why Is Quantization Not Typically Applied to Input and Output Embeddings? | 3 | As far as I can tell, method like SpinQuant [don't quantize the embeddings](https://github.com/facebookresearch/SpinQuant/blob/main/train_utils/modeling_llama_quant.py#L1279) and leave them at high precision.
For [4-bit quantized Llama-3.2-1B](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct-bnb-4bit), the unquantized embeddings take up about half of the model's memory!
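The claim is easy to sanity-check with back-of-envelope arithmetic, assuming Llama-3.2-1B's published shapes (vocab 128,256, hidden size 2,048, tied embeddings, ~1.24B total parameters) and 16-bit embeddings:

vocab, hidden = 128_256, 2_048         # Llama-3.2-1B config
total_params = 1.24e9                  # approximate total parameter count

embed_params = vocab * hidden          # ~262.7M parameters
embed_bytes = embed_params * 2         # fp16/bf16 embeddings
body_bytes = (total_params - embed_params) * 0.5  # 4-bit everything else

print(f"embeddings: {embed_bytes / 2**30:.2f} GiB")   # ~0.49 GiB
print(f"4-bit rest: {body_bytes / 2**30:.2f} GiB")    # ~0.45 GiB
# the unquantized embedding table is indeed roughly half the total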
Does quantizing the embeddings really hurt performance that much? Are there any methods that *do* quantize the embeddings? | 2025-09-04T23:09:17 | https://www.reddit.com/r/LocalLLaMA/comments/1n8p6dt/why_is_quantization_not_typically_applied_to/ | THE_ROCKS_MUST_LEARN | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n8p6dt | false | null | t3_1n8p6dt | /r/LocalLLaMA/comments/1n8p6dt/why_is_quantization_not_typically_applied_to/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': '-0HTjBp3IaFsOTSJafdaCC8Rb4LPFahmojXrwtc0XyM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-0HTjBp3IaFsOTSJafdaCC8Rb4LPFahmojXrwtc0XyM.png?width=108&crop=smart&auto=webp&s=5caab4869dc281743a0da8f45e66b58bf75fd566', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-0HTjBp3IaFsOTSJafdaCC8Rb4LPFahmojXrwtc0XyM.png?width=216&crop=smart&auto=webp&s=ec6782be97eceac28f0dce1b7784cf6fa65246a3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-0HTjBp3IaFsOTSJafdaCC8Rb4LPFahmojXrwtc0XyM.png?width=320&crop=smart&auto=webp&s=8c1807985e5d578e1f353f549b868200ef6dd6ee', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-0HTjBp3IaFsOTSJafdaCC8Rb4LPFahmojXrwtc0XyM.png?width=640&crop=smart&auto=webp&s=718837ac400de230690981b3d64cd7bfc4f12bf0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-0HTjBp3IaFsOTSJafdaCC8Rb4LPFahmojXrwtc0XyM.png?width=960&crop=smart&auto=webp&s=be7e58954b4d78d583b0ebeaa2bdcc9b73ea9520', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-0HTjBp3IaFsOTSJafdaCC8Rb4LPFahmojXrwtc0XyM.png?width=1080&crop=smart&auto=webp&s=dcb38cbe131bf53d6f0dbfdf848aa46c95d372c8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-0HTjBp3IaFsOTSJafdaCC8Rb4LPFahmojXrwtc0XyM.png?auto=webp&s=ad910bda32045c3a70c91aa631bae5bff1cf4568', 'width': 1200}, 'variants': {}}]} |
DO NOT UPDATE YOUR LMSTUDIO IF YOU USE GPT-OSS | 0 | I reached out to their support and they aren’t even sure, but they think update 1.50 (llama.cpp) broke it.
The latest update has caused GPT-OSS to completely ignore all reasoning logic. You will waste your time trying to fix it if you update. | 2025-09-04T22:18:01 | YTLupo | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n8nza0 | false | null | t3_1n8nza0 | /r/LocalLLaMA/comments/1n8nza0/do_not_update_your_lmstudio_if_you_use_gptoss/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'wt0qrcq938nf1', 'resolutions': [{'height': 135, 'url': 'https://preview.redd.it/wt0qrcq938nf1.jpeg?width=108&crop=smart&auto=webp&s=5fcbab0ead56e72c64d8c41504805d70c0699871', 'width': 108}, {'height': 270, 'url': 'https://preview.redd.it/wt0qrcq938nf1.jpeg?width=216&crop=smart&auto=webp&s=fea5ee929dfa9ef735c752807da0e6703709926e', 'width': 216}, {'height': 400, 'url': 'https://preview.redd.it/wt0qrcq938nf1.jpeg?width=320&crop=smart&auto=webp&s=ebe38d45927b0f9a112c8c9dff676abb7d75e331', 'width': 320}, {'height': 800, 'url': 'https://preview.redd.it/wt0qrcq938nf1.jpeg?width=640&crop=smart&auto=webp&s=ffcb372e32273e48479d694693d379fa69ff6e02', 'width': 640}, {'height': 1201, 'url': 'https://preview.redd.it/wt0qrcq938nf1.jpeg?width=960&crop=smart&auto=webp&s=74d267b87c35e1677a7c18852a7c5576a190825a', 'width': 960}, {'height': 1351, 'url': 'https://preview.redd.it/wt0qrcq938nf1.jpeg?width=1080&crop=smart&auto=webp&s=702f2635e052ab1f022e1f092536bc8c0ec6cf01', 'width': 1080}], 'source': {'height': 1464, 'url': 'https://preview.redd.it/wt0qrcq938nf1.jpeg?auto=webp&s=38b22c6f838cb054ed151b2164390975f2ee80f9', 'width': 1170}, 'variants': {}}]} |
Summary of August big events | 70 | ## August 2025
- Google introduced **Gemini 2.5 Deep Think**, a special "extended thinking" mode for solving complex problems and exploring alternatives. (*special*)
- Anthropic released **Claude Opus 4.1**, an upgrade focused on improving agentic capabilities and real-world coding.
- Google DeepMind announced **Genie 3.0**, a "world model" for creating interactive 3D environments from text, maintaining consistency for several minutes. (*special*)
- OpenAI released **gpt-oss-120b** and **gpt-oss-20b**, a family of open-source models with high reasoning capabilities, optimized to run on accessible hardware.
- OpenAI launched **GPT-5**, the company's next-generation model, with significant improvements in coding and a dynamic "thinking" mode to reduce hallucinations.
- DeepSeek released **DeepSeek V3.1**, a hybrid model combining fast and slow "thinking" modes to improve performance in agentic tasks and tool use.
- Google launched a preview of **Gemini 2.5 Flash Image** (showcased as *nano-banana*), an advanced model for precise image editing, merging, and maintaining character consistency. (*special*) | 2025-09-04T22:08:35 | https://www.reddit.com/r/LocalLLaMA/comments/1n8nr7y/summary_of_august_big_events/ | nh_local | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n8nr7y | false | null | t3_1n8nr7y | /r/LocalLLaMA/comments/1n8nr7y/summary_of_august_big_events/ | false | false | self | 70 | null |
Converted my unused laptop into a family server for gpt-oss 20B | 181 | I spent a few hours setting everything up and asked my wife (a frequent ChatGPT user) to help with testing. We're very satisfied so far.
**Keys specs:**
Generation: 46-40 t/s
Context: 20K
Idle power: 2W (around 5 EUR annually)
Generation power: 38W
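The annual-cost figure above checks out with quick arithmetic, assuming roughly 0.30 EUR/kWh (the rate is my assumption, not from the measurements):

idle_watts = 2
eur_per_kwh = 0.30                            # assumed household rate

kwh_per_year = idle_watts * 24 * 365 / 1000   # ~17.5 kWh
print(f"{kwh_per_year * eur_per_kwh:.2f} EUR/year")   # ~5.26 EUR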
**Hardware:**
2021 m1 pro macbook pro 16GB
45W GaN charger
Power meter
**Challenges faced:**
Extremely tight model+context fit into 16GB RAM
Avoiding laptop battery degradation in 24/7 plugged mode
Preventing sleep and autoupdates
Accessing the service from everywhere
**Tools used:**
Battery Toolkit
llama.cpp server
DynDNS
Terminal+SSH (logging into GUI isn't an option due to RAM shortage)
**Thoughts on gpt-oss:**
Very fast and laconic thinking, good instruction following, precise answers in most cases. But sometimes it spits out very strange factual errors never seen even in old 8B models; it might be a sign of intentional weight corruption or "fine-tuning" of their commercial o3 with some garbage data. | 2025-09-04T21:48:48 | https://www.reddit.com/r/LocalLLaMA/comments/1n8n9u6/converted_my_unused_laptop_into_a_family_server/ | Vaddieg | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n8n9u6 | false | null | t3_1n8n9u6 | /r/LocalLLaMA/comments/1n8n9u6/converted_my_unused_laptop_into_a_family_server/ | false | false | self | 181 | null |
Did someone save the weights & code of Microsoft's VibeVoice 7B? | 2 | 2025-09-04T21:26:04 | https://www.reddit.com/r/LocalLLaMA/comments/1n8mprp/did_someone_save_the_weights_code_of_microsofts/ | UltrMgns | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n8mprp | false | null | t3_1n8mprp | /r/LocalLLaMA/comments/1n8mprp/did_someone_save_the_weights_code_of_microsofts/ | false | false | 2 | null | ||
Turn any API into an MCP server automatically | 1 | [removed] | 2025-09-04T21:17:39 | https://www.reddit.com/r/LocalLLaMA/comments/1n8mi2p/turn_any_api_into_an_mcp_server_automatically/ | ynilayy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n8mi2p | false | null | t3_1n8mi2p | /r/LocalLLaMA/comments/1n8mi2p/turn_any_api_into_an_mcp_server_automatically/ | false | false | self | 1 | null |
built-in tools with vllm & gptoss | 3 | Did anyone manage to use built-in tools as described here: [GPT OSS - vLLM](https://docs.vllm.ai/projects/recipes/en/latest/OpenAI/GPT-OSS.html#usage)?
I'm running this simple example server:
import random

from mcp.server.fastmcp import FastMCP, Context

mcp = FastMCP(
    name="dice",
    instructions="Tool for rolling dice. Example: roll a 6-sided dice.",
    host="0.0.0.0",
    port=8001,
)

@mcp.tool(
    name="roll",
    title="Roll a dice",
    description="Rolls a dice with `sides` number of faces (default=6).",
)
async def roll(ctx: Context, sides: int = 6) -> str:
    """Roll a dice and return the result"""
    if sides < 2:
        return "Dice must have at least 2 sides."
    result = random.randint(1, sides)
    return f"You rolled a {result} on a {sides}-sided dice."

if __name__ == "__main__":
    # expose the server over SSE so vLLM's --tool-server can reach it
    mcp.run(transport="sse")
and vllm like this:
vllm:
  container_name: vllm
  image: vllm/vllm-openai:v0.10.1.1
  security_opt:
    - label=disable
  ipc: host
  runtime: nvidia
  deploy:
    replicas: 1
    resources:
      reservations:
        devices:
          - driver: nvidia
            count: all
            capabilities: [gpu]
  volumes:
    - "/home/fdifazio/.cache/huggingface/hub/models--openai--gpt-oss-20b:/model:ro"
  ports:
    - "8000:8000"
  command: >
    --model=/model/snapshots/f4770b2b29499b3906b1615d7bffd717f167201f/
    --host=0.0.0.0 --tool-server mcpserver:8001 --port=8000 --enforce-eager
    --served-model-name gptoss-20b --gpu-memory-utilization 0.95
    --max-model-len 16384
the "--tool-server" part is working, in the vllm startup log is can see
(APIServer pid=1) INFO 09-04 13:08:27 [tool_server.py:135] MCPToolServer initialized with tools: ['dice']
(APIServer pid=1) WARNING 09-04 13:08:27 [serving_responses.py:137] For gpt-oss, we ignore --enable-auto-tool-choice and always enable tool use.
Still, the MCP server didn't get called. I tried various ways, with the Python openai client and with curl, like:
curl http://localhost:8000/v1/responses -H 'Content-Type: application/json' -d '{
  "model": "gptoss-20b",
  "input": [
    {"role": "system", "content": "You can use one MCP tool: dice.roll(sides:int). Use it whenever the user asks to roll dice."},
    {"role": "user", "content": "Roll a 6-sided die and return only the number."}
  ],
  "reasoning": {"effort": "high"},
  "tool_choice": "auto"
}'
which produced this output, but no call to the MCP server:
{
"id": "resp_1142e762bc32448aaf0870855af35403",
"created_at": 1757019581,
"instructions": null,
"metadata": null,
"model": "gptoss-20b",
"object": "response",
"output": [
{
"id": "rs_f24e5a82122642a6bce561e1e1814bd3",
"summary": [],
"type": "reasoning",
"content": [
{
"text": "We need to use dice.roll(sides:int). The user specifically says \"Roll a 6-sided die and return only the number.\"\n\nWe must provide only the number. Use the dice.roll function presumably returns a number between 1 and 6 inclusive. So we call dice.roll(6). Then output the number. In interactions, we should not include extraneous text: \"only the number.\"\n\nHence the answer should be just the number. But **instructions**: \"Use one MCP tool: dice.roll(sides:int). Use it whenever the user asks to roll dice.\" So we should call the tool. But first we must produce a request to the tool. In the output, we need to have the use of the tool. The instruction says: \"Use one MCP tool: dice.roll(sides:int). Use it whenever the user asks to roll dice.\" That means we must use the tool. The tool will produce an object. So we should produce a tool call with name: dice.roll and arguments: {\"sides\": 6}. Then the tool's response will come after.\n\nThe correct format of the tool call is indicating the tool usage. In the conversation, the assistant prints: \n\n```\n{\n \"name\": \"dice.roll\",\n \"arguments\": {\n \"sides\": 6\n }\n}\n```\n\nBut we shouldn't output that; we should produce the tool request. Actually we should output as a tool request format. Provide the JSON. According to typical tool usage, the assistant sends a message with the function call. So we highlight that.\n\nThus we need to do:\n\n```\n{\n \"name\": \"dice.roll\",\n \"arguments\": {\n \"sides\": 6\n }\n}\n```\n\nThen presumably the tool will return a number, say 3. After receiving that, we will output \"3\".\n\nHowever, the instruction says: \"Return only the number.\" It doesn't say the tool call information should not be part. But tests might expect just the number. Many tasks revolve around returning only the number. But they also mention tool usage. Actually the test harness will parse tool calls. The assistant's final output after tool returns should be just the number. So we need to call the tool.\n\nHence step 1: produce a tool call. Step 2: after tool response, produce just the number. But we can't produce the number until we get tool output.\n\nSo the assistant's first message will be the tool call. Then after tool invocation, the system will likely produce a tool output with {\"result\": <value>}. Then we respond accordingly with only the number.\n\nBut at the top, the user requests \"Roll a 6-sided die and return only the number.\" So we need to output only the number. But for internal exchange, we must call the tool.\n\nOk. Let me think. The environment is \"MCP\" style. Should we produce a single output with tool call? Or just answer? The instruction: \"You can use one MCP tool: dice.roll(sides:int). Use it whenever the user asks to roll dice.\" This indicates for the tool usage. But the final output to the user should be only the number. After obtaining it from the tool, we will include just the number in a normal message formatted as it should be displayed to the user.\n\nSo we produce:\n\nAssistant sends tool request:\n\n```\n{\n \"name\": \"dice.roll\",\n \"arguments\": { \"sides\": 6 }\n}\n```\n\nThen tool returns something like {\"result\": 4}.\n\nAssistant then responds: \"4\"\n\nYes.\n\nWill the test harness expect any extra text? Usually the final assistant message should be \"4\" only. This is the correct.\n\nThus we do that.",
"type": "reasoning_text"
}
],
"encrypted_content": null,
"status": null
},
{
"arguments": "{\"sides\":6}",
"call_id": "call_ded484d77d1344e696d33be785a8031a",
"name": "roll",
"type": "function_call",
"id": "ft_ded484d77d1344e696d33be785a8031a",
"status": null
}
],
"parallel_tool_calls": true,
"temperature": 1.0,
"tool_choice": "auto",
"tools": [],
"top_p": 1.0,
"background": false,
"max_output_tokens": 16272,
"max_tool_calls": null,
"previous_response_id": null,
"prompt": null,
"reasoning": {
"effort": "high",
"generate_summary": null,
"summary": null
},
"service_tier": "auto",
"status": "completed",
"text": null,
"top_logprobs": 0,
"truncation": "disabled",
"usage": {
"input_tokens": 0,
"input_tokens_details": {
"cached_tokens": 0
},
"output_tokens": 0,
"output_tokens_details": {
"reasoning_tokens": 0
},
"total_tokens": 0
},
"user": null
}
Any ideas? I'm kinda stuck.
| 2025-09-04T21:02:49 | https://www.reddit.com/r/LocalLLaMA/comments/1n8m4qs/builtin_tools_with_vllm_gptoss/ | IAmReallyOk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n8m4qs | false | null | t3_1n8m4qs | /r/LocalLLaMA/comments/1n8m4qs/builtin_tools_with_vllm_gptoss/ | false | false | self | 3 | null |
Need help figuring out a setup for my use case | 0 | I have a friend who had a tragedy happen. Her kid unalived themselves and she doesn't know why. She has all of their stuff and has gone through everything, but her mental context limit can't take all of it and analyze it to figure out why. She really wants answers, but understands that she may never get them. She just wants to do everything she can to find them.
She is familiar with LLMs like ChatGPT and others, but doesn't want to upload all of her kid's data to those companies' servers.
We've talked about it a lot, and I've told her that an LLM may mislead her, hallucinate, or be completely wrong. She knows that, but she thinks it might at least help her see even one thing she missed.
I have a PC with a 1660 Ti (6GB VRAM) and I don't think that will be enough. I've seen places like Vast.ai and RunPod, but I'm not really sure how to set all of that up, or whether the data would stay private and safe there. I've run webuis and other programs that let you use local LLMs before, so I know the basics at least.
I need help with how to set this up and the cheapest way to do it by renting a GPU. I also need help finding the best model for this. I would need image upload, and if video upload is possible, that would be great too.
Thank you! I would really love to help her. | 2025-09-04T21:00:04 | https://www.reddit.com/r/LocalLLaMA/comments/1n8m20x/need_help_figuring_out_a_setup_for_my_use_case/ | KeyMillion | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n8m20x | false | null | t3_1n8m20x | /r/LocalLLaMA/comments/1n8m20x/need_help_figuring_out_a_setup_for_my_use_case/ | false | false | self | 0 | null |
DeepSeek is everybody... | 0 | Apparently DeepSeek has not a single clue who it is... The "specifically Claude 2.5.." got me. | 2025-09-04T21:00:03 | https://www.reddit.com/gallery/1n8m20b | GenLabsAI | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1n8m20b | false | null | t3_1n8m20b | /r/LocalLLaMA/comments/1n8m20b/deepseek_is_everybody/ | false | false | 0 | null | |
Advice a beginner please! | 0 | I am a noob, so please do not judge me. I am a teen and my budget is kinda limited, and that's why I am asking.
I love tinkering with servers, and I wonder if it is worth buying an AI server to run a local model.
Privacy, yes, I know. But what about the performance? Is a Llama 70B as good as GPT-5? What are the hardware requirements for that? Does it matter a lot for response quality if I go with a somewhat smaller version?
I have seen people buying 3 RTX 3090s to get 72GB VRAM, and that is why a used RTX 3090 is faaar more expensive than a brand new RTX 5070 locally.
If it is mostly about the VRAM, could I go with 2x Arc A770 16GB? A 3060 12GB? Would that be enough for a good model?
Why can't the model just use the RAM instead? Is it that much slower, or am I missing something here?
What about CPU recommendations? I rarely see anyone talking about that.
I really appreciate any recommendations and advice here! | 2025-09-04T20:57:54 | https://www.reddit.com/r/LocalLLaMA/comments/1n8m01t/advice_a_beginner_please/ | SailAway1798 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n8m01t | false | null | t3_1n8m01t | /r/LocalLLaMA/comments/1n8m01t/advice_a_beginner_please/ | false | false | self | 0 | null |
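(For anyone wanting a rough starting point on the VRAM question above: a back-of-the-envelope sketch. The formula is a rule of thumb, not an exact requirement.)

# Rough VRAM estimate for quantized LLM weights (rule of thumb, not exact):
# weights_gb ~= params_in_billions * bytes_per_param, plus overhead for the
# KV cache and runtime buffers (assumed here to be ~20%).
def approx_vram_gb(params_b: float, bits_per_param: float = 4.0, overhead: float = 1.2) -> float:
    return params_b * (bits_per_param / 8.0) * overhead

print(f"70B @ 4-bit: ~{approx_vram_gb(70):.0f} GB")  # ~42 GB, hence the multi-3090 builds
print(f"8B  @ 4-bit: ~{approx_vram_gb(8):.1f} GB")   # ~4.8 GB, fits a 3060 12GB easily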
Claude github integration console vs Claude.ai? | 1 | If you use /install-github-app you seem to end up with a GitHub integration that links to Claude.ai in GitHub. But for API use / business accounts, console.claude seems to be the right place. How do you set up the GitHub integration to work via API key or through console.claude so it uses the business account? | 2025-09-04T20:38:19 | https://www.reddit.com/r/LocalLLaMA/comments/1n8li00/claude_github_integration_console_vs_claudeai/ | Jentano | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n8li00 | false | null | t3_1n8li00 | /r/LocalLLaMA/comments/1n8li00/claude_github_integration_console_vs_claudeai/ | false | false | self | 1 | null |
What’s the largest model you’ve managed to run on a Raspberry Pi 5 (8GB)? | 1 | I recently got Gemma 2B (GGUF) running locally with Ollama on a Raspberry Pi 5 (4GB), and it worked surprisingly well for short, context-aware outputs. Now I’ve upgraded to the 8GB model and I’m curious:
👉 Has anyone managed to run something bigger — like 3B or even a quantized 7B — and still get usable performance?
I'm using this setup in a side project that generates motivational phrases for an e-paper dashboard based on Strava and Garmin data. The model doesn't need to be chatty — just efficient and emotionally coherent.
For context (if you're curious):
🦊 https://www.hackster.io/rsappia/e-paper-dashboard-where-sport-ai-and-paper-meet-10c0f0
Would love to hear your experiences with model size, performance, and any recommendations! | 2025-09-04T20:16:01 | https://www.reddit.com/r/LocalLLaMA/comments/1n8kx5u/whats_the_largest_model_youve_managed_to_run_on_a/ | Ricardo_Sappia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n8kx5u | false | null | t3_1n8kx5u | /r/LocalLLaMA/comments/1n8kx5u/whats_the_largest_model_youve_managed_to_run_on_a/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'VpFOswivD_QTF2xDQwsKHiabBV63DS_AOQAkuWYvUrU', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/VpFOswivD_QTF2xDQwsKHiabBV63DS_AOQAkuWYvUrU.jpeg?width=108&crop=smart&auto=webp&s=b9af4e7a7a4336d3f1f18d6ac810868817296ba9', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/VpFOswivD_QTF2xDQwsKHiabBV63DS_AOQAkuWYvUrU.jpeg?width=216&crop=smart&auto=webp&s=0279bbc8907d9c918829ec7293119189be79b368', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/VpFOswivD_QTF2xDQwsKHiabBV63DS_AOQAkuWYvUrU.jpeg?width=320&crop=smart&auto=webp&s=96b54e005855d68871c5671110d3363183381a67', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/VpFOswivD_QTF2xDQwsKHiabBV63DS_AOQAkuWYvUrU.jpeg?width=640&crop=smart&auto=webp&s=18e747145c8243427a43a3c3e63412e844ca19f0', 'width': 640}, {'height': 720, 'url': 'https://external-preview.redd.it/VpFOswivD_QTF2xDQwsKHiabBV63DS_AOQAkuWYvUrU.jpeg?width=960&crop=smart&auto=webp&s=e1d949a1d61e8eff2d2928be8713d232165b2cad', 'width': 960}, {'height': 810, 'url': 'https://external-preview.redd.it/VpFOswivD_QTF2xDQwsKHiabBV63DS_AOQAkuWYvUrU.jpeg?width=1080&crop=smart&auto=webp&s=6cee15e8a9cbab1457a697f1bffe4bdc6970efaa', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://external-preview.redd.it/VpFOswivD_QTF2xDQwsKHiabBV63DS_AOQAkuWYvUrU.jpeg?auto=webp&s=8325599cbf325370714335e6709bcd01ab980cd0', 'width': 1600}, 'variants': {}}]} |
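In case it helps anyone reproduce the setup: a minimal sketch of how a phrase can be requested from the Pi, assuming Ollama's default HTTP API on port 11434 (the model tag and prompt here are illustrative, not the exact ones from the project):

# Minimal sketch: ask a small local model on the Pi for a short phrase.
# Model tag and prompt are illustrative assumptions.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "gemma2:2b",
        "prompt": "Write one short motivational phrase for a runner.",
        "stream": False,
    },
    timeout=300,
)
print(resp.json()["response"])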
An Easy Way to Copy Human Reasoning | 3 | Hey everyone, I recently published an article (May 26, 2025) titled **“An Easy Way to Copy Human Reasoning”**, where I explore how combining techniques like **latent variable modeling**, **chain-of-thought (CoT)**, **supervised fine-tuning**, **reinforcement learning**, and **knowledge distillation** can empower large language models to better emulate human reasoning processes.
In the post, I break down:
* How introducing a latent variable *z* lets models explicitly represent intermediate reasoning steps and marginalize over multiple reasoning paths to improve answer correctness (see the sketch after this list).
* The role of CoT and how guiding models with thoughtful prompts like *“let’s think step by step”* or structured training data helps uncover their internal reasoning traces.
* How SFT objectives can be enhanced by marginalizing over latent reasoning chains, acknowledging multiple valid solution paths.
* Reinforcement learning strategies that self-improve reasoning by generating and validating reasoning traces, especially in STEM domains with automated scoring tools.
* The future potential of extending these approaches into environments like legal reasoning, healthcare, open-world games, and how online learning via test-time scaling might push generalizable reasoning.
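For concreteness, here is a minimal sketch of the marginalization the first bullet refers to, in notation I am assuming rather than quoting from the article: with input $x$, answer $y$, and latent reasoning chain $z$,

$$p_\theta(y \mid x) = \sum_{z} p_\theta(z \mid x)\, p_\theta(y \mid x, z)$$

so the SFT objective maximizes $\log \sum_{z} p_\theta(z \mid x)\, p_\theta(y \mid x, z)$ rather than committing to a single reasoning path.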
If you're interested in:
* Making LLMs more interpretable via reasoning paths
* Bridging symbolic and statistical reasoning with latent variables
* Advancing reasoning capabilities beyond STEM tasks
…feel free to check it out—would love to hear your thoughts or spar on ideas!
Link:https://x.com/LuozhuZhang/status/1926955069083107728 | 2025-09-04T20:11:55 | https://www.reddit.com/r/LocalLLaMA/comments/1n8ktgf/an_easy_way_to_copy_human_reasoning/ | LuozhuZhang | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n8ktgf | false | null | t3_1n8ktgf | /r/LocalLLaMA/comments/1n8ktgf/an_easy_way_to_copy_human_reasoning/ | false | false | self | 3 | null |
I'm so curious about qwen3's top_k requirements. What is the safest threshold to push it? | 7 | Qwen3 is powerful, especially the 30b-a3b model, but I've always been curious about why a `top_k` of 20 is recommended for it. Regardless, it's great at chatting and following instructions, but 20 leads to chat slop. Raising it to 40 shows better results.
So in order to minimize slop while maintaining quality using `top_k` alone for this model in particular, how high can I push it before it starts diminishing in quality?
Also, why `top_k` 20? Was Alibaba aiming for precision, or does this have to do with maintaining the precision of small models on complex tasks? | 2025-09-04T20:07:51 | https://www.reddit.com/r/LocalLLaMA/comments/1n8kpof/im_so_curious_about_qwen3s_top_k_requirements/ | swagonflyyyy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n8kpof | false | null | t3_1n8kpof | /r/LocalLLaMA/comments/1n8kpof/im_so_curious_about_qwen3s_top_k_requirements/ | false | false | self | 7 | null |
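For reference, a minimal sketch of what the `top_k` knob in the post above does at each decoding step (illustrative only; real inference stacks combine it with temperature and `top_p`):

import numpy as np

def top_k_sample(logits: np.ndarray, k: int = 20, temperature: float = 0.7) -> int:
    """Keep the k highest-logit tokens, renormalize, and sample one token id."""
    top = np.argpartition(logits, -k)[-k:]  # indices of the k largest logits
    scaled = (logits[top] - logits[top].max()) / temperature
    probs = np.exp(scaled)
    probs /= probs.sum()
    return int(np.random.choice(top, p=probs))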
New AI Dungeon Models: Wayfarer 2 12B & Nova 70B | 114 | 2025-09-04T20:01:58 | https://www.reddit.com/r/LocalLLaMA/comments/1n8kk48/new_ai_dungeon_models_wayfarer_2_12b_nova_70b/ | NottKolby | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n8kk48 | false | null | t3_1n8kk48 | /r/LocalLLaMA/comments/1n8kk48/new_ai_dungeon_models_wayfarer_2_12b_nova_70b/ | false | false | 114 | {'enabled': False, 'images': [{'id': 'YTw5l9Vh8yq-jd3sLHEgGG1W0jXu67lGtXgcd-NGSsQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/YTw5l9Vh8yq-jd3sLHEgGG1W0jXu67lGtXgcd-NGSsQ.png?width=108&crop=smart&auto=webp&s=1ed55b8bf9459c056967c9294a29a2582919581f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/YTw5l9Vh8yq-jd3sLHEgGG1W0jXu67lGtXgcd-NGSsQ.png?width=216&crop=smart&auto=webp&s=1931038a231d688532ac663fa28ff452030e097d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/YTw5l9Vh8yq-jd3sLHEgGG1W0jXu67lGtXgcd-NGSsQ.png?width=320&crop=smart&auto=webp&s=6d5c235603513e0dc73bfffc38a2aa1137bc3e8d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/YTw5l9Vh8yq-jd3sLHEgGG1W0jXu67lGtXgcd-NGSsQ.png?width=640&crop=smart&auto=webp&s=3798d8af683fa3a9a0d957b91f08d8b015b64c2f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/YTw5l9Vh8yq-jd3sLHEgGG1W0jXu67lGtXgcd-NGSsQ.png?width=960&crop=smart&auto=webp&s=1305daa8f046f203f1c8fd57b6701e495e7e27ad', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/YTw5l9Vh8yq-jd3sLHEgGG1W0jXu67lGtXgcd-NGSsQ.png?width=1080&crop=smart&auto=webp&s=672be7e61283843d12fdc8cb4558df3f8afd46ca', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/YTw5l9Vh8yq-jd3sLHEgGG1W0jXu67lGtXgcd-NGSsQ.png?auto=webp&s=becda714c5c7c83ec5915fb6ebcdf0eb6d2bda9d', 'width': 1200}, 'variants': {}}]} | ||
[SWE-rebench] GLM-4.5 & Qwen3-Coder right behind Sonnet/GPT-5 on fresh GitHub tasks | 215 | Hi all, I’m Ibragim from Nebius.
We benchmarked **52 fresh GitHub PR tasks from August 2025** on the [SWE-rebench](http://swe-rebench.com) leaderboard. These are real, recent problems (no train leakage). We ran both proprietary and open-source models.
**Quick takeaways:**
1. Top = **Sonnet 4 and GPT-5:** on the August slice there is no statistically significant gap between them.
2. Very close: **GLM-4.5 and Qwen3-Coder-480B.** Results are strong — **open source looks great** here!
3. **Grok Code Fast 1** is \~similar to **o3** in quality, but about **20×** cheaper (\~$0.05 per task).
Please check the [leaderboard](https://swe-rebench.com/) itself — 30+ models there, including **gpt-oss-20b**, **Qwen3-Coder-30B-A3B-Instruct**, **GLM-4.5-Air**, etc. Also you can click Inspect to see each of the **52 tasks from 51 repos**. And we added price per instance!
**P.S.** If you would like us to add more models, or if you notice any questionable tasks, please write in the comments. After our previous post, we received a lot of feedback and updated the leaderboard based on that. | 2025-09-04T19:55:29 | Fabulous_Pollution10 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n8kdxi | false | null | t3_1n8kdxi | /r/LocalLLaMA/comments/1n8kdxi/swerebench_glm45_qwen3coder_right_behind/ | false | false | default | 215 | {'enabled': True, 'images': [{'id': '7xidzcpxc7nf1', 'resolutions': [{'height': 59, 'url': 'https://preview.redd.it/7xidzcpxc7nf1.png?width=108&crop=smart&auto=webp&s=7928ea4d253318f31d04c1daf3357987e1d87c65', 'width': 108}, {'height': 119, 'url': 'https://preview.redd.it/7xidzcpxc7nf1.png?width=216&crop=smart&auto=webp&s=927f2e0ab7b470b2c9398c2a209922e0b512ce90', 'width': 216}, {'height': 176, 'url': 'https://preview.redd.it/7xidzcpxc7nf1.png?width=320&crop=smart&auto=webp&s=79cd80df2e0be0bb71bddc23d7e3cb63e5151b14', 'width': 320}, {'height': 352, 'url': 'https://preview.redd.it/7xidzcpxc7nf1.png?width=640&crop=smart&auto=webp&s=a6b3daa90d295013ce73775054ed5a8a3110c2eb', 'width': 640}], 'source': {'height': 490, 'url': 'https://preview.redd.it/7xidzcpxc7nf1.png?auto=webp&s=ec2a842c7ec1d81c74a850e660d13c5fa417a3b8', 'width': 889}, 'variants': {}}]} | |
Adding another GPU to pair with 4090? | 2 | I currently have a gaming PC with a 5950X, 32GB DDR4, and an RTX 4090. I play with local LLMs mostly as a hobby, as I am fascinated by how the gap is closing between SOTA and what can be run on a gaming GPU. It does not make sense for me to invest in a dedicated AI server or similar, but it would be interesting to be able to run a bit larger models than I currently can.
A few questions:
1. Does it work well when you mix different GPUs for AI usage? E.g. say I added an RTX 3090 to the mix, will I basically be operating at the lowest common denominator, or is it worthwhile?
2. Will I need more system RAM? I am still unclear about how many tools support loading directly to VRAM.
3. (bonus question) Can I disable one GPU easily when not doing AI, to reduce power consumption and ensure x16 for the RTX 4090 when gaming? | 2025-09-04T19:53:47 | https://www.reddit.com/r/LocalLLaMA/comments/1n8kcck/adding_another_gpu_to_pair_with_4090/ | ziphnor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n8kcck | false | null | t3_1n8kcck | /r/LocalLLaMA/comments/1n8kcck/adding_another_gpu_to_pair_with_4090/ | false | false | self | 2 | null |
What's the best way to get the most out of LLMs for "vibe coding"? | 7 | I spent hours going back and forth with ChatGPT (happy to take alternative suggestions) to help assemble a script to bulk-process PDFs into text, extract metadata, and export as JSON for RAG. It took a while to get ChatGPT to output exactly the script needed without forgetting to include things.
I imagine a concise prompt that details everything needed (features, tools to use, etc.) is probably the best way to get the output I want without having to go back and forth for hours. The script itself is not that long, so I'd assume the issue could be the length of our entire conversation blowing out the context window, which results in the LLM "forgetting" to add certain bits of code.
| 2025-09-04T19:51:38 | https://www.reddit.com/r/LocalLLaMA/comments/1n8kadm/whats_the_best_way_to_get_the_most_out_of_llms/ | Asthenia5 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n8kadm | false | null | t3_1n8kadm | /r/LocalLLaMA/comments/1n8kadm/whats_the_best_way_to_get_the_most_out_of_llms/ | false | false | self | 7 | null |