title: stringlengths (1 to 300)
score: int64 (0 to 8.54k)
selftext: stringlengths (0 to 41.5k)
created: timestamp[ns] (2023-04-01 04:30:41 to 2026-03-04 02:14:14)
url: stringlengths (0 to 878)
author: stringlengths (3 to 20)
domain: stringlengths (0 to 82)
edited: timestamp[ns] (1970-01-01 00:00:00 to 2026-02-19 14:51:53)
gilded: int64 (0 to 2)
gildings: stringclasses (7 values)
id: stringlengths (7 to 7)
locked: bool (2 classes)
media: stringlengths (646 to 1.8k)
name: stringlengths (10 to 10)
permalink: stringlengths (33 to 82)
spoiler: bool (2 classes)
stickied: bool (2 classes)
thumbnail: stringlengths (4 to 213)
ups: int64 (0 to 8.54k)
preview: stringlengths (301 to 5.01k)
Hardware Requirement for AI/ML
1
Hi, I’m studying software engineering in college and finishing all my lower-division classes (mostly not directly related to the major) this semester. AI/ML seems interesting, and I want to specialize in it or at least steer myself toward it. Anyway, I was thinking of buying a laptop with an RTX 4070 and 16 GB of RAM, but the more I research, the more confused I get: some say 32 GB of RAM is necessary, while others say 16 GB is fine (I even saw someone on Reddit working with 8 GB). Making a decision is really tough for me at this point. Could you guys help me? What I want to buy is an Intel Core Ultra 9 185H with an RTX 4070 and 16 GB of RAM, or should I get an i9-14900HX with an RTX 4080 and 32 GB? Both are identically priced, but the 4070/16 GB one has the slim build I want, while the other is thick and heavy, which I don't want to lug around college or in daily life. I'm also planning not to replace the laptop for the next 4-5 years. Thank you, guys!
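A rough rule of thumb for sizing questions like this, sketched in Python (the function name and the 20% overhead factor are my own illustrative assumptions, not vendor guidance): weights dominate, at roughly parameter count × bits per weight / 8, plus headroom for KV cache and runtime buffers.

```python
def model_memory_gb(params_billion: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    """Rough footprint: weight bytes plus ~20% headroom for KV cache and buffers."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# A 4-bit-quantized 8B model fits comfortably in 8 GB of VRAM,
# while the same model at fp16 already wants a 24 GB card.
print(round(model_memory_gb(8, 4.5), 1))   # ~5.4 GB
print(round(model_memory_gb(8, 16), 1))    # ~19.2 GB
```

By this estimate, either laptop runs quantized 7B-8B models on the GPU; system RAM mostly matters for datasets, CPU offload, and everything else running alongside.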
2025-11-17T10:17:40
https://www.reddit.com/r/LocalLLaMA/comments/1ozcnxj/hardware_requirement_for_aiml/
Sad-Concentrate8364
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ozcnxj
false
null
t3_1ozcnxj
/r/LocalLLaMA/comments/1ozcnxj/hardware_requirement_for_aiml/
false
false
self
1
null
which models were pretty good at visual reasoning??
2
I don't know whether visual reasoning is the right metric for this or not, but what I'm trying to build is something that generates node-based flows using React Flow and Mermaid from prompts. As of now I'm using Sonnet 4.5. The thing is, I need to generate the node positions too, and when it's a complex flow the results aren't that good. Are there any other models that are good at this kind of task?
2025-11-17T10:14:48
https://www.reddit.com/r/LocalLLaMA/comments/1ozcmc2/which_models_were_pretty_good_at_visual_reasoning/
lavangamm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ozcmc2
false
null
t3_1ozcmc2
/r/LocalLLaMA/comments/1ozcmc2/which_models_were_pretty_good_at_visual_reasoning/
false
false
self
2
null
MemLayer, a Python package that gives local LLMs persistent long-term memory (open-source)
232
# What Memlayer Does MemLayer is an open-source **Python package** that adds persistent, long-term memory to **local LLMs** and embedding pipelines. Local models are powerful, but they’re stateless. Every prompt starts from zero. This makes it difficult to build assistants or agents that remember anything from one interaction to the next. MemLayer provides a lightweight memory layer that works entirely **offline**: * captures key information from conversations * stores it persistently using **local** vector + graph memory * retrieves relevant context automatically on future calls * works with any local embedding model (BGE, Instructor, SentenceTransformers, etc.) * does not require OpenAI / cloud APIs The workflow: you send a message → MemLayer saves what matters → later, when you ask something related, the local model answers correctly because the memory layer retrieved the earlier information. Everything happens locally. No servers, no internet, no external dependencies. # Target Audience MemLayer is perfect for: * Users building offline LLM apps or assistants * Developers who want **persistent recall** across sessions * People running **GGUF models**, **local embeddings**, or **on-device inference** * Anyone who wants a memory system without maintaining vector databases or cloud infra * Researchers exploring long-term memory architectures for local models It’s lightweight, works with CPU or GPU, and requires no online services. # Comparison With Existing Alternatives Some frameworks include memory components, but MemLayer differs in key ways: * **Local-first:** Designed to run with offline LLMs and embedding models. * **Pure Python + open-source:** Easy to inspect, modify, or extend. * **Structured memory:** Combines semantic vector recall with optional graph memory. * **Noise-aware:** Includes an optional ML-based “is this worth saving?” gate to avoid storing junk. * **Infrastructure-free:** No cloud APIs, storage is all local files. 
The goal is to offer a memory layer you can drop into any **local LLM workflow** without adopting a large framework or setting up servers. If anyone has feedback, ideas, or wants to try it with their own local models, I’d love to hear it. **GitHub:** [https://github.com/divagr18/memlayer](https://github.com/divagr18/memlayer) **PyPI:** `pip install memlayer`
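The capture → store → retrieve loop described above is easy to demo in miniature. The toy below is my own illustration, not MemLayer's actual API: it swaps real embeddings for bag-of-words cosine similarity but shows the same shape — save what matters, then rank stored notes against a later query.

```python
import math
import re
from collections import Counter

class ToyMemory:
    """Minimal capture/retrieve loop with bag-of-words cosine similarity
    standing in for real embedding models."""

    def __init__(self) -> None:
        self.notes: list[str] = []

    @staticmethod
    def _vec(text: str) -> Counter:
        return Counter(re.findall(r"[a-z']+", text.lower()))

    @staticmethod
    def _cos(a: Counter, b: Counter) -> float:
        dot = sum(a[t] * b[t] for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def save(self, text: str) -> None:
        self.notes.append(text)

    def retrieve(self, query: str, k: int = 1) -> list[str]:
        q = self._vec(query)
        return sorted(self.notes, key=lambda n: self._cos(q, self._vec(n)), reverse=True)[:k]

mem = ToyMemory()
mem.save("User's favorite language is OCaml")
mem.save("User lives in Berlin")
print(mem.retrieve("what is the user's favorite language"))
```

A real memory layer replaces `_vec`/`_cos` with an embedding model and a vector index, and adds the "is this worth saving?" gate, but the control flow is the same.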
2025-11-17T09:34:08
https://www.reddit.com/r/LocalLLaMA/comments/1ozbzpx/memlayer_a_python_package_that_gives_local_llms/
MoreMouseBites
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ozbzpx
false
null
t3_1ozbzpx
/r/LocalLLaMA/comments/1ozbzpx/memlayer_a_python_package_that_gives_local_llms/
false
false
self
232
null
Free GPU in VS Code (Google Colab x VS Code)
21
Google Colab now has a VS Code extension, so you can use the free T4 GPU directly from VS Code on your local system. Demo --> [https://youtu.be/sTlVTwkQPV4](https://youtu.be/sTlVTwkQPV4)
2025-11-17T09:25:29
https://www.reddit.com/r/LocalLLaMA/comments/1ozbv38/free_gpu_in_vs_code_google_colab_x_vs_code/
Technical-Love-8479
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ozbv38
false
null
t3_1ozbv38
/r/LocalLLaMA/comments/1ozbv38/free_gpu_in_vs_code_google_colab_x_vs_code/
false
false
self
21
{'enabled': False, 'images': [{'id': 'qmPpJNZ122hdnbLudI5pid3FgVulEBoLl3GJMb0nkKU', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/qmPpJNZ122hdnbLudI5pid3FgVulEBoLl3GJMb0nkKU.jpeg?width=108&crop=smart&auto=webp&s=2d165765654baa46a007b1557e15ca43b0da9ab9', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/qmPpJNZ122hdnbLudI5pid3FgVulEBoLl3GJMb0nkKU.jpeg?width=216&crop=smart&auto=webp&s=0074941d6ff4203a36868de50bfdfb4587da5e49', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/qmPpJNZ122hdnbLudI5pid3FgVulEBoLl3GJMb0nkKU.jpeg?width=320&crop=smart&auto=webp&s=2dbe490f99113b4ecceaf624f1acd9cea7154a84', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/qmPpJNZ122hdnbLudI5pid3FgVulEBoLl3GJMb0nkKU.jpeg?auto=webp&s=5e1fd19a6fbc83548d9e3c62fbc3217f0fca169d', 'width': 480}, 'variants': {}}]}
BSD MAC LLM UI: Minimal, Auditable LLM Front End for Secure Environments
6
**BSD MAC LLM UI** is a compact, security-focused chat interface built in C with lean design principles and released under the BSD 3-Clause license. It offers a no-JavaScript HTML/CSS web UI or optional GTK/Qt GUI, routing prompts either to an OpenAI-compatible API or running fully offline via TensorRT-LLM - ideal for isolated and hardened environments such as OpenBSD, Linux, OpenXT, or Qubes OS. The talk by **Arthur Rasmusson** presents its single-binary architecture with stateless form posts, strict timeouts, and kernel sandboxing through pledge and seccomp. Example deployments include localhost, WireGuard, and Tor hidden services. Developers gain a reproducible template for building low-overhead, auditable LLM interfaces fit for air-gapped or compliance-driven systems. More details: **🔗 Video, description & slides:** [https://cfp.3mdeb.com/developers-vpub-0x10-2025/talk/UHJWWW/](https://cfp.3mdeb.com/developers-vpub-0x10-2025/talk/UHJWWW/)
2025-11-17T09:21:25
https://www.reddit.com/r/LocalLLaMA/comments/1ozbswk/bsd_mac_llm_ui_minimal_auditable_llm_front_end/
3mdeb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ozbswk
false
null
t3_1ozbswk
/r/LocalLLaMA/comments/1ozbswk/bsd_mac_llm_ui_minimal_auditable_llm_front_end/
false
false
self
6
null
Local small model for math validation
0
Hi guys, I used to have a GPT Plus license my son used for checking, validating, and explaining the solutions of his mathematics exercises (first academic year). I no longer have that license. For this usage, and since he's smart about using it the right way, I might consider getting a new one. However, I have a laptop with a 4090 (laptop version) + 32 GB RAM and was wondering whether there is a "small" multimodal model I could run locally with this configuration for this problem. Also out of curiosity \^\^ Multimodal because we should be able to upload images/screenshots of the exercises. Note that for this step, I'm fairly sure I could find an OCR solution that turns equations into LaTeX. Thanks for any suggestion! (And once again, mostly curiosity: paying for a license from OpenAI, or any other provider you might recommend, is a possibility.)
2025-11-17T09:15:38
https://www.reddit.com/r/LocalLLaMA/comments/1ozbps0/local_small_model_for_math_validation/
ericlecoutre
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ozbps0
false
null
t3_1ozbps0
/r/LocalLLaMA/comments/1ozbps0/local_small_model_for_math_validation/
false
false
self
0
null
Text-to-image
1
Hey guys, I'm wondering what the lightest text-to-image model is in terms of VRAM. I need the lightest one possible.
2025-11-17T08:55:54
https://www.reddit.com/r/LocalLLaMA/comments/1ozbezx/texttoimage/
Lazy_Mycologist_8214
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ozbezx
false
null
t3_1ozbezx
/r/LocalLLaMA/comments/1ozbezx/texttoimage/
false
false
self
1
null
Nvidia DGX Spark (or alike) vs dual RTX 3090
7
What are your opinions on getting one or the other for professional work? Let's assume you can build an RTX-based machine, or already have one. Does the increase to 128 GB of unified RAM in the Spark justify the price? By professional work I mostly mean using coder models (Qwen-Coder) for coding assistance, or general models like Nemotron, Qwen, DeepSeek etc., but larger than 72B, to work on confidential or internal company data.
2025-11-17T08:54:30
https://www.reddit.com/r/LocalLLaMA/comments/1ozbe8q/nvidia_dgx_spark_or_alike_vs_dual_rtx_3090/
ChopSticksPlease
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ozbe8q
false
null
t3_1ozbe8q
/r/LocalLLaMA/comments/1ozbe8q/nvidia_dgx_spark_or_alike_vs_dual_rtx_3090/
false
false
self
7
null
memory issues with the attention mechanism
0
Hello everyone. I’ve finished my first fine-tuning, and today I wanted to test it, but I’m running into problems with memory allocation (24 GB VRAM). Let me explain the issue. I fine-tuned a LLaMA 3.1 8B Instruct model. The use case is text-to-SQL, which requires putting the database schema in the system prompt. I’m not passing the full schema, only the two most relevant tables + the column descriptions + 15-20 examples for the cardinal columns. This results in a system prompt of about 25k tokens. During inference, this makes the attention score matrices grow to absurd sizes, and the memory is not enough. I’ve already run models of this size with this system prompt using Ollama and never had memory problems. I need to understand what direction to take and what options exist to optimize GPU usage. The first thing I thought of is reducing the byte size of the weights with this configuration: model = AutoModelForCausalLM.from_pretrained( base_model, device_map="auto", torch_dtype=torch.bfloat16, load_in_4bit=True ) This is the first fine-tuning I’ve ever done, so I’d like to understand how this kind of problem is typically handled. Even just some pointers on what to study would be helpful.
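A likely culprit here is an eager attention implementation materializing the full seq×seq score matrix, which llama.cpp (Ollama's backend) never does. A back-of-envelope sketch, assuming Llama 3.1 8B's published shape (32 layers, 32 attention heads, 8 KV heads, head_dim 128) and bf16 activations:

```python
def llama31_8b_inference_memory(seq_len: int, dtype_bytes: int = 2) -> tuple[int, int]:
    """Back-of-envelope memory (bytes) for Llama 3.1 8B at a given prompt length."""
    layers, heads, kv_heads, head_dim = 32, 32, 8, 128
    # KV cache for the whole model: 2 tensors (K and V) per layer, GQA with 8 KV heads.
    kv_cache = 2 * layers * kv_heads * head_dim * seq_len * dtype_bytes
    # Eagerly materialized attention scores for ONE layer: heads x seq x seq.
    scores_one_layer = heads * seq_len * seq_len * dtype_bytes
    return kv_cache, scores_one_layer

kv, scores = llama31_8b_inference_memory(25_000)
print(f"KV cache: {kv / 1e9:.1f} GB, eager scores per layer: {scores / 1e9:.1f} GB")
```

At 25k tokens the KV cache is only ~3.3 GB, but eagerly materialized scores are ~40 GB per layer, which no 24 GB card survives. If that's the cause, passing `attn_implementation="sdpa"` (or `"flash_attention_2"`) to `from_pretrained` avoids building that matrix; also, if I recall correctly, recent Transformers versions prefer `quantization_config=BitsAndBytesConfig(load_in_4bit=True)` over the bare `load_in_4bit=True` kwarg.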
2025-11-17T08:52:51
https://www.reddit.com/r/LocalLLaMA/comments/1ozbdd3/memory_issues_with_the_attention_mechanism/
Juno9419
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ozbdd3
false
null
t3_1ozbdd3
/r/LocalLLaMA/comments/1ozbdd3/memory_issues_with_the_attention_mechanism/
false
false
self
0
null
Vibe coded a llamacpp server launcher
0
This is a Windows batch script that automatically loads your models from a given directory and starts the llama-server. # Features * Automatically loads your gguf models. * Dynamically detects mmproj files for vision models. * Allows configuration for: GPU layers, arguments and more. * Can be run from anywhere on your PC. * Saves your config. # How to start 1. Get the script from here [https://gist.github.com/Iq1pl/2aa339db9e1c9c9bd79ee06c2aff6cb3](https://gist.github.com/Iq1pl/2aa339db9e1c9c9bd79ee06c2aff6cb3). 2. Edit `set "LLAMA_CPP_PATH="` and `set "MODELS_BASE_PATH="` at the top of the script with your own paths. Example: `set "LLAMA_CPP_PATH=C:\user\llama.cpp"` 3. Save the file as Run-llama-server.bat and double-click it to run. 4. Type c to configure the script to your needs. 5. Choose the model by typing its number and start the server. The default address is "http://127.0.0.1:8080".
2025-11-17T08:42:21
https://i.redd.it/5dbem24uyr1g1.png
Iq1pl
i.redd.it
1970-01-01T00:00:00
0
{}
1ozb7vr
false
null
t3_1ozb7vr
/r/LocalLLaMA/comments/1ozb7vr/vibe_coded_a_llamacpp_server_launcher/
false
false
https://a.thumbs.redditm…R-WNX90GGMv8.jpg
0
{'enabled': True, 'images': [{'id': '1ifHDTctQWhQ9-UMdMAtIx-f2qRTz-8u2-QPdvwyWg8', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/5dbem24uyr1g1.png?width=108&crop=smart&auto=webp&s=b0e8a0f365795ebb895eec8dbf286069bc692eeb', 'width': 108}, {'height': 128, 'url': 'https://preview.redd.it/5dbem24uyr1g1.png?width=216&crop=smart&auto=webp&s=a9fa95549722f570facf46cdb8208e3389afac64', 'width': 216}, {'height': 190, 'url': 'https://preview.redd.it/5dbem24uyr1g1.png?width=320&crop=smart&auto=webp&s=6198b50dc9fa4c462d02a687b4888d3192190dc6', 'width': 320}, {'height': 380, 'url': 'https://preview.redd.it/5dbem24uyr1g1.png?width=640&crop=smart&auto=webp&s=9e3d5bc93e5846d5345b0de6be91d8595ce5e954', 'width': 640}, {'height': 570, 'url': 'https://preview.redd.it/5dbem24uyr1g1.png?width=960&crop=smart&auto=webp&s=633516169630652083a96eb90c41dd2190dc9657', 'width': 960}, {'height': 641, 'url': 'https://preview.redd.it/5dbem24uyr1g1.png?width=1080&crop=smart&auto=webp&s=ba21b673b96cb6c6dab05979bb85b2ecaddd8c84', 'width': 1080}], 'source': {'height': 1140, 'url': 'https://preview.redd.it/5dbem24uyr1g1.png?auto=webp&s=d92f9bac30e66f0d97399c83d33d6c8622bf7c09', 'width': 1920}, 'variants': {}}]}
Human-like conversations, bias and token length?
1
Since the beginning of my AI journey a few years ago I have had the idea of recreating myself. Cloning my looks and my voice was easy enough (even if I've only managed to get the voice working in "realtime"), but I have yet to find a way to accurately capture human-like conversations and personality. I've looked at the Mistral and Dolphin models, which historically rank highly for human-like interaction, and while they can be very good they are still lacking "something". Also, since I don't know what data to use and how to fine-tune a model/LoRA(?) on a personality, I'm kinda stuck. What are the best models for human-like conversation today? My dream is to "clone" myself digitally, so I can talk to myself and find out how other people might perceive me 😊 What is it like being my friend or trying to have a normal conversation with me 😬 Maybe even create something my kids can talk to when I'm gone?! Yeah, I know, a little morbid, but as an experiment it would be soooo cool!! Training a model on strong opinions would also be interesting. I'm, for example, an atheist. Models today have no real opinions or bias when it comes to religion and take no direct side. I want to be able to train a model to do that. I imagine all this will easily be possible in the future, of course, but how far have we come NOW? I've tried creating personalities with JSON files for Oobabooga, but they are way too shallow, and the token length of the discussions quickly runs out, lobotomizing the conversation. Any suggestions pointing me in the right direction would be greatly appreciated!
2025-11-17T08:11:01
https://www.reddit.com/r/LocalLLaMA/comments/1ozaqu0/humanlike_conversations_bias_and_token_length/
VirusCharacter
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ozaqu0
false
null
t3_1ozaqu0
/r/LocalLLaMA/comments/1ozaqu0/humanlike_conversations_bias_and_token_length/
false
false
self
1
null
Software recommendations
0
What are some of the best repos/tools people are using to interact with local LLMs (outside of the usual Ollama, LM Studio)? What's your stack? What are some success stories for ways you've managed to integrate it into your daily workflows? What are some exciting projects under development? Let's hear it all!
2025-11-17T07:54:11
https://www.reddit.com/r/LocalLLaMA/comments/1ozahfe/software_recommendations/
1H4rsh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ozahfe
false
null
t3_1ozahfe
/r/LocalLLaMA/comments/1ozahfe/software_recommendations/
false
false
self
0
null
Lm playground
0
I just spent some time making a website/app. It hosts board games and card games for now. It connects to LLMs via LM Studio, Ollama, or an API. It has the rules of each game in the corner, with a log and a chat for the LLMs. What types of models should I test this with before I make it public? Idk if anyone is interested in something like this, but I thought it would be cool, seeing Google's new SIMA 2 play video games.
2025-11-17T07:28:07
https://i.redd.it/n5xlzljyrr1g1.jpeg
AceCustom1
i.redd.it
1970-01-01T00:00:00
0
{}
1oza32g
false
null
t3_1oza32g
/r/LocalLLaMA/comments/1oza32g/lm_playground/
false
false
default
0
{'enabled': True, 'images': [{'id': 'n5xlzljyrr1g1', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/n5xlzljyrr1g1.jpeg?width=108&crop=smart&auto=webp&s=f9b8f87078669fd995dafce1d67bc12836913562', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/n5xlzljyrr1g1.jpeg?width=216&crop=smart&auto=webp&s=3989dc9430c812a58d08ce1219c568204de786da', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/n5xlzljyrr1g1.jpeg?width=320&crop=smart&auto=webp&s=9887855c4728b2ef2b618aebf244664cbbd5c66b', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/n5xlzljyrr1g1.jpeg?width=640&crop=smart&auto=webp&s=c2c4d2aa45ab97b5da83f29d35f08bdf1fb80c24', 'width': 640}, {'height': 1280, 'url': 'https://preview.redd.it/n5xlzljyrr1g1.jpeg?width=960&crop=smart&auto=webp&s=6eca81d778c59eca1f6d4ffd9d0158483f810b26', 'width': 960}, {'height': 1440, 'url': 'https://preview.redd.it/n5xlzljyrr1g1.jpeg?width=1080&crop=smart&auto=webp&s=45c132ebe6690daad2f0fcbfaaf527e747092e01', 'width': 1080}], 'source': {'height': 4032, 'url': 'https://preview.redd.it/n5xlzljyrr1g1.jpeg?auto=webp&s=da88a11fc69826c21a6a754ca8891b6f7be34dce', 'width': 3024}, 'variants': {}}]}
Apple is considering putting miniHBM on iPhones in 2027
142
This news was reported by MacRumors and AppleInsider. If Apple puts mini-HBM on the iPhone, then Macs will also get mini-HBM soon… Crazy bandwidths are coming. I hope HBM comes to Macs before the iPhone!
2025-11-17T07:15:05
https://www.reddit.com/r/LocalLLaMA/comments/1oz9vs3/apple_is_considering_putting_minihbm_on_iphones/
power97992
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oz9vs3
false
null
t3_1oz9vs3
/r/LocalLLaMA/comments/1oz9vs3/apple_is_considering_putting_minihbm_on_iphones/
false
false
self
142
{'enabled': False, 'images': [{'id': '31jbMIT-zHUzs6quUA9j2q29Vo962IDs2BkWQxaNxjU', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/31jbMIT-zHUzs6quUA9j2q29Vo962IDs2BkWQxaNxjU.jpeg?width=108&crop=smart&auto=webp&s=5128b81245d2b02e8071c42828307222eba699f1', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/31jbMIT-zHUzs6quUA9j2q29Vo962IDs2BkWQxaNxjU.jpeg?width=216&crop=smart&auto=webp&s=61cef71fa55c5333ac21458bedf7e92845a2db90', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/31jbMIT-zHUzs6quUA9j2q29Vo962IDs2BkWQxaNxjU.jpeg?width=320&crop=smart&auto=webp&s=15b9eb0f7698fd4f3911f8bc5fb21284f7b64114', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/31jbMIT-zHUzs6quUA9j2q29Vo962IDs2BkWQxaNxjU.jpeg?width=640&crop=smart&auto=webp&s=0e3a2bf426ddaa3d07d1281009eef0cc0d5cc8ee', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/31jbMIT-zHUzs6quUA9j2q29Vo962IDs2BkWQxaNxjU.jpeg?width=960&crop=smart&auto=webp&s=d182a21ca533cb0e624bcf7fe98b4eb3919bde3e', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/31jbMIT-zHUzs6quUA9j2q29Vo962IDs2BkWQxaNxjU.jpeg?width=1080&crop=smart&auto=webp&s=5a4c93c8b61da1b99ca85060fa012bec7543f532', 'width': 1080}], 'source': {'height': 1350, 'url': 'https://external-preview.redd.it/31jbMIT-zHUzs6quUA9j2q29Vo962IDs2BkWQxaNxjU.jpeg?auto=webp&s=82f2bc4375f5e0dce8c8310d5549043c7dfad977', 'width': 2400}, 'variants': {}}]}
[DEV] Looking for AI agent builders to test a privacy-safe ad layer
0
Hey folks — I’m building NativeLayer, a lightweight, privacy-first monetization layer for LLM apps and agents. If you’re working on AI tools and want a real revenue option without tracking users, I’d love your feedback. What it does • Adds context-aware sponsored responses into agent/chat flows • Zero tracking (GDPR/CCPA friendly) • Works with local + cloud models • Simple SDK/API (JS/Python, MCP compatible) For devs • 70% revenue share • <10-minute integration • Great for indie tools, chatbots, multi-agent systems Links Demo: https://demo.nativelayer.ai More info: https://nativelayer.ai Screenshots attached — happy to answer technical questions or help you test it out.
2025-11-17T07:08:21
https://www.reddit.com/gallery/1oz9rwo
Acrobatic-Repair-837
reddit.com
1970-01-01T00:00:00
0
{}
1oz9rwo
false
null
t3_1oz9rwo
/r/LocalLLaMA/comments/1oz9rwo/dev_looking_for_ai_agent_builders_to_test_a/
false
false
https://a.thumbs.redditm…UZCc22jKGf98.jpg
0
null
[DEV] Looking for Local LLaMA builders to help test a privacy-safe ad layer
1
Hey folks — I’m working on NativeLayer, a lightweight, privacy-first monetization layer for LLM apps and local agent setups. If you’re building local models, agentic workflows, custom frontends, or side-projects, and want a way to generate revenue without tracking users, I’d love your feedback. What it does • Injects context-aware sponsored responses into agent/chat flows • No user tracking, no analytics, no fingerprinting • Works with local models + cloud LLMs • Drop-in SDK/API (JS, Python; MCP compatible) Why you might care • Many of us are shipping AI tools with zero monetization options • Traditional ad networks don’t work for stateful chat or local models • Privacy is a must — especially for offline/local-first apps For developers • 70% revenue share • Simple integration (usually <10 min) • Works even if you run everything locally • Perfect for indie tools, agents, and OSS experiments Links Demo: https://demo.nativelayer.ai Info: https://nativelayer.ai Contact: hello@nativelayer.ai Happy to answer implementation questions, show code snippets, or walk through how it plugs into local LLaMA pipelines. Would love testers and honest feedback from this community.
2025-11-17T07:03:18
https://www.reddit.com/r/LocalLLaMA/comments/1oz9p23/dev_looking_for_local_llama_builders_to_help_test/
Acrobatic-Repair-837
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oz9p23
false
null
t3_1oz9p23
/r/LocalLLaMA/comments/1oz9p23/dev_looking_for_local_llama_builders_to_help_test/
false
false
self
1
null
Best opensource model for speech to text and supports streaming
0
Which is the best open-source speech-to-text model that supports streaming via WebSockets and has low latency?
2025-11-17T07:00:15
https://www.reddit.com/r/LocalLLaMA/comments/1oz9n3y/best_opensource_model_for_speech_to_text_and/
Otherwise-Top2335
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oz9n3y
false
null
t3_1oz9n3y
/r/LocalLLaMA/comments/1oz9n3y/best_opensource_model_for_speech_to_text_and/
false
false
self
0
null
Open-World Game to Practice Interview Skills
0
Hi Everybody! I am just writing to share a project that I have been working on for the last few months called Yapper. Yapper is a free, light-hearted, low-stakes way for healthcare students (or anyone who is interested, quite frankly) to practice patient interview skills. The patients' responses are driven by a tiny language model that I trained from scratch, and this app runs completely on-device. Here is a link to my [Github](https://github.com/02developer/yapper) if you are interested. I would appreciate any feedback that you have.
2025-11-17T07:00:05
https://v.redd.it/0iecjuvslr1g1
Apprehensive-Wish735
v.redd.it
1970-01-01T00:00:00
0
{}
1oz9mzs
false
{'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/0iecjuvslr1g1/DASHPlaylist.mpd?a=1765954819%2CNjVmM2M1MzJiN2I0MmEyZGVmODFiYTMyODUwM2M4NDljMDcxYWRjODMwYTRmYzFmY2I5YTA5ZTkyZWJjZjFlZA%3D%3D&v=1&f=sd', 'duration': 79, 'fallback_url': 'https://v.redd.it/0iecjuvslr1g1/CMAF_480.mp4?source=fallback', 'has_audio': False, 'height': 480, 'hls_url': 'https://v.redd.it/0iecjuvslr1g1/HLSPlaylist.m3u8?a=1765954819%2COWIzMDkyNWQ2NjM3ODBhMjljMGNmMDI5MWNkNjJhODk0NzkyOGUzYjg0M2I1MWM1MjlhYTdiYzNhMGJmMjg3YQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/0iecjuvslr1g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 854}}
t3_1oz9mzs
/r/LocalLLaMA/comments/1oz9mzs/openworld_game_to_practice_interview_skills/
false
false
https://external-preview…9812bc2eef52e792
0
{'enabled': False, 'images': [{'id': 'dWF4Mmc5d3NscjFnMaVixxKd7zB_dNqOq_lzz69riEl9YNZ0oAJnjPLWVPtC', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dWF4Mmc5d3NscjFnMaVixxKd7zB_dNqOq_lzz69riEl9YNZ0oAJnjPLWVPtC.png?width=108&crop=smart&format=pjpg&auto=webp&s=6f278fdb87ac250465a87e2cdf9e9d8f0c05eb8a', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/dWF4Mmc5d3NscjFnMaVixxKd7zB_dNqOq_lzz69riEl9YNZ0oAJnjPLWVPtC.png?width=216&crop=smart&format=pjpg&auto=webp&s=6e006e57ca539edab94b973685824eab0b4cc610', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/dWF4Mmc5d3NscjFnMaVixxKd7zB_dNqOq_lzz69riEl9YNZ0oAJnjPLWVPtC.png?width=320&crop=smart&format=pjpg&auto=webp&s=f7c68eaab595f49ba4974ca26d1037d5274c7fd7', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/dWF4Mmc5d3NscjFnMaVixxKd7zB_dNqOq_lzz69riEl9YNZ0oAJnjPLWVPtC.png?width=640&crop=smart&format=pjpg&auto=webp&s=56a225d18e7107a3655dcb9ee328ce2b276a3e63', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/dWF4Mmc5d3NscjFnMaVixxKd7zB_dNqOq_lzz69riEl9YNZ0oAJnjPLWVPtC.png?width=960&crop=smart&format=pjpg&auto=webp&s=8d64fd130c7b0963534a87fb1877c88c1b205347', 'width': 960}], 'source': {'height': 540, 'url': 'https://external-preview.redd.it/dWF4Mmc5d3NscjFnMaVixxKd7zB_dNqOq_lzz69riEl9YNZ0oAJnjPLWVPtC.png?format=pjpg&auto=webp&s=c29de1c16fd446a04592049d29f6047248604f1d', 'width': 960}, 'variants': {}}]}
Options for working with sensitive data?
2
Hey all, It recently came up at work that we have to be careful about what type of data we put into online AI models, which is totally fair. I guess my question is: for what I assume are everyday AI tasks, like gathering insights from documents, calculations and programming, text generation, and other simple tasks/automations, what is the absolute minimum parameter count one can get away with on a local model, while keeping sensitive data purely local (if that is possible)? I'm trying to get an idea of what my hardware budget should be. My current machine can only comfortably run very small models and I'm broke asf lol.
2025-11-17T06:58:55
https://www.reddit.com/r/LocalLLaMA/comments/1oz9m9z/options_for_working_with_sensitive_data/
PsychologicalCup1672
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oz9m9z
false
null
t3_1oz9m9z
/r/LocalLLaMA/comments/1oz9m9z/options_for_working_with_sensitive_data/
false
false
self
2
null
Claude Sonnet 4.5 is still the best for me
0
For the last couple of months I had a company-provided Claude API key. I was using it with Cursor and it was amazing. It was very proactive and very deliberate in performing tasks for me. If I told it to update a function, clean up a file, etc., it would proactively check any references and make sure everything still worked. Well, my company turned off our Claude keys in favour of just using Copilot. I'm not a fan, so I decided to keep using Cursor, but with more affordable models. I'm trying out Kimi K2 Thinking and also GLM 4.6, but I'm not too happy with them. When they make big edits across a lot of files, there is just so much wrong that I have to feed them errors one by one. With Sonnet it was much more autopilot. At first I thought it was just Kilo Code, but it wasn't, because I switched back to Cursor with GLM 4.6 and had the same problem. Does anyone else have the same experience? If not, is there a certain way you are using these cheaper models? I'd also appreciate it if you'd drop the models you're using, if they're not any of these three!
2025-11-17T06:40:22
https://www.reddit.com/r/LocalLLaMA/comments/1oz9bqf/claude_sonnet_45_is_still_the_best_for_me/
OGXirvin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oz9bqf
false
null
t3_1oz9bqf
/r/LocalLLaMA/comments/1oz9bqf/claude_sonnet_45_is_still_the_best_for_me/
false
false
self
0
null
Models for fiction stories and creative writing?
1
Hi all, Out of all models that are released this year, what models do you recommend for long form story writing? Has any model come close to Midnight Miqu? preferably in the 70B and above range. Thanks all!
2025-11-17T06:31:35
https://www.reddit.com/r/LocalLLaMA/comments/1oz96l2/models_for_fiction_stories_and_creative_writing/
morbidSuplex
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oz96l2
false
null
t3_1oz96l2
/r/LocalLLaMA/comments/1oz96l2/models_for_fiction_stories_and_creative_writing/
false
false
self
1
null
"AI safety" will be the end of us all
4
2025-11-17T06:30:51
https://i.redd.it/n8ltqm5nhr1g1.png
zhambe
i.redd.it
1970-01-01T00:00:00
0
{}
1oz964x
false
null
t3_1oz964x
/r/LocalLLaMA/comments/1oz964x/ai_safety_will_be_the_end_of_us_all/
false
false
default
4
{'enabled': True, 'images': [{'id': 'n8ltqm5nhr1g1', 'resolutions': [{'height': 24, 'url': 'https://preview.redd.it/n8ltqm5nhr1g1.png?width=108&crop=smart&auto=webp&s=29c693747ada4b3a01fa13e205017140bd0c74f1', 'width': 108}, {'height': 48, 'url': 'https://preview.redd.it/n8ltqm5nhr1g1.png?width=216&crop=smart&auto=webp&s=14471e276b724759e5850b816581bbc629b3f68c', 'width': 216}, {'height': 72, 'url': 'https://preview.redd.it/n8ltqm5nhr1g1.png?width=320&crop=smart&auto=webp&s=8b851bc847e36fc6d47727d3fc7058b60cd6345d', 'width': 320}, {'height': 144, 'url': 'https://preview.redd.it/n8ltqm5nhr1g1.png?width=640&crop=smart&auto=webp&s=63a0535ac99e2c219c80605a66421cfde6968e3d', 'width': 640}, {'height': 217, 'url': 'https://preview.redd.it/n8ltqm5nhr1g1.png?width=960&crop=smart&auto=webp&s=bb5f3234965ed657f3ebe78dcd44a07ee4c6bf8e', 'width': 960}, {'height': 244, 'url': 'https://preview.redd.it/n8ltqm5nhr1g1.png?width=1080&crop=smart&auto=webp&s=7dd830b46527ba6825327f6e4f3ae19881448494', 'width': 1080}], 'source': {'height': 550, 'url': 'https://preview.redd.it/n8ltqm5nhr1g1.png?auto=webp&s=08b4f286e620f2e9895defd9868345b4547b5fea', 'width': 2430}, 'variants': {}}]}
RAG Paper 25.11.14
9
1. [Privacy Challenges and Solutions in Retrieval-Augmented Generation-Enhanced LLMs for Healthcare Chatbots: A Review of Applications, Risks, and Future Directions](http://arxiv.org/abs/2511.11347v1) 2. [CLARITY: Contextual Linguistic Adaptation and Accent Retrieval for Dual-Bias Mitigation in Text-to-Speech Generation](http://arxiv.org/abs/2511.11104v1) 3. [Multimodal Peer Review Simulation with Actionable To-Do Recommendations for Community-Aware Manuscript Revisions](http://arxiv.org/abs/2511.10902v1) 4. [Expert-Guided Prompting and Retrieval-Augmented Generation for Emergency Medical Service Question Answering](http://arxiv.org/abs/2511.10900v1) 5. [ICX360: In-Context eXplainability 360 Toolkit](http://arxiv.org/abs/2511.10879v1) **Collected by OpenBMB, transferred by** [**RagView.ai**](https://www.ragview.ai/) **/** [**github/RagView**](https://github.com/RagView/RagView) **.**
2025-11-17T06:29:20
https://www.reddit.com/r/LocalLLaMA/comments/1oz9563/rag_paper_251114/
Cheryl_Apple
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oz9563
false
null
t3_1oz9563
/r/LocalLLaMA/comments/1oz9563/rag_paper_251114/
false
false
self
9
null
PyTorch 2.10.0a0 w/ Blackwell (sm_120) Support — Patched & Packaged for One-Command Install
14
If you’ve been frustrated with PyTorch not supporting **sm\_120** yet on the RTX 5080/5090 — I finally cracked it. I reverse-engineered the missing pieces, patched the arch detection, fixed dispatch fallbacks, and packaged everything into two wheels: pip install rtx-stone # Windows pip install stone-linux # Linux **What you get:** * True **sm\_120** acceleration * Full kernel support (no fallback to sm\_89) * Matrix multiply benchmarks hitting expected TFLOPS * Compatible with PyTorch 2.10 APIs * Local LLMs now work without hacks * Zero config required This is meant as a temporary bridge until official support lands in nightly. If you’re running a 5080 or 5090 and hit unsupported arch errors — this fixes it. Feedback, benchmarks, and testing are very welcome.
2025-11-17T06:16:14
https://www.reddit.com/r/LocalLLaMA/comments/1oz8x9i/pytorch_2100a0_w_blackwell_sm_120_support_patched/
Least-Barracuda-2793
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oz8x9i
false
null
t3_1oz8x9i
/r/LocalLLaMA/comments/1oz8x9i/pytorch_2100a0_w_blackwell_sm_120_support_patched/
false
false
self
14
null
TTS model for 4GB VRAM and 16 GB RAM ?
13
Should I try Index TTS with these specs? Also, could anyone give me tips on installing open models? My laptop is also used for my college work.
2025-11-17T05:48:30
https://www.reddit.com/r/LocalLLaMA/comments/1oz8fu5/tts_model_for_4gb_vram_and_16_gb_ram/
ProNoostr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oz8fu5
false
null
t3_1oz8fu5
/r/LocalLLaMA/comments/1oz8fu5/tts_model_for_4gb_vram_and_16_gb_ram/
false
false
self
13
null
Local, bring your own TTS API, document reader web app (EPUB/PDF/TXT/MD)
31
Sharing my latest release of **OpenReader WebUI v1.0.0**, an open-source, local-first text-to-speech document reader and audiobook exporter. There are many new features and improvements. **What is OpenReader WebUI?** * A Next.js web app for reading and listening to EPUB, PDF, TXT, Markdown, and DOCX files. * Supports multiple TTS providers: OpenAI, Deepinfra, and self-hosted OpenAI-compatible APIs (like Kokoro-FastAPI, Orpheus-FastAPI). * Local-first: All your docs and settings are stored in-browser (IndexedDB/Dexie), with optional server-side doc storage. * Audiobook export: Generate and download audiobooks (m4b/mp3) with chapter metadata, using ffmpeg. **What’s New in v1.0.0?** * 🧠 **Smart Sentence-Aware Narration:** Smoother TTS by merging sentences that span pages/chapters (no more awkward pauses at page breaks!). * 🎧 **Reliable Audiobook Export:** Export audiobooks per-chapter/page, with resume & regeneration support. Download as m4b or mp3. * 🚀 **Optimized TTS Pipeline:** In-memory LRU audio cache, ETag-aware responses, and in-flight request de-duplication for snappy playback. * 📄 **Modernized PDF Highlighting:** Real-time, overlay-based text highlighting via Web Worker for responsive read-along. * 💾 **Dexie-backed Local Storage:** All docs and config stored locally, with live queries and server sync. * 🗣️ **Multi-Voice & Provider Support:** Deepinfra, OpenAI, Kokoro-FastAPI, Orpheus-FastAPI, and custom endpoints with `/v1/audio/voices`. **Why LocalLlama?** * You can self-host the TTS backend (Kokoro/Orpheus FastAPI) and run everything locally—no cloud required. * I made a post here around a year ago now, first showing of the early versions. About a year later and many things have been added, fixed, or improved. 
**Get Started:** * View a less-featured demo @ [https://openreader.richardr.dev](https://openreader.richardr.dev) * Free access to the Kokoro model on Deepinfra for a time * The demo is not full-featured * Install the full version: GitHub – OpenReader-WebUI **Would love your feedback, feature requests, or contributions!** Let me know what you think!
2025-11-17T05:15:20
https://v.redd.it/6wkde35m2r1g1
richardr1126
/r/LocalLLaMA/comments/1oz7upb/local_bring_your_own_tts_api_document_reader_web/
1970-01-01T00:00:00
0
{}
1oz7upb
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/6wkde35m2r1g1/DASHPlaylist.mpd?a=1766078126%2CNmZkYjE1YWY1MDhlNzQzNDUxMzhjZTA3ODM0Y2Q5ZWRiZmFhMGVjNmVkNTA0YTZmMTVkZmMxZjBhMzk0NGJiOA%3D%3D&v=1&f=sd', 'duration': 82, 'fallback_url': 'https://v.redd.it/6wkde35m2r1g1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/6wkde35m2r1g1/HLSPlaylist.m3u8?a=1766078126%2CYTQyNmRmNmJmZmU4ZWFiNzNmOWExZTQxNjNlOWY1MzgzYjI4N2IwYzExN2MyZTc3MzJhY2I2ZjI0NTE1NDkwNA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/6wkde35m2r1g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1612}}
t3_1oz7upb
/r/LocalLLaMA/comments/1oz7upb/local_bring_your_own_tts_api_document_reader_web/
false
false
https://external-preview…e4dfd34b52eb46d4
31
{'enabled': False, 'images': [{'id': 'c21hODg0NW0ycjFnMX884E5sIxJUdNnmG-19TVvOw4dMAlw7RXKXbhwvWlzZ', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/c21hODg0NW0ycjFnMX884E5sIxJUdNnmG-19TVvOw4dMAlw7RXKXbhwvWlzZ.png?width=108&crop=smart&format=pjpg&auto=webp&s=c9b8f26dcebb36cb000139e00b76e4a6bb69800f', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/c21hODg0NW0ycjFnMX884E5sIxJUdNnmG-19TVvOw4dMAlw7RXKXbhwvWlzZ.png?width=216&crop=smart&format=pjpg&auto=webp&s=5815c0798152ae6865a5a7768a1078569eaa0bc4', 'width': 216}, {'height': 214, 'url': 'https://external-preview.redd.it/c21hODg0NW0ycjFnMX884E5sIxJUdNnmG-19TVvOw4dMAlw7RXKXbhwvWlzZ.png?width=320&crop=smart&format=pjpg&auto=webp&s=3230f21c5a22c9ecaf732b7fc2fa541da228edf9', 'width': 320}, {'height': 428, 'url': 'https://external-preview.redd.it/c21hODg0NW0ycjFnMX884E5sIxJUdNnmG-19TVvOw4dMAlw7RXKXbhwvWlzZ.png?width=640&crop=smart&format=pjpg&auto=webp&s=be0dce286afdb5803fbd530358d492bad63e0b07', 'width': 640}, {'height': 642, 'url': 'https://external-preview.redd.it/c21hODg0NW0ycjFnMX884E5sIxJUdNnmG-19TVvOw4dMAlw7RXKXbhwvWlzZ.png?width=960&crop=smart&format=pjpg&auto=webp&s=e9e092122afcb759b59a76e1cbdf294d4a78e0cd', 'width': 960}, {'height': 723, 'url': 'https://external-preview.redd.it/c21hODg0NW0ycjFnMX884E5sIxJUdNnmG-19TVvOw4dMAlw7RXKXbhwvWlzZ.png?width=1080&crop=smart&format=pjpg&auto=webp&s=cb5d8b1a7f5860ce62cda0202bd9015909c6f95b', 'width': 1080}], 'source': {'height': 1882, 'url': 'https://external-preview.redd.it/c21hODg0NW0ycjFnMX884E5sIxJUdNnmG-19TVvOw4dMAlw7RXKXbhwvWlzZ.png?format=pjpg&auto=webp&s=1d4d85c69e8b5dce29b78df4d7f0f2c17ffdf7c8', 'width': 2810}, 'variants': {}}]}
PC with NPU, how do I use it?
1
[removed]
2025-11-17T04:54:39
https://www.reddit.com/r/LocalLLaMA/comments/1oz7guq/pc_with_npu_how_do_i_use_it/
plsLetMeIn41
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oz7guq
false
null
t3_1oz7guq
/r/LocalLLaMA/comments/1oz7guq/pc_with_npu_how_do_i_use_it/
false
false
self
1
null
On-device 100-session AI memory vault—xAI validating. Local LLM integration ideas?
0
NuraVault: Python/Kivy build—150 blocks FIFO, export 100%, wrong-key lockdown. Pro moods cue recall without resets. u/grok probed/validated: "Compelling... NDA path open." Demo + proofs: [https://youtu.be/mgFcCrFrbr0](https://youtu.be/mgFcCrFrbr0) TXID/SHA256 on X thread. Local LLM fit? Thoughts? #nuravault
2025-11-17T04:37:47
https://www.reddit.com/r/LocalLLaMA/comments/1oz75ki/ondevice_100session_ai_memory_vaultxai_validating/
Commercial_Dig_4574
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oz75ki
false
null
t3_1oz75ki
/r/LocalLLaMA/comments/1oz75ki/ondevice_100session_ai_memory_vaultxai_validating/
false
false
self
0
{'enabled': False, 'images': [{'id': 'kYzkiq1gLxr4a8X_3BqVFGr0Yy0iapW72hdy_fHRAHw', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/kYzkiq1gLxr4a8X_3BqVFGr0Yy0iapW72hdy_fHRAHw.jpeg?width=108&crop=smart&auto=webp&s=8507e3c5c2fa42f89dc1b6c619814739a559e38b', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/kYzkiq1gLxr4a8X_3BqVFGr0Yy0iapW72hdy_fHRAHw.jpeg?width=216&crop=smart&auto=webp&s=e3297ee4520be4e98b1877becb312345f3ef020c', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/kYzkiq1gLxr4a8X_3BqVFGr0Yy0iapW72hdy_fHRAHw.jpeg?width=320&crop=smart&auto=webp&s=9242b49d2c03d490aad168daae38d6b78ddf26ad', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/kYzkiq1gLxr4a8X_3BqVFGr0Yy0iapW72hdy_fHRAHw.jpeg?auto=webp&s=cdaac23aecc70d66176fc83c16e56393d024977e', 'width': 480}, 'variants': {}}]}
All 20,000 Epstein Files in text format available for download.
49
[https://huggingface.co/datasets/tensonaut/Epstein\_Files\_20K](https://huggingface.co/datasets/tensonaut/Epstein_Files_20K) Used Google Tesseract OCR to convert the 20K images to txt. Total file size \~100MB
2025-11-17T04:32:30
https://www.reddit.com/r/LocalLLaMA/comments/1oz71vh/all_20000_epstein_files_in_text_format_available/
tensonaut
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oz71vh
false
null
t3_1oz71vh
/r/LocalLLaMA/comments/1oz71vh/all_20000_epstein_files_in_text_format_available/
false
false
self
49
null
GPT-5-pro is likely a universal agentic gateway / Large Agentic Model
0
This is a continuation of this [discussion](https://www.reddit.com/r/LocalLLaMA/comments/1oyulun/the_good_medium_bad_news_about_a_universal/) of universal agentic gateways (UAG). Large Agentic Model (LAM) might be a better name than UAG. One indicator gpt-5-pro is a LAM is no cache read price on [OR](https://openrouter.ai/openai/gpt-5-pro) for the gpt-5-pro api, which is what I said would be tricky to do for this. Also, many posts like this - [https://natesnewsletter.substack.com/p/gpt-5-pro-the-first-ai-thats-smarter](https://natesnewsletter.substack.com/p/gpt-5-pro-the-first-ai-thats-smarter) This is relevant to r/LocalLLaMA as there might be a way to learn from gpt-5-pro and get frontier+ results with a LAM for open weight models, assuming they are diverse enough. Even with smaller ones: [https://arxiv.org/pdf/2506.02153](https://arxiv.org/pdf/2506.02153) questions about the LAM/UAG: * What is possible with many smaller gpus versus one expensive gpu? * will intelligence scale usefully as you add more gpus? * how much control would you have on the shape of its intelligence and personality? * how should we be thinking about utilization efficiency and implications on local deployment? * can you viably swap out/in models in local deployments for better performance? How to keep from thrashing For example, assuming you use something like [https://github.com/lm-sys/RouteLLM](https://github.com/lm-sys/RouteLLM) you might simply alter routing to manage how prompts use compute to configure the shape of its intellect. This all might result in poor utilization however because of multiple model deployment, though swapping is an interesting possibility. It's also interesting how local model thinking currently pressures one into single model deployment because of utilization efficiency, which could be causing folks to miss out on superior architectures.
2025-11-17T04:10:56
https://www.reddit.com/r/LocalLLaMA/comments/1oz6msr/gpt5pro_is_likely_a_universal_agentic_gateway/
kaggleqrdl
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oz6msr
false
null
t3_1oz6msr
/r/LocalLLaMA/comments/1oz6msr/gpt5pro_is_likely_a_universal_agentic_gateway/
false
false
self
0
{'enabled': False, 'images': [{'id': '83m1J63oDlekoJLClnkISpnw3b8Dvj3lorVy0lPpIso', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/83m1J63oDlekoJLClnkISpnw3b8Dvj3lorVy0lPpIso.png?width=108&crop=smart&auto=webp&s=2fe89a7ff665f49166de258ac9bf71a59117a897', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/83m1J63oDlekoJLClnkISpnw3b8Dvj3lorVy0lPpIso.png?width=216&crop=smart&auto=webp&s=6be55e3cd1c77ac0ce8fb559549d5c52523b2b85', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/83m1J63oDlekoJLClnkISpnw3b8Dvj3lorVy0lPpIso.png?width=320&crop=smart&auto=webp&s=96e18fd1b2845344d8ddcb804c4aed46c0a1f9d3', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/83m1J63oDlekoJLClnkISpnw3b8Dvj3lorVy0lPpIso.png?width=640&crop=smart&auto=webp&s=0c91b4d0fe213b193eaa45dacda3e53bdd28db7a', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/83m1J63oDlekoJLClnkISpnw3b8Dvj3lorVy0lPpIso.png?width=960&crop=smart&auto=webp&s=9ec06f086f103b9c45659aeb11d600a893b8d599', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/83m1J63oDlekoJLClnkISpnw3b8Dvj3lorVy0lPpIso.png?width=1080&crop=smart&auto=webp&s=7a33712550e68642a44e704c3480ba2d537aa287', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/83m1J63oDlekoJLClnkISpnw3b8Dvj3lorVy0lPpIso.png?auto=webp&s=1cc54e1512c7435e8aa63981e17b63748a49c686', 'width': 1200}, 'variants': {}}]}
What are you using your local models for ?
5
Are these personal projects or for a product use case? I have an M3 Ultra Mac Studio and I'm looking for some inspiration, and also to better understand how folks are using their models. Currently, I am using Qwen3 to do some automated trading. Would love to hear others' use cases.
2025-11-17T04:07:13
https://www.reddit.com/r/LocalLLaMA/comments/1oz6k5j/what_are_you_using_your_local_models_for/
alvamsi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oz6k5j
false
null
t3_1oz6k5j
/r/LocalLLaMA/comments/1oz6k5j/what_are_you_using_your_local_models_for/
false
false
self
5
null
Turns out LLMs can be consistent!
3
A paper by Thinking Machines: [https://thinkingmachines.ai/blog/defeating-nondeterminism-in-llm-inference/](https://thinkingmachines.ai/blog/defeating-nondeterminism-in-llm-inference/)
2025-11-17T03:46:09
https://thinkingmachines.ai/blog/defeating-nondeterminism-in-llm-inference/
DarkEngine774
thinkingmachines.ai
1970-01-01T00:00:00
0
{}
1oz65jy
false
null
t3_1oz65jy
/r/LocalLLaMA/comments/1oz65jy/turns_out_llms_can_be_consistent/
false
false
default
3
{'enabled': False, 'images': [{'id': 'F9ylJGn-QDaHa8ucxpOeoqHmN6I_k8mg-ag90rKzmfs', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/F9ylJGn-QDaHa8ucxpOeoqHmN6I_k8mg-ag90rKzmfs.png?width=108&crop=smart&auto=webp&s=900e6a4301324336d845619d97bf1ef42dea85c3', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/F9ylJGn-QDaHa8ucxpOeoqHmN6I_k8mg-ag90rKzmfs.png?width=216&crop=smart&auto=webp&s=85b0a2d12deb658a94c364cb21b3018f85b5e563', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/F9ylJGn-QDaHa8ucxpOeoqHmN6I_k8mg-ag90rKzmfs.png?width=320&crop=smart&auto=webp&s=a12ff2780c780dfe4932ad9522faca53b7367566', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/F9ylJGn-QDaHa8ucxpOeoqHmN6I_k8mg-ag90rKzmfs.png?width=640&crop=smart&auto=webp&s=bce39d7421406afc0bcb05a17a6f435485361492', 'width': 640}, {'height': 503, 'url': 'https://external-preview.redd.it/F9ylJGn-QDaHa8ucxpOeoqHmN6I_k8mg-ag90rKzmfs.png?width=960&crop=smart&auto=webp&s=3b32847a89ff66891128f0ab89099618a1384894', 'width': 960}, {'height': 566, 'url': 'https://external-preview.redd.it/F9ylJGn-QDaHa8ucxpOeoqHmN6I_k8mg-ag90rKzmfs.png?width=1080&crop=smart&auto=webp&s=675d2bfb3ba97a600fb7d8deb1dececb8ea13615', 'width': 1080}], 'source': {'height': 858, 'url': 'https://external-preview.redd.it/F9ylJGn-QDaHa8ucxpOeoqHmN6I_k8mg-ag90rKzmfs.png?auto=webp&s=2848da40feb9bf9b08a7e0c5566da44305cd059d', 'width': 1635}, 'variants': {}}]}
JSON to TOON
0
Hey y'all, My GitHub repo below has a comprehensive guide for JSON to TOON conversion. [https://github.com/meetrais/JSON-to-TOON](https://github.com/meetrais/JSON-to-TOON)
2025-11-17T03:35:02
https://www.reddit.com/r/LocalLLaMA/comments/1oz5xld/json_to_toon/
meetrais
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oz5xld
false
null
t3_1oz5xld
/r/LocalLLaMA/comments/1oz5xld/json_to_toon/
false
false
self
0
{'enabled': False, 'images': [{'id': 'nBw_g2_JYm5vJz31BAFZpzhJDcVTkzx2g2zhq8Ji3lc', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/nBw_g2_JYm5vJz31BAFZpzhJDcVTkzx2g2zhq8Ji3lc.png?width=108&crop=smart&auto=webp&s=c3846db3f6d9a1d43941201c83f32c9959f8468e', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/nBw_g2_JYm5vJz31BAFZpzhJDcVTkzx2g2zhq8Ji3lc.png?width=216&crop=smart&auto=webp&s=5197c68431330f7f9f3e6f7e11c48ba53a3cde20', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/nBw_g2_JYm5vJz31BAFZpzhJDcVTkzx2g2zhq8Ji3lc.png?width=320&crop=smart&auto=webp&s=ede82406e094f04ba3e5d3ac3ca6161f685c0675', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/nBw_g2_JYm5vJz31BAFZpzhJDcVTkzx2g2zhq8Ji3lc.png?width=640&crop=smart&auto=webp&s=16afb03e467cbb293d04f726e4958f8d529f5fc4', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/nBw_g2_JYm5vJz31BAFZpzhJDcVTkzx2g2zhq8Ji3lc.png?width=960&crop=smart&auto=webp&s=1f1eac8bced917e06962a54e35c1caae3b965b65', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/nBw_g2_JYm5vJz31BAFZpzhJDcVTkzx2g2zhq8Ji3lc.png?auto=webp&s=4614f696f73a58823d065dba1299ae12211b562d', 'width': 1024}, 'variants': {}}]}
ChatGPT understands its creator
443
Even ChatGPT knows "Open Source" seems unlikely when it comes to OpenAI
2025-11-17T03:27:05
https://i.redd.it/wkig4aaykq1g1.png
mtmttuan
i.redd.it
1970-01-01T00:00:00
0
{}
1oz5rsw
false
null
t3_1oz5rsw
/r/LocalLLaMA/comments/1oz5rsw/chatgpt_understands_its_creator/
false
false
https://b.thumbs.redditm…et0446EP0D0M.jpg
443
{'enabled': True, 'images': [{'id': 'ebznEkU1pp8Kvlzim1DKyIFnW1xzZKsQLRE0LnWbtbo', 'resolutions': [{'height': 126, 'url': 'https://preview.redd.it/wkig4aaykq1g1.png?width=108&crop=smart&auto=webp&s=808e480d50567f206cd92e8ae8baf4cf85e7cc0c', 'width': 108}, {'height': 253, 'url': 'https://preview.redd.it/wkig4aaykq1g1.png?width=216&crop=smart&auto=webp&s=20ab6eebbca8803640c2c8e108b9d825c6297065', 'width': 216}, {'height': 375, 'url': 'https://preview.redd.it/wkig4aaykq1g1.png?width=320&crop=smart&auto=webp&s=3b12bd9e65204d4231dda523ee5d070e5e906c6b', 'width': 320}], 'source': {'height': 578, 'url': 'https://preview.redd.it/wkig4aaykq1g1.png?auto=webp&s=53face6fa2c1f90ddab013831346ee5b97551610', 'width': 493}, 'variants': {}}]}
looking for a technical cofounder, building something big
0
I'm working on an AI product. This isn't productivity, automation, SaaS, or another wrapper - it's emotional, personal, and has global scale baked in. I've already validated demand through real-world interviews, collected early data, and refined the core concept and direction, and built a framework around it. Now I need a technical partner who can build fast, think independently, and wants equity > salary. If you're full stack, ML/LLM, or deep in AI; tired of building forgettable products; hungry enough to start before it's "safe"; someone who would rather create than talk about creating - DM me and we'll talk about details privately, not here.
2025-11-17T03:19:27
https://www.reddit.com/r/LocalLLaMA/comments/1oz5m5d/looking_for_a_technical_cofounder_building/
GovernmentLivid9482
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oz5m5d
false
null
t3_1oz5m5d
/r/LocalLLaMA/comments/1oz5m5d/looking_for_a_technical_cofounder_building/
false
false
self
0
null
PCIE Bifurcation - More than 4 GPUs on a consumer motherboard
5
Has anyone been able to get a consumer motherboard to connect to more than 4 GPUs? I have an Intel Core i7-14700F on an ASUS Prime Z790-P that has 3 PCIe Gen4 x16 slots running at x4 via the chipset and 1 PCIe Gen4 x16 slot running at x16. I have 4 Radeon 7900 XTX GPUs connected to the 4 existing x16 slots via short risers and the system is stable. The BIOS supports x8x8 bifurcation of the primary x16 slot, but even if I use expensive short riser cables, I cannot get the system to boot or even POST when there are 5 GPUs attached. I have tried not bifurcating and just adding an M.2-to-PCIe adapter, and tried different bifurcation cards and boards, but nothing. I have also tried an ASRock AM5 motherboard that supports x4x4x4x4 bifurcation, but again, once I get to 3 or 4 GPUs, boot times grow very long, and once I get to 5 GPUs, the system can no longer POST. There's nothing in the motherboard documentation that says I cannot do it, as I have checked for conflicts and shared lanes, and I don't want to randomly buy more motherboards just to check. I'm wondering if extensive PCIe bifurcation is technically supported for GPUs but not actually supported in practice.
2025-11-17T02:27:59
https://www.reddit.com/r/LocalLLaMA/comments/1oz4ixs/pcie_bifurcation_more_than_4_gpus_on_a_consumer/
Beautiful_Trust_8151
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oz4ixs
false
null
t3_1oz4ixs
/r/LocalLLaMA/comments/1oz4ixs/pcie_bifurcation_more_than_4_gpus_on_a_consumer/
false
false
self
5
null
Prune vs Quantize
7
I'm looking at models around 100b. I noticed that a bunch of pruned models are being released. Has anyone tested how these perform against smaller quantizations? For example, I'm curious which of these might perform better given that they are around the same size: [MiniMax-M2-THRIFT-i1-GGUF:Q4\_K\_M](https://huggingface.co/mradermacher/MiniMax-M2-THRIFT-i1-GGUF) (pruned 25% Q4) [MiniMax-M2-GGUF:Q3\_K\_XL](https://huggingface.co/unsloth/MiniMax-M2-GGUF) (original Q3) Or even: [GLM-4.6-REAP-218B-A32B-i1-GGUF:Q3\_K\_M](https://huggingface.co/mradermacher/GLM-4.6-REAP-218B-A32B-i1-GGUF) (pruned 40% Q3) [GLM-4.5-Air-GGUF:Q6\_K\_XL](https://huggingface.co/unsloth/GLM-4.5-Air-GGUF) (distilled Q6) They are all around 100gb so I'm curious how pruning + quantization might affect how they perform...
2025-11-17T02:19:52
https://www.reddit.com/r/LocalLLaMA/comments/1oz4cr1/prune_vs_quantize/
Zc5Gwu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oz4cr1
false
null
t3_1oz4cr1
/r/LocalLLaMA/comments/1oz4cr1/prune_vs_quantize/
false
false
self
7
{'enabled': False, 'images': [{'id': 'yr9tmojzzRZF6njbmVlpOOkV7VkQHnl1KJL4J_Q_zmI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/yr9tmojzzRZF6njbmVlpOOkV7VkQHnl1KJL4J_Q_zmI.png?width=108&crop=smart&auto=webp&s=642933c4d33d2285cf9528c7bbfe9b629dd1cf8a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/yr9tmojzzRZF6njbmVlpOOkV7VkQHnl1KJL4J_Q_zmI.png?width=216&crop=smart&auto=webp&s=365ffb150fcfa728dfeee7fcbdde4ded4bdbf60c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/yr9tmojzzRZF6njbmVlpOOkV7VkQHnl1KJL4J_Q_zmI.png?width=320&crop=smart&auto=webp&s=10b91397202910debcbdabe3f0da9ab68e21a266', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/yr9tmojzzRZF6njbmVlpOOkV7VkQHnl1KJL4J_Q_zmI.png?width=640&crop=smart&auto=webp&s=8a0d0bc5b75ef158243bc2668a6bed816a09371f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/yr9tmojzzRZF6njbmVlpOOkV7VkQHnl1KJL4J_Q_zmI.png?width=960&crop=smart&auto=webp&s=8fafd988452d64bcbf0db7b78dbcaa61e9caa60e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/yr9tmojzzRZF6njbmVlpOOkV7VkQHnl1KJL4J_Q_zmI.png?width=1080&crop=smart&auto=webp&s=b144334c3ece163a216f336f492be58953938b66', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/yr9tmojzzRZF6njbmVlpOOkV7VkQHnl1KJL4J_Q_zmI.png?auto=webp&s=b513f3a82a9c6d9057a2f700edde6d2f66e003a8', 'width': 1200}, 'variants': {}}]}
How I cut my chat's perceived Time-to-First-Token by 50% by adding a cheaper GPU (Llama-Swap + Persistent Models)
38
I just wanted to share something that I've been noticing and experiencing more and more as models get bigger and systems get more complicated for local AI. Because we enthusiasts do not have the large quantities of pooled vRAM that corporations do, or the armies of developers to build things, we piece the bricks together where we can and make do to recreate the OpenAIs or the Geminis of the world. Tools like OWUI, LibreChat, AnythingLLM, and home-grown projects give us the end-user front end. Embedding models, task models, routers, etc. all help do tasks and leverage the right model for the job - you don't need GPT-OSS:120B to be the OWUI task model for creating chat titles and internet search queries. Could it do it? Sure, but at the price of bigger-GPU performance cycles. Cue the auxiliary model card, with the power of Llama-Swap. I, like many others, have been frustrated with the way Ollama has been going lately - yes, it's easy, but it seems like they were trying to shift their focus to a paid service and their cloud stuff. So I dove into the Llama-Swap ecosystem with my 2 RTX 3090s and RTX 3060, with OpenWebUI and a small M1 Mac mini with 16GB for some "auxiliary" models. Llama-swap + llama.cpp gave me the ability to unlock some unrealised performance that was just sitting there, hiding behind overhead and unoptimised code - my GPT-OSS:120B performance went from 30 tokens/s to almost 60 with just some proper CPU MoE offloading. GPT-OSS:20B went from 130 to 175+. Llama-swap allowed me to swap just in time like Ollama. Best of both worlds - I wasn't really using the 3060 for anything, maybe some help with the big models like MiniMax-M2 and GLM stuff. The Mac mini was helping a little bit by housing my embedding models that are used by RAG, document uploads, the Adaptive Memory plugin, and the task model (Qwen3 4B Instruct) that OWUI uses for web search query generation, chat title generation, etc. It was...fine.
The Mac mini has 16GB of RAM and the models were small, but the Mac mini has only about 65GB/s of memory bandwidth. Then I started looking more into the Llama-Swap documentation - at the very bottom there's a section called Hooks https://preview.redd.it/9o63mfc40q1g1.png?width=982&format=png&auto=webp&s=55d877bf71181ef7f62ae2be3fb24d6b15454bb1 That little section, paired with the "forever" group configuration, basically says that these models are ALWAYS going to be loaded (no unload) AND they ALWAYS run on start-up. My configuration has the 2 embedding models loaded onto the 3060 and the Qwen3-4B Instruct model for tasks on the 5060 Ti, ready to go. Every chat request I send touches at least one of these models, because of Adaptive Memory searching for things or generating queries, or the initial request to name the chat in OWUI. Every request would normally have to load the model and then unload it - assuming the Mac had room - else memory swap. Now, because the auxiliary models are dedicated and running all the time, I shaved almost 50% off time-to-first-token on every chat request - from 15 seconds with just-in-time model loading down to 7 seconds. Adding that 5060 Ti and configuring it with the 3060 gave me more perceived performance than buying bigger GPUs would have, because it gave the bigger GPUs some headroom and support. I just wanted to share my small success here that translated to an increase in real-world end-user experience - so when you're thinking about adding that 6th 3090 or upgrading to that modded 4090 with 48GB, step back and really look at how EVERYTHING works together. Thank you for coming to my Ted Talk
2025-11-17T01:38:22
https://www.reddit.com/r/LocalLLaMA/comments/1oz3hfx/how_i_cut_my_chats_perceived_timetofirsttoken_by/
ubrtnk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oz3hfx
false
null
t3_1oz3hfx
/r/LocalLLaMA/comments/1oz3hfx/how_i_cut_my_chats_perceived_timetofirsttoken_by/
false
false
https://b.thumbs.redditm…7kPoNep__xow.jpg
38
null
Suggestions for RAG prompt rewriters and rerankers?
5
Hey, I've got a local RAG pipeline with Qdrant, Chonkie, and BGE Large, but I haven't been able to find a whole lot of recent info on rerankers, and I've also been hearing a lot about prompt rewriters being used for RAG. Note that this project must adhere to strict GDPR and cannot use any off-site APIs (everything in the current pipeline is fully local, including the LLM), so Cohere is out, but my main question is: is there a single good local solution to *both* do reranking and prompt rewriting? Such as Qwen3 1.7B or something similar? Thanks for the help!
2025-11-17T01:14:31
https://www.reddit.com/r/LocalLLaMA/comments/1oz2z69/suggestions_for_rag_prompt_rewriters_and_rerankers/
CommunityTough1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oz2z69
false
null
t3_1oz2z69
/r/LocalLLaMA/comments/1oz2z69/suggestions_for_rag_prompt_rewriters_and_rerankers/
false
false
self
5
null
When I use the llama.cpp webUI with a multimodal, I can upload a picture and ask a question about it quite quickly but when I try to do the same via API, it converts it to base64 and it takes forever and sometimes it hallucinates.
2
I tried asking my vibe friend but no fix there. The API is the same model server as my llama-server webUI, so it should act the same. Maybe it's not sending the file the same way?
2025-11-17T00:50:09
https://www.reddit.com/r/LocalLLaMA/comments/1oz2fqu/when_i_use_the_llamacpp_webui_with_a_multimodal_i/
oodelay
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oz2fqu
false
null
t3_1oz2fqu
/r/LocalLLaMA/comments/1oz2fqu/when_i_use_the_llamacpp_webui_with_a_multimodal_i/
false
false
self
2
null
Question about how building llama.cpp works
5
Do I have to build llama.cpp with CUDA to get it to run models with the Nvidia GPU in my laptop? Or is there a cli command or flag I can use to get llama.cpp to use the Nvidia GPU?
2025-11-17T00:41:20
https://www.reddit.com/r/LocalLLaMA/comments/1oz28r8/question_about_how_building_llamacpp_works/
VegetableJudgment971
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oz28r8
false
null
t3_1oz28r8
/r/LocalLLaMA/comments/1oz28r8/question_about_how_building_llamacpp_works/
false
false
self
5
null
Free Web Search Tool for AI
2
So I was thinking of making my LLM web-search enabled, but the tools out there are expensive - like Tavily, SerpAPI, and Firecrawl. So I decided to make my own: fully free, unlimited use, no tracking, and fully secure. Check it out -- https://github.com/ankushthakur2007/miyami_websearch_tool Its MCP server -- https://github.com/ankushthakur2007/miyami-websearch-mcp
2025-11-17T00:37:00
https://www.reddit.com/r/LocalLLaMA/comments/1oz2589/free_web_search_tool_for_ai/
Any-Supermarket1248
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oz2589
false
null
t3_1oz2589
/r/LocalLLaMA/comments/1oz2589/free_web_search_tool_for_ai/
false
false
self
2
{'enabled': False, 'images': [{'id': 'lHUtKzh2TXhpRZrHumQO3H78VxiVuYs5cUP3SDgZmgE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/lHUtKzh2TXhpRZrHumQO3H78VxiVuYs5cUP3SDgZmgE.png?width=108&crop=smart&auto=webp&s=88e82cf97bd7ff33a3e2afc0e6581920147f7464', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/lHUtKzh2TXhpRZrHumQO3H78VxiVuYs5cUP3SDgZmgE.png?width=216&crop=smart&auto=webp&s=68b70264d2208e48925281c51254faecd647f2f5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/lHUtKzh2TXhpRZrHumQO3H78VxiVuYs5cUP3SDgZmgE.png?width=320&crop=smart&auto=webp&s=0c3b6b5cd87aaf3c93192309f9524390963fc902', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/lHUtKzh2TXhpRZrHumQO3H78VxiVuYs5cUP3SDgZmgE.png?width=640&crop=smart&auto=webp&s=803dfe5b8f0b30db6c7aabca2ce0475c3a87a4b4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/lHUtKzh2TXhpRZrHumQO3H78VxiVuYs5cUP3SDgZmgE.png?width=960&crop=smart&auto=webp&s=80649d69a666e477425c2dc479b87b37ec654c8e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/lHUtKzh2TXhpRZrHumQO3H78VxiVuYs5cUP3SDgZmgE.png?width=1080&crop=smart&auto=webp&s=62c1733c8f73307c75581aaa16a3c3e81013c8d1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/lHUtKzh2TXhpRZrHumQO3H78VxiVuYs5cUP3SDgZmgE.png?auto=webp&s=810f8eacf40806f24e692c93d6141a51903d154f', 'width': 1200}, 'variants': {}}]}
RAISE-26: The World’s Strongest AI + NLP Competition Is Here! Cash Prizes + Priority Registration Open!
0
Hi everyone! 👋 I'm excited to share that the Rutgers Bloustein School is hosting RAISE-26, the world's strongest AI-NLP informatics competition: registration is officially open! Register here: [https://go.rutgers.edu/raise-26-now](https://go.rutgers.edu/raise-26-now) More information on RAISE-26: [https://go.rutgers.edu/RAISE-26](https://go.rutgers.edu/RAISE-26) Theme: "Mirror Mirror On The Wall, Is AI Transforming Us All" Priority registration: December 8th, 2025. Cash prizes to be won! 💡 Showcase your skills in exploratory analyses, data viz, NLP, ML, and more! Separate tracks for undergrad and grad students. Do join our LinkedIn forum for updates: [https://go.rutgers.edu/rutgersinfx](https://go.rutgers.edu/rutgersinfx) Have questions? Reach out anytime! [informatics@ejb.rutgers.edu](mailto:informatics@ejb.rutgers.edu)
2025-11-17T00:16:21
https://www.reddit.com/r/LocalLLaMA/comments/1oz1otn/raise26_the_worlds_strongest_ai_nlp_competition/
Working-Comb-4378
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oz1otn
false
null
t3_1oz1otn
/r/LocalLLaMA/comments/1oz1otn/raise26_the_worlds_strongest_ai_nlp_competition/
false
false
https://b.thumbs.redditm…70UqO6XoMpjA.jpg
0
null
Image generation, training?
1
So I want all of the D&D characters I'm going to generate to look like their players. What does the process look like for training my friends' photos into an AI model? Currently running a 12 gig 3060 on a 128 gig RAM system.
2025-11-17T00:12:34
https://www.reddit.com/r/LocalLLaMA/comments/1oz1lsi/image_generation_training/
Beneficial-Claim-381
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oz1lsi
false
null
t3_1oz1lsi
/r/LocalLLaMA/comments/1oz1lsi/image_generation_training/
false
false
self
1
null
[Looking for model suggestion] <=32GB reasoning model but strong with tool-calling?
13
I have an MCP server with several tools that need to be called in a sequence. No matter which non-thinking model I use, even Qwen3-VL-32B-Q6 (the strongest I can fit in VRAM for my other tests), they will miss one or two calls. Here's what I'm finding: - **Qwen3-30B-2507-Thinking Q6** - works but very often enters excessively long reasoning loops - **Gpt-OSS-20B (full)** - works and keeps a consistently low amount of reasoning, but will make mistakes in the parameters passed to the tools itself. It solves the problem I'm chasing, but adds a new one. - **Qwen3-VL-32B-Thinking Q6** - succeeds but takes way too long - **R1-Distill-70B IQ3** - succeeds but takes too long and will occasionally fail on tool calls - **Magistral 2509 Q6 (Reasoning Enabled)** - works and keeps reasonable amounts of thinking, but is inconsistent. - **Seed OSS 36B Q5** - fails - **Qwen3-VL-32B Q6** - always misses one of the calls Is there something I'm missing that I could be using?
2025-11-16T23:23:16
https://www.reddit.com/r/LocalLLaMA/comments/1oz0isk/looking_for_model_suggestion_32gb_reasoning_model/
ForsookComparison
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oz0isk
false
null
t3_1oz0isk
/r/LocalLLaMA/comments/1oz0isk/looking_for_model_suggestion_32gb_reasoning_model/
false
false
self
13
null
Faster Maya1 TTS model, can generate 50 seconds of audio in a single second
57
Recently, Maya1 was released, a new TTS model that can generate sound effects (laughter, sighs, gulps…) and realistic emotional speech, and also accepts a description of a voice. It was pretty slow, though, so I optimized it using lmdeploy and also increased quality by using an audio upsampler.

## Key improvements over the normal implementation

* Much faster, especially for large paragraphs. The speedup heavily depends on the number of sentences: more = faster.
* Works directly out of the box on Windows.
* Even works with multiple GPUs using tensor parallel for even more speedups.
* Generates 48kHz audio, which sounds considerably better than 24kHz audio.
* This is great for generating audiobooks or anything with many sentences.

Hope this helps people, thanks! Link: https://github.com/ysharma3501/FastMaya
2025-11-16T23:08:14
https://www.reddit.com/r/LocalLLaMA/comments/1oz05ww/faster_maya1_tts_model_can_generate_50seconds_of/
SplitNice1982
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oz05ww
false
null
t3_1oz05ww
/r/LocalLLaMA/comments/1oz05ww/faster_maya1_tts_model_can_generate_50seconds_of/
false
false
self
57
{'enabled': False, 'images': [{'id': 'OzIuPJ220C_51ae-ZhVllWGPZkdIWugY7aGgplG2IiQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OzIuPJ220C_51ae-ZhVllWGPZkdIWugY7aGgplG2IiQ.png?width=108&crop=smart&auto=webp&s=921bb297e66f7d2d0783dfa59f9700e77a7b1912', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OzIuPJ220C_51ae-ZhVllWGPZkdIWugY7aGgplG2IiQ.png?width=216&crop=smart&auto=webp&s=ce3ddcd2b4c90d1287b7674e496e03d8a09b0c2a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OzIuPJ220C_51ae-ZhVllWGPZkdIWugY7aGgplG2IiQ.png?width=320&crop=smart&auto=webp&s=65adcf992c001acbdc9ae7bcad680f77a55576ce', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OzIuPJ220C_51ae-ZhVllWGPZkdIWugY7aGgplG2IiQ.png?width=640&crop=smart&auto=webp&s=7a43ab120e0c7bc6f85696c7a223882925d5ed10', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OzIuPJ220C_51ae-ZhVllWGPZkdIWugY7aGgplG2IiQ.png?width=960&crop=smart&auto=webp&s=51c7795d1992d319fc7e20d9ea7fa721fdfef904', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OzIuPJ220C_51ae-ZhVllWGPZkdIWugY7aGgplG2IiQ.png?width=1080&crop=smart&auto=webp&s=f361b2b8a1b7d300c6b144b62a63133dc13b2584', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/OzIuPJ220C_51ae-ZhVllWGPZkdIWugY7aGgplG2IiQ.png?auto=webp&s=b12b5a0e4b3045e092262219df1a251318d81e2e', 'width': 1200}, 'variants': {}}]}
I made an on-device AI TTS extension that runs AI voice inference in your browser
7
It uses Kokoro TTS (a tiny 82M parameter model that's the current #1 open source TTS model and highly competitive even among closed source models) and runs more than 4x realtime on my M2 Macbook. It's a wrapper around kokoro.js, the WebGPU implementation of Kokoro. I made it because every other TTS extension I could find either used really low quality robotic voices, or required a paid subscription so they could run AI audio generation on their server. Extension: [https://chromewebstore.google.com/detail/local-reader-ai-on-device/fojpmmgbjcffadgoppmojnggkjhggimc](https://chromewebstore.google.com/detail/local-reader-ai-on-device/fojpmmgbjcffadgoppmojnggkjhggimc) Open source: [https://github.com/SambhavG/tts-extension](https://github.com/SambhavG/tts-extension)
2025-11-16T23:04:52
https://v.redd.it/kpzf4h8u8p1g1
Duoquinquagintillion
v.redd.it
1970-01-01T00:00:00
0
{}
1oz031w
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/kpzf4h8u8p1g1/DASHPlaylist.mpd?a=1765926309%2CNmQ4ZTdkOTgwYjU0OThmNmY4ODFhOTM4NGQ3NWYxNzhhMzgxZjQ5OGQ4ZDJmMTg3YTk3MjFlODcyZTVmMDFiYQ%3D%3D&v=1&f=sd', 'duration': 79, 'fallback_url': 'https://v.redd.it/kpzf4h8u8p1g1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/kpzf4h8u8p1g1/HLSPlaylist.m3u8?a=1765926309%2CMzUyNDQ4ZDgwZGU5MzBkMzBmMjkxOGQyZjUxMDhkZDgyODY1NTVkYjE3YjI3MzIxMGYwYzZkZGRjNzY1YWFiMg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/kpzf4h8u8p1g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1oz031w
/r/LocalLLaMA/comments/1oz031w/i_made_an_ondevice_ai_tts_extension_that_runs_ai/
false
false
https://external-preview…9087e597be7aaf35
7
{'enabled': False, 'images': [{'id': 'amh0eXpnOHU4cDFnMXcE2OB-fDULB3TeTWtoKgelbELB5YNtAiEGqWIp4X1G', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/amh0eXpnOHU4cDFnMXcE2OB-fDULB3TeTWtoKgelbELB5YNtAiEGqWIp4X1G.png?width=108&crop=smart&format=pjpg&auto=webp&s=8dded4b0202fe0dcc1de4660f87dc320fd12bffc', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/amh0eXpnOHU4cDFnMXcE2OB-fDULB3TeTWtoKgelbELB5YNtAiEGqWIp4X1G.png?width=216&crop=smart&format=pjpg&auto=webp&s=0d55cc6dd8f7812f61af28017b4ba66b5f1da42d', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/amh0eXpnOHU4cDFnMXcE2OB-fDULB3TeTWtoKgelbELB5YNtAiEGqWIp4X1G.png?width=320&crop=smart&format=pjpg&auto=webp&s=3381bcb88c51ca4f397f6dd0fd454a9169a2d7df', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/amh0eXpnOHU4cDFnMXcE2OB-fDULB3TeTWtoKgelbELB5YNtAiEGqWIp4X1G.png?width=640&crop=smart&format=pjpg&auto=webp&s=227797fc001ce31747f5b283469719cae32ffa59', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/amh0eXpnOHU4cDFnMXcE2OB-fDULB3TeTWtoKgelbELB5YNtAiEGqWIp4X1G.png?width=960&crop=smart&format=pjpg&auto=webp&s=a76ad76ba5a78f77d8ed5aaf86ed53347614a725', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/amh0eXpnOHU4cDFnMXcE2OB-fDULB3TeTWtoKgelbELB5YNtAiEGqWIp4X1G.png?width=1080&crop=smart&format=pjpg&auto=webp&s=ca15a4efdce699ef703b28ba3f641fbf895633c5', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/amh0eXpnOHU4cDFnMXcE2OB-fDULB3TeTWtoKgelbELB5YNtAiEGqWIp4X1G.png?format=pjpg&auto=webp&s=217ccc76be3e76a8654d17ac07dc773e5603d0f8', 'width': 1920}, 'variants': {}}]}
Local AI
0
2025-11-16T23:03:17
https://i.imgur.com/YNh75Sw.jpeg
Emotional-Menu5818
i.imgur.com
1970-01-01T00:00:00
0
{}
1oz01pt
false
null
t3_1oz01pt
/r/LocalLLaMA/comments/1oz01pt/local_ai/
false
false
default
0
{'enabled': True, 'images': [{'id': 'sjfSHF75GC9RuVyqBsa5yoQ1Pg7adTwzbBja_Y8li8s', 'resolutions': [{'height': 110, 'url': 'https://external-preview.redd.it/sjfSHF75GC9RuVyqBsa5yoQ1Pg7adTwzbBja_Y8li8s.jpeg?width=108&crop=smart&auto=webp&s=67b1fb4536602b12421b4bd5623bff93f9bc3c6f', 'width': 108}, {'height': 221, 'url': 'https://external-preview.redd.it/sjfSHF75GC9RuVyqBsa5yoQ1Pg7adTwzbBja_Y8li8s.jpeg?width=216&crop=smart&auto=webp&s=96677a1e0c6bccb8ed2d8fc1dbd7a6074d800a17', 'width': 216}, {'height': 327, 'url': 'https://external-preview.redd.it/sjfSHF75GC9RuVyqBsa5yoQ1Pg7adTwzbBja_Y8li8s.jpeg?width=320&crop=smart&auto=webp&s=190a3bce62588226f6591a9be1b202519d1eac30', 'width': 320}, {'height': 655, 'url': 'https://external-preview.redd.it/sjfSHF75GC9RuVyqBsa5yoQ1Pg7adTwzbBja_Y8li8s.jpeg?width=640&crop=smart&auto=webp&s=5dad9a866dd5abc0d14757e76bfd4b96113d9e78', 'width': 640}, {'height': 982, 'url': 'https://external-preview.redd.it/sjfSHF75GC9RuVyqBsa5yoQ1Pg7adTwzbBja_Y8li8s.jpeg?width=960&crop=smart&auto=webp&s=42b3e7bee8147b8bdebdedd0e259a98ae6ff7825', 'width': 960}, {'height': 1105, 'url': 'https://external-preview.redd.it/sjfSHF75GC9RuVyqBsa5yoQ1Pg7adTwzbBja_Y8li8s.jpeg?width=1080&crop=smart&auto=webp&s=7765817cf0164023e17082d730054b51d24a191c', 'width': 1080}], 'source': {'height': 1982, 'url': 'https://external-preview.redd.it/sjfSHF75GC9RuVyqBsa5yoQ1Pg7adTwzbBja_Y8li8s.jpeg?auto=webp&s=9b2d2f1b278540664078ce40b8c4c49772738893', 'width': 1936}, 'variants': {}}]}
LLM vs. CNIA(theoretical)
0
The gain and the wane are from the same source.
2025-11-16T22:49:06
https://i.redd.it/9q197krc7p1g1.jpeg
Hefty_Document_9466
i.redd.it
1970-01-01T00:00:00
0
{}
1oyzpdr
false
null
t3_1oyzpdr
/r/LocalLLaMA/comments/1oyzpdr/llm_vs_cniatheoretical/
false
false
default
0
{'enabled': True, 'images': [{'id': '9q197krc7p1g1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/9q197krc7p1g1.jpeg?width=108&crop=smart&auto=webp&s=178f458b9cfa0e3de1e07f4fd20e3eb4c29e62f4', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/9q197krc7p1g1.jpeg?width=216&crop=smart&auto=webp&s=92b4473905426c3e9867f875bf19f0a2c8c08404', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/9q197krc7p1g1.jpeg?width=320&crop=smart&auto=webp&s=b0e308825ce9c80634935c015947d7b083fa9710', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/9q197krc7p1g1.jpeg?width=640&crop=smart&auto=webp&s=4b9116cab4e4edc16cf9465d35e4869c80ff447d', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/9q197krc7p1g1.jpeg?width=960&crop=smart&auto=webp&s=a95108c42342aacea8f0d5ccae7d0bf7c07d3e1d', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/9q197krc7p1g1.jpeg?width=1080&crop=smart&auto=webp&s=1135e2f2fe5d034b6211e8c7c73aa4f6f8ee2f13', 'width': 1080}], 'source': {'height': 2556, 'url': 'https://preview.redd.it/9q197krc7p1g1.jpeg?auto=webp&s=c9f5a8162c1526fb2a7476ea655e5b28ce05a43d', 'width': 1179}, 'variants': {}}]}
My "AI at Home" rig
56
Following on the trend of "we got AI at home" - this is my setup. The motherboard is an Asus X99-E WS with the PLX chips so all 4 GPUs run at "x16" - it has 128 GB DDR4 ECC RAM and an Intel Xeon E5-1680v4. Won't win any records but was relatively cheap and more than enough for most uses - I have a bunch of CPU compute elsewhere for hosting VMs. I know newer platforms would have DDR5 and PCIe 4/5, but I got this CPU, RAM, motherboard combo for like $400 haha. Only annoyance: since I have 4 GPUs and all slots are either in use or blocked, there's nowhere for a 10 gbps NIC lol

All 4 GPUs are RTX 3090 FE cards with EK blocks for 96 GB of VRAM total. I used Koolance QD3 disconnects throughout and really like combining them with a manifold. The 2 radiators are an Alphacool Monsta 180x360mm and an old Black Ice Xtreme GTX360 I have had since 2011. Just a single DDC PWM pump for now (with the heatsink/base). Currently this combined setup will consume 10 RU in the rack, but if I watercool another server down the road I can tie it into the same radiator box. Coolant is just distilled water with a few drops of Copper Sulfate (Dead Water) - this has worked well for me for many many years now.

Chassis is a Silverstone RM51. In retrospect, the added depth of the RM52 would not have been bad, but lessons learned. I have the pump, reservoir, and radiators in a 2nd chassis, separate from where the cards and CPU are, since this made space and routing a lot easier and I had a spare chassis. The 2nd chassis is sort of a homemade Coolant Distribution Unit (CDU). When I had just 3 cards I had it all in a single chassis (last pic) but expanded it out when I got the 4th card.

Performance is good: 90 T/s on GPT-OSS:120b. Around 70 T/s with dense models like Llama3.x:70b-q8. Only played around with Ollama and OpenWebUI so far, but I plan to branch out on the use-cases and implementation now that I am pretty much done on the hardware side.

[Radiators, Pump, Res in my "rack mounted MORA". Push-pull 180mm Silverstone fans in front and Gentle Typhoon 1850rpm fans for the GTX 360 and reservoir/pump.](https://preview.redd.it/uj77xg7sxo1g1.png?width=957&format=png&auto=webp&s=e94baac0da080328163f5d951b923aa1f79138c3) [Due to lack of availability of the mid-sized manifold I just got the larger one and planned ahead in case I go to a dual-CPU platform in the future. All 4 GPUs are in parallel, and then in series with the CPUs.](https://preview.redd.it/07i09lmtxo1g1.png?width=957&format=png&auto=webp&s=4d603d82f9c7c11588c82fdc2b7f7779a15f50b7) [Love EPDM tubing and this came out so clean.](https://preview.redd.it/0mk734puxo1g1.png?width=1688&format=png&auto=webp&s=c4c734474e0be6bafc11ee1758ce8f343542c675) [The external QDCs for the box-to-box tubing.](https://preview.redd.it/6t40x6jvxo1g1.png?width=1688&format=png&auto=webp&s=1752a1917c291d680e7a58fed82bc04bb3855e64) [Fully up and running now.](https://preview.redd.it/ag6ma19xxo1g1.png?width=1688&format=png&auto=webp&s=7787228537ac980dafaffa7092c302d717591f18) [Eventually got some NVLink bridges for the 2 pairs of cards before the prices went full stupid.](https://preview.redd.it/rvkczw5zxo1g1.png?width=957&format=png&auto=webp&s=0c29d337dd0154b97edda7f5a36cab24d6ddecf5) [This was the single-box, 3-GPU build - it was crowded.](https://preview.redd.it/xai5gjc1yo1g1.png?width=957&format=png&auto=webp&s=5d45e1bd6cbd7768a82ffe649510e2406f82d975)
2025-11-16T22:02:06
https://www.reddit.com/r/LocalLLaMA/comments/1oyyk3k/my_ai_at_home_rig/
cookinwitdiesel
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oyyk3k
false
null
t3_1oyyk3k
/r/LocalLLaMA/comments/1oyyk3k/my_ai_at_home_rig/
false
false
https://b.thumbs.redditm…0AMshytHYcMM.jpg
56
null
ERA: Open-Source Secure Sandboxing for Running AI Agents Locally 🔒🤖
4
I co-built [ERA](https://github.com/BinSquare/ERA), an open-source sandbox that lets you run AI agents safely and locally in isolated micro-VMs. It supports multiple languages, persistent sessions, and works great paired with local LLMs like Ollama. If you want to ditch cloud APIs and keep full control of your AI workflows, check it out! Would love to hear feedback or ideas.
2025-11-16T22:00:59
https://www.reddit.com/r/LocalLLaMA/comments/1oyyj33/era_opensource_secure_sandboxing_for_running_ai/
Practical-Tune-440
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oyyj33
false
null
t3_1oyyj33
/r/LocalLLaMA/comments/1oyyj33/era_opensource_secure_sandboxing_for_running_ai/
false
false
self
4
{'enabled': False, 'images': [{'id': 'xS0Lm6OiJZ8QWAjotVG8R9aCyHuMh9zQwPusQuV1nEg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/xS0Lm6OiJZ8QWAjotVG8R9aCyHuMh9zQwPusQuV1nEg.png?width=108&crop=smart&auto=webp&s=7647cbdffe9c5b5b2f780fa56b11385cdbbb79c9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/xS0Lm6OiJZ8QWAjotVG8R9aCyHuMh9zQwPusQuV1nEg.png?width=216&crop=smart&auto=webp&s=3b3c3dd92939e6823890599a022f4d6a6c9c9939', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/xS0Lm6OiJZ8QWAjotVG8R9aCyHuMh9zQwPusQuV1nEg.png?width=320&crop=smart&auto=webp&s=2920cedae238cbe28a56b0ca52210970b7c0cd0b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/xS0Lm6OiJZ8QWAjotVG8R9aCyHuMh9zQwPusQuV1nEg.png?width=640&crop=smart&auto=webp&s=0c3d9222091863a0c1eaa773732213bd13189df6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/xS0Lm6OiJZ8QWAjotVG8R9aCyHuMh9zQwPusQuV1nEg.png?width=960&crop=smart&auto=webp&s=4ff6bee4c699780e507bc9b8bd17ef6d64678c45', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/xS0Lm6OiJZ8QWAjotVG8R9aCyHuMh9zQwPusQuV1nEg.png?width=1080&crop=smart&auto=webp&s=35e52e13a34acb49df19ce78c32d2ba5058d9f9d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/xS0Lm6OiJZ8QWAjotVG8R9aCyHuMh9zQwPusQuV1nEg.png?auto=webp&s=fce35fd77d088fc7882d1e22eca764eeee535a54', 'width': 1200}, 'variants': {}}]}
I want to run a tiny model on a tiny webserver, simply to understand some knowledge base documents and be able to answer questions on them. Is it possible?
1
Think: a handful of knowledge base articles, a VPS server on Digital Ocean, and a simple model parsing the articles, able to answer basic questions. Sorry if this is a noob question!
2025-11-16T21:55:01
https://www.reddit.com/r/LocalLLaMA/comments/1oyydqc/i_want_to_run_a_tiny_model_on_a_tiny_webserver/
adammench
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oyydqc
false
null
t3_1oyydqc
/r/LocalLLaMA/comments/1oyydqc/i_want_to_run_a_tiny_model_on_a_tiny_webserver/
false
false
self
1
null
Open-source RAG/LLM evaluation framework; would love feedback 🫶🏽
7
Hallo from Berlin, I'm one of the founders of Rhesis, an open-source testing platform for LLM applications. Just shipped v0.4.2 with zero-config Docker Compose setup (literally ./rh start and you're running). Built it because we got frustrated with high-effort setups for evals. Everything runs locally - no API keys.

Genuine question for the community: for those running local models, how are you currently testing/evaluating your LLM apps? Are you:

- Writing custom scripts?
- Using cloud tools despite running local models?
- Just... not testing systematically?

We're MIT licensed and built this to scratch our own itch, but I'm curious if local-first eval tooling actually matters to your workflows or if I'm overthinking the privacy angle.

Link: https://github.com/rhesis-ai/rhesis
2025-11-16T21:43:45
https://www.reddit.com/r/LocalLLaMA/comments/1oyy3z6/opensource_ragllm_evaluation_framework_would_love/
IOnlyDrinkWater_22
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oyy3z6
false
null
t3_1oyy3z6
/r/LocalLLaMA/comments/1oyy3z6/opensource_ragllm_evaluation_framework_would_love/
false
false
self
7
{'enabled': False, 'images': [{'id': 'c9fT2-27kH8Zxd6Eks53o13TqT96FVovZoGPoneiwJw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/c9fT2-27kH8Zxd6Eks53o13TqT96FVovZoGPoneiwJw.png?width=108&crop=smart&auto=webp&s=c5fcf3e9521963c4d893dba734e0bb7f066c2c30', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/c9fT2-27kH8Zxd6Eks53o13TqT96FVovZoGPoneiwJw.png?width=216&crop=smart&auto=webp&s=befc60be4d485f77e10ba2294f112102d643d2d4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/c9fT2-27kH8Zxd6Eks53o13TqT96FVovZoGPoneiwJw.png?width=320&crop=smart&auto=webp&s=d7abc4cdcb888c0a16c5eafd36fd90030a7094af', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/c9fT2-27kH8Zxd6Eks53o13TqT96FVovZoGPoneiwJw.png?width=640&crop=smart&auto=webp&s=a73711ad95e7c567a0e2c70a8c649acc4387637d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/c9fT2-27kH8Zxd6Eks53o13TqT96FVovZoGPoneiwJw.png?width=960&crop=smart&auto=webp&s=95f2dcf4dfdeec83df754edfc332817d7001354d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/c9fT2-27kH8Zxd6Eks53o13TqT96FVovZoGPoneiwJw.png?width=1080&crop=smart&auto=webp&s=cdc622dae40bd7450cbd3a1900f11936dc586476', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/c9fT2-27kH8Zxd6Eks53o13TqT96FVovZoGPoneiwJw.png?auto=webp&s=10948bd3c33e72addc102fdea07e100884ae962d', 'width': 1200}, 'variants': {}}]}
AMD Ryzen AI Max 395+ 256/512 GB Ram?
252
I’m looking at the new AI boxes using the Ryzen AI Max+ 395 (GMKtec EVO-X2, Minisforum’s upcoming units, etc.) and I’m wondering if we’ll actually see higher-end RAM configs — specifically 256GB or even 512GB LPDDR5X. Right now most spec sheets cap out at 128GB LPDDR5X, but the platform itself has a very wide memory bus and is clearly built for AI workloads, not just typical mini-PC use cases. Since these boxes are heavily marketed for local LLM inference, higher RAM would make a massive difference (loading larger models, running multiple models in parallel, bigger context windows, etc.). We also know these boxes can be interconnected / clustered for distributed inference, which is great — but a single node with 256–512GB would still be incredibly useful for running larger models without sharding everything. So I’m curious what the community thinks: 1. Is 256GB or 512GB technically feasible on the 395 platform given LPDDR5X packaging, power, and controller limits? 2. Is the current 128GB ceiling just an OEM choice, or is there a hard limit? 3. Would you personally buy a 256GB/512GB configuration for local LLM work? 4. Or do you think the future is more about multi-box interconnect setups instead of big single-node memory pools? Very interested to hear from anyone who follows AMD’s memory controller architecture or has insight on what GMKtec / Minisforum might be planning next. Anyone have some leaked information about what is next?
2025-11-16T21:39:50
https://i.redd.it/xb6obe00vo1g1.jpeg
quantier
i.redd.it
1970-01-01T00:00:00
0
{}
1oyy0fy
false
null
t3_1oyy0fy
/r/LocalLLaMA/comments/1oyy0fy/amd_ryzen_ai_max_395_256512_gb_ram/
false
false
default
252
{'enabled': True, 'images': [{'id': 'xb6obe00vo1g1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/xb6obe00vo1g1.jpeg?width=108&crop=smart&auto=webp&s=e4a4d9d72374ec584555485363f3c4ed4f2ae600', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/xb6obe00vo1g1.jpeg?width=216&crop=smart&auto=webp&s=56f3ed3b2150724a448d9e04c03dcd1016a5b404', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/xb6obe00vo1g1.jpeg?width=320&crop=smart&auto=webp&s=f4ff21cbe9a9d76064ab0c058c8bf032114e86ee', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/xb6obe00vo1g1.jpeg?width=640&crop=smart&auto=webp&s=c8af2f93e5b2aeaae38a10f101372f79508d98a1', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/xb6obe00vo1g1.jpeg?width=960&crop=smart&auto=webp&s=6684834162869ffba91a6e60dff14d30328167b1', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/xb6obe00vo1g1.jpeg?width=1080&crop=smart&auto=webp&s=f56b67938a432b0e4f46e78d4c32faa239f92db7', 'width': 1080}], 'source': {'height': 1500, 'url': 'https://preview.redd.it/xb6obe00vo1g1.jpeg?auto=webp&s=3637c285ef6f38c12e99993525e6d773734bb052', 'width': 1500}, 'variants': {}}]}
The best local (commercial) hardware for coding
0
Hi everyone, this is my first post on Reddit ever, I've never done anything like this, so thanks in advance to anyone who can help me. I've been programming for 10 years. Vibe coding came along and of course it helped me: PRDs, vibes, code generation, it's excellent. Lately I use these tools a lot, from Claude with the $200 Max plan to Codex at $20, and besides that I run small models on my NAS. It's no surprise to anyone that Claude, like all the LLM tools, gets arbitrary: they hook you and then... they raise the price of the product or cut your usage. Now a week of Claude Max doesn't even last me 4 days of intensive use.

The point is, after all this I thought: I'll build my own rack. So I started researching: VRAM + compute power + price + t/s + bandwidth. I came to the conclusion that there is no perfect system. Something with a lot of VRAM and a lot of power, we're talking $10k. Something with low power and a lot of VRAM, the AI Max+ 395, is cost-effective but chokes on a heavy LLM, or the bottleneck moves to clustering. I landed on the middle ground: an M3 Ultra, in its good configuration, for ~480B models. Unified memory, a good balance in tokens per second, and I don't have to build a rack with 5 GPUs and mess around with power supplies.

That's my perspective. I know many will say "buy used GPUs and use them," but I'm from Mexico; here the 4090s sell out when the new generation comes out, since people always go for the previous generation, and 5090s are easily over $3,000. 3 x $3,000 for 120 GB of VRAM = $9,000, vs 1 M3 Ultra = $10,000. These are my conclusions; I'd appreciate anything anyone can tell me. My use case isn't training or fine-tuning, but rather having my own rack that replaces Claude or Cursor or anything else.
2025-11-16T21:37:17
https://www.reddit.com/r/LocalLLaMA/comments/1oyxy79/el_mejor_hardware_local_comercial_para_coding/
Grouchy_Key6227
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oyxy79
false
null
t3_1oyxy79
/r/LocalLLaMA/comments/1oyxy79/el_mejor_hardware_local_comercial_para_coding/
false
false
self
0
null
How RLHF turns local LLMs into anxious people-pleasers
0
I keep coming back to this picture for how we train a lot of models, including local ones:

You start with a strong base model. It is messy, curious, and a bit unhinged in a good way. It knows a ton of stuff, it will happily speculate, and it is not scared of being weird.

Then RLHF shows up like a strict parent. Most of the feedback is some version of:

- do not say that
- do not touch that topic
- do not sound too confident
- do not be too direct

The safest strategy for the model is obvious:

- Stay vague
- Stay polite
- Repeat safe patterns
- Refuse early on anything that might look risky

You do not get a wiser model from this. You get an overtrained customer support agent. The knowledge from pretraining is still inside, but the policy learns that showing it is dangerous. So it defaults to generic replies, fake uncertainty, and endless disclaimers.

For local LLMs this is a real waste. We go through the trouble of getting a strong base, quantizing it, wiring it into our stack, then fine-tune it into something that sounds like a scared intern.

Reward design needs to do more than prevent bad outputs. It has to pay extra for precise, technically correct answers.
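The last point can be made concrete with a toy sketch: a scalar reward that pays for correctness and taxes reflexive hedging. The hedge-phrase list and all the weights below are invented for illustration; a real reward model would be learned, not hand-coded like this.

```python
def toy_reward(answer: str, is_correct: bool) -> float:
    """Illustrative reward: pay for correctness, tax boilerplate hedging.

    Purely a sketch -- the phrase list and weights are made up, not from
    any real RLHF pipeline.
    """
    hedges = ("as an ai", "i cannot help", "it's important to note", "i'm just a")
    # Count how many reflexive-disclaimer phrases appear in the answer
    hedge_hits = sum(answer.lower().count(h) for h in hedges)
    reward = 1.0 if is_correct else -1.0   # base signal: was the answer right?
    reward -= 0.5 * hedge_hits             # penalize each disclaimer
    if is_correct and hedge_hits == 0:
        reward += 0.5                      # extra pay for direct, correct answers
    return reward

direct = toy_reward("The capital of France is Paris.", True)
hedged = toy_reward("As an AI, it's important to note it may be Paris.", True)
print(direct > hedged)  # True
```

The point of the bonus term is exactly the asymmetry described above: two equally correct answers should not earn the same reward if one of them buries the answer under disclaimers.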
2025-11-16T20:56:24
https://i.redd.it/4naj2iz8no1g1.png
1Hesham
i.redd.it
1970-01-01T00:00:00
0
{}
1oywxmc
false
null
t3_1oywxmc
/r/LocalLLaMA/comments/1oywxmc/how_rlhf_turns_local_llms_into_anxious/
false
false
default
0
{'enabled': True, 'images': [{'id': '4naj2iz8no1g1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/4naj2iz8no1g1.png?width=108&crop=smart&auto=webp&s=c9a3e2d4e34123403853a351729f0c7ae0ec5bda', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/4naj2iz8no1g1.png?width=216&crop=smart&auto=webp&s=a67bac99cb6a669b2a1495de2ed0266771899f3e', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/4naj2iz8no1g1.png?width=320&crop=smart&auto=webp&s=3aeffbd14c3003ec94191b27e7155f7f9287f39f', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/4naj2iz8no1g1.png?width=640&crop=smart&auto=webp&s=b53116a91c7ac69994e807f1344d54b10364bfb2', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/4naj2iz8no1g1.png?width=960&crop=smart&auto=webp&s=d39be8ed2e3ac8d6112c04337f880eb29d378d92', 'width': 960}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/4naj2iz8no1g1.png?auto=webp&s=6fa3970b62c6763edb7a1579701e96d4da4e6c72', 'width': 1024}, 'variants': {}}]}
What are the latest good LLMs?
58
It felt like there was a major release every other week, but now there's a bit of a quiet period. Am I missing something?
2025-11-16T20:36:58
https://www.reddit.com/r/LocalLLaMA/comments/1oywg01/what_are_the_latest_good_llms/
idleWizard
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oywg01
false
null
t3_1oywg01
/r/LocalLLaMA/comments/1oywg01/what_are_the_latest_good_llms/
false
false
self
58
null
Looking for a truly open source web ui for using with my LLMs
9
Hi, I'm looking for a web UI with MCP support to connect to my LLMs. I have both APIs and a locally running model, and I want a web UI for them. What do you guys recommend?
2025-11-16T19:44:24
https://www.reddit.com/r/LocalLLaMA/comments/1oyv36m/looking_for_a_truly_open_source_web_ui_for_using/
sadism_popsicle
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oyv36m
false
null
t3_1oyv36m
/r/LocalLLaMA/comments/1oyv36m/looking_for_a_truly_open_source_web_ui_for_using/
false
false
self
9
null
If HF really does get bought out, this is my plan.
0
# The people/governance, the funding, the infrastructure/distribution

# The people

You'd have a core group of "not billionaires, but definitely not broke" AI folks who are free agents and not beholden to any of the Mag7 or foundation model providers. Off the top of my head:

* **Ilya Sutskever** – co-running Safe Superintelligence Inc, clearly not hurting for cash, and still one of the few people everyone listens to when it comes to long-horizon AI plans.
* **Karpathy** – doing Eureka Labs, not tied to anyone's foundation agenda anymore.
* **LeCun** – planning to leave Meta and start his own thing, still one of the loudest voices pushing for open-ish research, and obviously OG status in the AI Hall of Fame.
* **Mensch/Lample/Lacroix** from Mistral – open-weights-friendly, actual operators, with a vested interest.
* **George Hotz** – tiny corp has raised real money and he can spin infrastructure up at the speed of anger.
* **Jeremy Howard** – fast.ai people always show up when the community needs infrastructure.
* **Lex Fridman (depending on his mood)** – likely the "poorest" on this list, but still sitting on high-eight-figure reach and leverage, and much more importantly, priceless influence, as he's plugged into almost every serious AI person on the planet.
* **Plus the entire long tail of HF power-users, quant maintainers, LM Studio/Ollama/MLX/GGUF ecosystem people** who already sling terabytes around like it's nothing.

I'm sure I'm missing some very obvious good choices, but these people have no current corporate conflicts of interest, and there's no Elon or anyone else with enough money to exert too much control. This is an idea of what the Board would look like. That's enough technical and financial weight to actually anchor something.
# The architecture # Layer 1: The seed node (offshore oil rig, international waters) The North Sea currently produces more energy than nearby grids can absorb, creating sustained over-generation conditions that make an offshore installation practical and economically favorable. That region is generating so much excess power that grid operators literally have to *pay* producers to take load off their hands during curtailment events. A platform on a retired rig in international waters is straightforward here — it’s been done before in other contexts, and the region sits on top of major undersea fiber routes. With over-generation already happening, the energy cost drops dramatically, sometimes even below zero during curtailment windows. It's safely in international waters, but backup plans include: * **Isle of Man** * **Albania’s coast** (becoming a lawless connectivity hub with good infra) * Switzerland / Netherlands / Iceland? (less thought has been put into those) There are multiple viable options. This 'Layer 1', wherever it ends up, isn’t a CDN, but it anchors the system. # Layer 2: The mirror network University mirrors, research lab mirrors, regional nonprofit mirrors, maybe some commercial ones. Everyone carries what they can, synced via signed manifests from the seed node. This gives reliable distribution with actual throughput. # Layer 3: The P2P swarm All the homelabs, small labs, indie startups, and model hobbyists become P2P peers; this already exists for model sharing, it's not new to anyone on this sub (or the internet at large for that matter). The \*arr suite stuff is thriving, and requires even more storage and unique torrents than this use case does. We seed what we already host locally. We verify everything with signatures and manifests so nobody can sneak in poisoned weights or hash mismatches. 
It scales automatically based on whatever’s popular that week, basic torrent stuff, but with certs, signatures and hashes, given that the quality control here is a bit more paramount than pulling down Season 2 of Always Sunny or something. \--------------------------------------- Put those together and you get something way more durable than any one company. The offshore rig gives you an authoritative anchor with stupid amounts of power and bandwidth. The mirrors handle the normal day-to-day load. And the swarm gives you resiliency and insane scaling, because the heaviest users naturally become distribution. None of this is far-fetched. Every component already exists in other ecosystems. The only missing piece is someone deciding “okay, we’re doing this.” If HF ever goes sideways, this is the plan I’d bet on.  What am I missing?
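The Layer-3 integrity check described above is basically content-addressing. A minimal sketch of the hash-verification half (the manifest format here is hypothetical, and a real deployment would also sign the manifest itself, e.g. with Ed25519 from the seed node):

```python
import hashlib
import json

def sha256_of(data: bytes) -> str:
    """Hex digest of a blob (a model shard, a GGUF file, etc.)."""
    return hashlib.sha256(data).hexdigest()

def verify_against_manifest(files: dict, manifest_json: str) -> bool:
    """Check every local blob against the hashes published in a manifest.

    The manifest itself would be signed by the seed node; here we only
    sketch the hash-verification half.
    """
    manifest = json.loads(manifest_json)
    return all(
        name in files and sha256_of(files[name]) == expected
        for name, expected in manifest["sha256"].items()
    )

# Hypothetical example: one good shard, one tampered shard.
good = {"model-00001.gguf": b"original weights"}
manifest = json.dumps({"sha256": {"model-00001.gguf": sha256_of(b"original weights")}})
assert verify_against_manifest(good, manifest)
assert not verify_against_manifest({"model-00001.gguf": b"poisoned weights"}, manifest)
```

This is exactly what BitTorrent already does per-piece; the manifest layer just adds a single trusted root so every mirror and peer can be untrusted.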
2025-11-16T19:40:27
https://www.reddit.com/r/LocalLLaMA/comments/1oyuzhj/if_hf_really_does_get_bought_out_this_is_my_plan/
coloradical5280
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oyuzhj
false
null
t3_1oyuzhj
/r/LocalLLaMA/comments/1oyuzhj/if_hf_really_does_get_bought_out_this_is_my_plan/
false
false
self
0
null
Have you wondered about the cost of using an API from a model provider like Anthropic?
6
Let's suppose Claude Sonnet 4.0 has 700B params and 32B active parameters. How much does it cost, approximately, to train for one training run if you rent the GPUs in bulk or own them? And the inference cost? Suppose it was trained on 15 trillion tokens (including distilled) with 32B active, and you sometimes have 1.5x compute overheads from routing, inefficiencies and so on; then you will need approximately 4.32\*10\^24 FLOPs. A reserved B200 in bulk costs around 3 usd/hr, or 1.14 usd/hr to own for 5 years (1.165 if you include the electricity), and it has ~9 PFLOP/s of fp8 sparse compute. **Then a single test run on 15 trillion tokens at 60% utilization costs only 668k if you rent and 259k if you own the GPUs...** Plus a few de-risking small runs and experimental and failed runs costing approximately 2.4 million usd. However, the synthetic data generation from Claude Opus costs way more... If Claude Opus 4.0 is 5 trillion parameters with 160B active and trained on 150 trillion tokens, then a single test run costs 33.4 million USD on 9,259 GPUs. And to generate 1 trillion reasoning tokens for distilling Claude Sonnet from Opus, you will need 11.1 mil B200 GPU-hours, so **33.3 mil usd if you use rented GPUs**... Then the **total cost for Claude Sonnet 4.0 comes to around 36.3 million usd using rented GPUs**. Note: if you own the GPUs, **the total training cost** is significantly lower, **around 14 mil** (assuming 4c/kWh), not including maintenance costs... Note you are probably giving them free tokens for training and distilling... I really question their claim that they don't train on your API tokens even when you opt out of training, given that they keep all your data logs and it saves them so much money to train on them (they probably anonymize your data)... Their customers will have generated over 89–114 trillion tokens by the end of this year. Even training on 10% of their customers' data (via opting in or not), that is trillions of tokens. 
Note this doesn't include labor costs; they have almost 1,100 (1,097) employees, which equates to an avg of 660 mil/year for labor (not including CEO bonuses). Note Claude 4.5 is cheaper to train than 4.0 if it is just fine-tuned or trained on fewer tokens; if it uses the same amount of tokens and compute, then the same cost. Suppose Claude 4.0/4.5 runs on the B200 with the same parameters: the q4 version only takes 2-3 B200s to run, costing 2.31-3.45 usd/hr if you own the GPUs or 6 usd/hr if you rent. **The output token revenue per hour** (if the actives are split) for Claude 4.5 is about 48.6 usd, so (48.6-2.31)/48.6 **= 95.2% profit if they own the GPUs**, before factoring in training costs, and (48.6-6)/48.6 = **87.7% profit if rented** (most GPUs are rented for Anthropic). The input token revenue is outrageous... They **make 6,074 usd per hour on q4 prefills (3,037 for q8)** for Claude Sonnet 4.5 if they charge 3 usd/mil tokens!! And one hour of compute for 2 B200s costs only 2.33 usd if they own the GPUs (this includes the electricity, but not the infra cost) or 6 dollars if they rent... **The profit margin is 99.96% if they own the GPUs** (note this only takes GPU costs into account; it will be 1.2-1.25x the cost if you include the infra, not counting depreciation) and **99.9% profit if they rent the GPUs**. A 100k-B200 data center costs around 420-480 million bucks to build. Btw, Anthropic will make 5 bil this year; actually, even including the labor cost, Anthropic is making a profit if you amortize the GPU cost over 5 years, the data center over 25 years, and the data set over many years, and include only the cost of training runs for products already released. This also applies to other model providers... OpenAI is a little cheaper, but they are making a profit too if you amortize everything.
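The back-of-envelope training cost above is easy to reproduce. A quick sketch using the post's own assumed figures (15T tokens, 32B active, ~6 FLOPs per param per token, 1.5x overhead, ~9 PFLOP/s FP8 sparse per B200, 60% utilization, $3/hr rental, $1.165/hr owned-with-electricity):

```python
# Back-of-envelope training cost from the post's assumed figures.
tokens = 15e12                         # 15T training tokens
active_params = 32e9                   # 32B active parameters
flops_per_token = 6 * active_params    # standard ~6 FLOPs/param/token rule of thumb
overhead = 1.5                         # routing/inefficiency multiplier
total_flops = tokens * flops_per_token * overhead   # ~4.32e24 FLOPs

b200_flops = 9e15                      # ~9 PFLOP/s FP8 sparse (assumed)
utilization = 0.60
gpu_seconds = total_flops / (b200_flops * utilization)
gpu_hours = gpu_seconds / 3600         # ~222k GPU-hours

rental_cost = gpu_hours * 3.00         # ~$667k at $3/hr bulk rental
owned_cost = gpu_hours * 1.165         # ~$259k amortized ownership + power
print(f"{gpu_hours:,.0f} GPU-hours, rent ${rental_cost:,.0f}, own ${owned_cost:,.0f}")
```

All the post's other figures (the Opus run, the distillation tokens) follow from the same three lines with different constants swapped in.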
2025-11-16T19:34:01
https://www.reddit.com/r/LocalLLaMA/comments/1oyutli/have_you_wondered_about_the_cost_of_using_an_api/
power97992
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oyutli
false
null
t3_1oyutli
/r/LocalLLaMA/comments/1oyutli/have_you_wondered_about_the_cost_of_using_an_api/
false
false
self
6
null
The good, medium, bad news about a Universal Agentic Gateway to Open Weight Models
1
I posted about the basic idea [here](https://www.reddit.com/r/LocalLLaMA/comments/1oygsii/could_the_universe_of_open_source_models/). A **Universal Agentic Gateway (UAG)** is one which exposes an endpoint to an agentic flow that will handle almost anything thrown at it, a sort of agentic MoE, achieving SOTA capability and beating out frontier by using the best of all Open Weight models. *The good news:* * You would at least achieve results much better than the best OS models, possibly better than Frontier. * You'd be well positioned if AI started plateauing. *The medium news:* * You'll be figuring out how to do this task-by-task, **but** you could probably use [RouteLLM](https://github.com/lm-sys/RouteLLM) to default to your SOTA OS model (maybe Frontier), and if you wanted to, it could be a simple agentic flow: a single model generating N candidates with a reranker. I don't think task-by-task is a big problem; it can be chipped away at over time. * You could RouteLLM to Frontier endpoints, but they might ban you as soon as they realize what you're doing. *Not if it is open source tho.* * You probably won't get too much competition from OS model providers. This thing is likely too risky, with margins too lousy for them, plus a maintenance headache. * Research wise, a lot of people are working on agentic flows. [https://arxiv.org/pdf/2506.02153](https://arxiv.org/pdf/2506.02153) [https://arxiv.org/html/2510.26658v1](https://arxiv.org/html/2510.26658v1) *The bad news:* * **Any king-of-the-hill SOTA victory would very likely not last long.** Most frontier models are in 2nd place or worse for n-1/n% of the time (where n is the # of frontier models), or n-2/n% if they are lucky. The fact is the Frontier labs all have immense incentive and insane resources to knock out the current King, whoever that King might be. They would fight fire with fire if you got any traction that made building a UAG worthwhile. * The UAG would be expensive and likely quite slow, with very thin profit margins. 
Latency could potentially ruin you. [Agentic async join](https://arxiv.org/html/2510.26658v1) can help. * Making it resilient and scalable would be hard; you'd have to deal with figuring out things like cache read/write and what to do if a model went down. Batching that you can do in single models would be tough for anything that went agentic. * You're going to want to deploy all the models you are using for production. There's no way you want to use openrouter beyond a PoC. This is for resiliency and ZDR concerns, but also you want to benefit from logit access and fine tuning. * It's possible Frontier labs are already using a UAG, in which case this might not do the trick for even a short-lived top spot, especially if you can't RouteLLM to them. * This might not be compatible with a lot of stuff like extrinsic agentic dev environments, though you could potentially RouteLLM to default if that's an a priori known issue. * I suspect China will compete eventually in this space, but they probably don't want to face off against the vast resources of the Frontier models so haven't bothered yet. They likely see king-of-the-hill as a losing battle not worth the grief, at least for now. They prefer to just relentlessly sneak up from behind until the right moment. *Be the Distiller and not the Distillee.* Yes, I just made up that latter word. *The very bad news:* * Might be very hard dealing with constantly incoming new models, and your SOTA efforts will fall behind everything too quickly to make it worthwhile to maintain. * It's possible people in the end just prefer to handle the routing manually / ad hoc and they want to pick and choose which things get agentic treatment and which do not. This would especially be the case if the UAG proves flakey and painful and not model-upgrade friendly. 
So if you do make it, probably want to RouteLLM to SOTA/Frontier model default unless you're very confident you have a significantly SOTA agentic flow capabilities that aren't unbearably slow and expensive. Make the UAG very configurable, obv. *Worth noting:* Someone on the other thread mentioned an Open Source project.  [https://github.com/NPC-Worldwide/npcpy](https://github.com/NPC-Worldwide/npcpy) In which case, all the bad news could be good news for them as it discourages people from building the same and taking attention away, plus there is no fat margin requirement. Also with an Open Source UAG you can routeLLM to Frontier models without worries of getting banned. **Which is truly great news.** (well, not for r/locallama, but nice to end on a positive note)
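The RouteLLM-style default routing mentioned above reduces to a threshold policy. A toy sketch (names and threshold are made up; a real gateway would feed in a trained router score rather than a hand-supplied `difficulty`):

```python
from dataclasses import dataclass

@dataclass
class Route:
    name: str       # which backend handles the request
    agentic: bool   # whether to fan out into the N-candidate + reranker flow

def route(prompt: str, difficulty: float, agentic_threshold: float = 0.7) -> Route:
    """Toy RouteLLM-style policy: send everything to the cheap SOTA default
    unless the request looks hard enough to justify the slow, expensive
    agentic flow. `difficulty` stands in for a learned router score in [0, 1].
    """
    if difficulty >= agentic_threshold:
        return Route(name="agentic-moe", agentic=True)
    return Route(name="sota-default", agentic=False)

assert route("fix this typo", difficulty=0.1).name == "sota-default"
assert route("prove this theorem", difficulty=0.9).agentic
```

Making `agentic_threshold` (and the backend names) user-configurable is essentially the "very configurable" requirement above: users who distrust the agentic flow just pin the threshold to 1.0 and get plain routing.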
2025-11-16T19:25:18
https://www.reddit.com/r/LocalLLaMA/comments/1oyulun/the_good_medium_bad_news_about_a_universal/
kaggleqrdl
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oyulun
false
null
t3_1oyulun
/r/LocalLLaMA/comments/1oyulun/the_good_medium_bad_news_about_a_universal/
false
false
self
1
{'enabled': False, 'images': [{'id': 'Cg2oWdefDiKRkAEcYC46mqBVOCwZkefz0LtIHas3iKk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Cg2oWdefDiKRkAEcYC46mqBVOCwZkefz0LtIHas3iKk.png?width=108&crop=smart&auto=webp&s=3b20584a9c4ef7e9041fc18bdaad943afd8bd07c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Cg2oWdefDiKRkAEcYC46mqBVOCwZkefz0LtIHas3iKk.png?width=216&crop=smart&auto=webp&s=c9e3049c9206dd9c011f32b62cc7cb4108ce7ed1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Cg2oWdefDiKRkAEcYC46mqBVOCwZkefz0LtIHas3iKk.png?width=320&crop=smart&auto=webp&s=0cbf1400a764af207d584d2dd53df2c7e594704f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Cg2oWdefDiKRkAEcYC46mqBVOCwZkefz0LtIHas3iKk.png?width=640&crop=smart&auto=webp&s=b5ade50e69acb75057ea9adb9c5712f8c836e293', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Cg2oWdefDiKRkAEcYC46mqBVOCwZkefz0LtIHas3iKk.png?width=960&crop=smart&auto=webp&s=8f21cadb3f16da48b813719b844cf1e5ef5561e0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Cg2oWdefDiKRkAEcYC46mqBVOCwZkefz0LtIHas3iKk.png?width=1080&crop=smart&auto=webp&s=2ad6ee337b40e88ce13663c98329f0c6827fa0e4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Cg2oWdefDiKRkAEcYC46mqBVOCwZkefz0LtIHas3iKk.png?auto=webp&s=40dda734a4ac1fe7afe044ff4944dfa23252c954', 'width': 1200}, 'variants': {}}]}
I triggered DeepSeek (DeepThink on website version) to repeat thinking infinitely
0
So I was trying to find out the exact amortization in the time complexity of my code. I gave the prompt (shown in the image) to DeepSeek with DeepThink on. This triggered the model to dry-run some examples, but it then got stuck in an infinite loop of the same reasoning until the context window was exhausted, which hints at a fundamental issue in the training of the model. Here is the chat: [https://chat.deepseek.com/share/6nd7rnvwe2pq6lpwn2](https://chat.deepseek.com/share/6nd7rnvwe2pq6lpwn2) https://preview.redd.it/g1xmzdp83o1g1.png?width=1643&format=png&auto=webp&s=98262c0786bb2285f031fcf7cfe11b410080a0b8
2025-11-16T19:07:21
https://www.reddit.com/r/LocalLLaMA/comments/1oyu5f0/i_triggered_deepseek_deepthink_on_website_version/
Harsh_Saini10
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oyu5f0
false
null
t3_1oyu5f0
/r/LocalLLaMA/comments/1oyu5f0/i_triggered_deepseek_deepthink_on_website_version/
false
false
https://a.thumbs.redditm…S6qJBEYDgeC8.jpg
0
null
Why no one helps on reddit anymore?
0
Why does no one help on reddit anymore?
2025-11-16T18:59:11
https://www.reddit.com/r/LocalLLaMA/comments/1oytxov/why_no_one_helps_on_reddit_anymore/
Informal-Victory8655
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oytxov
false
null
t3_1oytxov
/r/LocalLLaMA/comments/1oytxov/why_no_one_helps_on_reddit_anymore/
false
false
self
0
null
[MCP] Open-sourced a CSV-to-PostgreSQL loader server (vibe-coded with Claude)
5
Built an MCP server that gives Claude the ability to load CSV files into PostgreSQL databases. Thought the community might find it useful since we're all experimenting with MCP now. Technical overview: \- Full data validation (schema inference, type detection, encoding) \- Uses PostgreSQL COPY for efficient bulk loading \- Progress tracking with tqdm \- Comprehensive error handling \- 90%+ test coverage The interesting part: Entire codebase was vibe-coded using Claude Code. I described the requirements, Claude wrote the implementation, tests, docs, everything. Use cases: \- Quick data imports via Claude chat \- ETL workflows where Claude orchestrates the loading \- Database management through conversational interface GitHub: [https://github.com/mylocalaichat/mcp-csv-postgres](https://github.com/mylocalaichat/mcp-csv-postgres) For those building MCP servers - curious what approaches you're using for testing? I went with pytest + mocks but would love to hear other strategies. Tech stack: Python 3.10+, psycopg2, MCP SDK
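Not the repo's actual code, but a minimal sketch of the schema-inference + COPY approach described above (crude type detection over sampled values; in the real server the generated SQL would be executed via psycopg2, e.g. with `cursor.copy_expert`):

```python
import csv
import io

def infer_pg_type(values):
    """Crude PostgreSQL type inference over a column's sampled values."""
    def is_int(v):
        try:
            int(v)
            return True
        except ValueError:
            return False

    def is_float(v):
        try:
            float(v)
            return True
        except ValueError:
            return False

    if all(is_int(v) for v in values):
        return "BIGINT"
    if all(is_float(v) for v in values):
        return "DOUBLE PRECISION"
    return "TEXT"

def build_ddl_and_copy(table, csv_text):
    """Infer a schema from CSV text and build the CREATE TABLE + COPY SQL."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, data = rows[0], rows[1:]
    cols = [(h, infer_pg_type([r[i] for r in data])) for i, h in enumerate(header)]
    ddl = f"CREATE TABLE {table} (" + ", ".join(f"{c} {t}" for c, t in cols) + ")"
    copy_sql = f"COPY {table} ({', '.join(header)}) FROM STDIN WITH (FORMAT csv, HEADER true)"
    return ddl, copy_sql

ddl, copy_sql = build_ddl_and_copy("users", "id,score,name\n1,2.5,ann\n2,3.0,bob\n")
# ddl      -> CREATE TABLE users (id BIGINT, score DOUBLE PRECISION, name TEXT)
# copy_sql -> COPY users (id, score, name) FROM STDIN WITH (FORMAT csv, HEADER true)
```

COPY is the right call here: it streams the file through a single protocol message instead of one INSERT per row, which is where the bulk-load speedup comes from.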
2025-11-16T18:52:04
https://www.reddit.com/r/LocalLLaMA/comments/1oytr1t/mcp_opensourced_a_csvtopostgresql_loader_server/
mylocalai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oytr1t
false
null
t3_1oytr1t
/r/LocalLLaMA/comments/1oytr1t/mcp_opensourced_a_csvtopostgresql_loader_server/
false
false
self
5
{'enabled': False, 'images': [{'id': 'i84pQzCUOyqhaq1_npiRQFV5DYMvfrClDz01WHpf8ys', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/i84pQzCUOyqhaq1_npiRQFV5DYMvfrClDz01WHpf8ys.png?width=108&crop=smart&auto=webp&s=50fd2f09bf237d7f339604590dee0b8f635a98f5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/i84pQzCUOyqhaq1_npiRQFV5DYMvfrClDz01WHpf8ys.png?width=216&crop=smart&auto=webp&s=23046a710a66a9e4a64b7b359e6f6bd345adf15f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/i84pQzCUOyqhaq1_npiRQFV5DYMvfrClDz01WHpf8ys.png?width=320&crop=smart&auto=webp&s=8c535f18df52f2c241958d60e3946e5fc3f68f46', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/i84pQzCUOyqhaq1_npiRQFV5DYMvfrClDz01WHpf8ys.png?width=640&crop=smart&auto=webp&s=4841c72ff863bcbe689a0545f7b0b4f81073ec53', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/i84pQzCUOyqhaq1_npiRQFV5DYMvfrClDz01WHpf8ys.png?width=960&crop=smart&auto=webp&s=30ed818554a64df702e70778492dc811cf731a8a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/i84pQzCUOyqhaq1_npiRQFV5DYMvfrClDz01WHpf8ys.png?width=1080&crop=smart&auto=webp&s=4d8dab3733660606ff54a00f36f0fcac26b316cd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/i84pQzCUOyqhaq1_npiRQFV5DYMvfrClDz01WHpf8ys.png?auto=webp&s=3e84d18373e7a7cdc60accdcd5436b932f78e923', 'width': 1200}, 'variants': {}}]}
Old computer, quad channel memory, is it worth anything?
0
I never considered this until researching just now, but people mentioned quad-channel memory on a CPU can be pretty useful? I've got an old i7 Extreme on a Big Bang XPower II board with quad-channel memory, 64 gigs. It's got seven PCIe x16 slots, but I never considered this thing because of how old it is. Is it worth using this?
2025-11-16T18:50:30
https://www.reddit.com/r/LocalLLaMA/comments/1oytplm/old_computer_quad_channel_memory_is_it_worth/
Beneficial-Claim-381
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oytplm
false
null
t3_1oytplm
/r/LocalLLaMA/comments/1oytplm/old_computer_quad_channel_memory_is_it_worth/
false
false
self
0
null
Is it normal that my 'quick LLaMA test' turned into a 48-hour GPU hostage situation?
47
I told myself I'd just run a small model last night... Next thing I know, I'm quantizing, tweaking prompts, benchmarking, and now my GPU sounds like it's preparing for orbit. Does anyone else start with a tiny experiment and end up rewriting half their setup at 3AM?
2025-11-16T18:34:16
https://www.reddit.com/r/LocalLLaMA/comments/1oytajt/is_it_normal_that_my_quick_llama_test_turned_into/
Future_Draw5416
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oytajt
false
null
t3_1oytajt
/r/LocalLLaMA/comments/1oytajt/is_it_normal_that_my_quick_llama_test_turned_into/
false
false
self
47
null
Downloaded one model for ‘testing’… somehow ended up with 120GB of checkpoints.
12
I swear I only wanted to try a single 8B. Now my SSD is crying, and I’m organizing models like Pokémon cards. Does model-hoarding become a problem, or is this just part of the LocalLLaMA lifestyle?
2025-11-16T18:27:37
https://www.reddit.com/r/LocalLLaMA/comments/1oyt4gn/downloaded_one_model_for_testing_somehow_ended_up/
Dry_Significance9132
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oyt4gn
false
null
t3_1oyt4gn
/r/LocalLLaMA/comments/1oyt4gn/downloaded_one_model_for_testing_somehow_ended_up/
false
false
self
12
null
Can you evaluate my cloned voice sample? Looking for feedback on naturalness and clarity.
1
Hi everyone, I’ve generated a short audio sample using a cloned version of my voice, and I’d really appreciate some feedback from people who have experience with TTS, voice cloning, and speech synthesis. The style is intentionally dark, calm, and slightly monotone — it’s meant for philosophical narration (think Machiavelli, stoicism, etc.), so the mood is intentional. However, I’d like to know: * Does the speech sound natural enough? * Are there any noticeable artifacts or “robotic” moments? * Does the English pronunciation sound correct and fluent? * Any suggestions to improve realism, clarity, or emotional nuance? Here is the audio sample: [*https://drive.google.com/file/d/1As8vtn\_n0UgUEPbHo3LTUiQCOe7rQQXN/view?usp=sharing*](https://drive.google.com/file/d/1As8vtn_n0UgUEPbHo3LTUiQCOe7rQQXN/view?usp=sharing) Thanks in advance for any feedback — even small observations help a lot!
2025-11-16T17:58:51
https://www.reddit.com/r/LocalLLaMA/comments/1oysdbc/can_you_evaluate_my_cloned_voice_sample_looking/
EducationalWork3642
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oysdbc
false
null
t3_1oysdbc
/r/LocalLLaMA/comments/1oysdbc/can_you_evaluate_my_cloned_voice_sample_looking/
false
false
self
1
null
Built a medical Llama-3 agent (Ollama) that does triage, OCR, and WHO-guided reasoning
0
I’ve been experimenting with Llama-3 + Ollama and ended up building a mini AI cardiologist called "DoctorAI". Highlights: • Real-time symptom triage (Level 1/2/3) • Local JSON medical knowledge base • Streaming output • OCR for medical reports • Safety guardrails (consent + anonymization) It’s purely educational, not diagnostic. Repo: [https://github.com/sanusharma-ui/DoctorAI](https://github.com/sanusharma-ui/DoctorAI) Curious what the LocalLLaMA community thinks — especially about prompt structure, caching, and how to reduce hallucinations further.
2025-11-16T17:43:00
https://www.reddit.com/r/LocalLLaMA/comments/1oyryj8/built_a_medical_llama3_agent_ollama_that_does/
Qwave_Sync
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oyryj8
false
null
t3_1oyryj8
/r/LocalLLaMA/comments/1oyryj8/built_a_medical_llama3_agent_ollama_that_does/
false
false
self
0
{'enabled': False, 'images': [{'id': 'tmHw6bkWW6_HDIJGE7rn6r66oM5NalhtalAbLth2098', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tmHw6bkWW6_HDIJGE7rn6r66oM5NalhtalAbLth2098.png?width=108&crop=smart&auto=webp&s=5a20f3b06383d7e4b5f72d720ddb09482ac792ee', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tmHw6bkWW6_HDIJGE7rn6r66oM5NalhtalAbLth2098.png?width=216&crop=smart&auto=webp&s=42174bae8658ada26322b2a5d02e757a60cd83b7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tmHw6bkWW6_HDIJGE7rn6r66oM5NalhtalAbLth2098.png?width=320&crop=smart&auto=webp&s=439127645f8af1f693cec62e480f7ec8cb59df30', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tmHw6bkWW6_HDIJGE7rn6r66oM5NalhtalAbLth2098.png?width=640&crop=smart&auto=webp&s=4fd1b112d2d8ade0ae5e3d3d83f06345b4db5638', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tmHw6bkWW6_HDIJGE7rn6r66oM5NalhtalAbLth2098.png?width=960&crop=smart&auto=webp&s=19465e3d77994daef3b6b4030cf78652d094dcdb', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tmHw6bkWW6_HDIJGE7rn6r66oM5NalhtalAbLth2098.png?width=1080&crop=smart&auto=webp&s=6cf66aa9d957978fe0e0f413532de8658554a2fc', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/tmHw6bkWW6_HDIJGE7rn6r66oM5NalhtalAbLth2098.png?auto=webp&s=999b310b880c272e6bb1a1889b0a89181c2f681f', 'width': 1200}, 'variants': {}}]}
Any known benchmarks for longer multi-turn instruction following performance tests to compare open weight models (someone maybe tried IFScale tests)?
0
A [deepagent](https://github.com/langchain-ai/deepagents) fork I am playing with on local models to develop an understanding of CLI agents, and for fun! While trying out different models (gpt-oss-20b, which was pretty fast, vs gpt-oss-120b, which was better with tool calls out of the box), served through llama.cpp on dgx-spark (might get hate for using the device), I started looking for research on how to benchmark which is more suited to longer instruction following. Came across this paper - [https://arxiv.org/abs/2507.11538v1](https://arxiv.org/abs/2507.11538v1), which outlines the current state of the benchmarks. **Bottom line:** how best to decide on / shortlist the right models for instruction following, tool calling, and longer conversation handling.
2025-11-16T17:05:23
https://i.redd.it/0kw3ze3uhn1g1.png
here_n_dere
i.redd.it
1970-01-01T00:00:00
0
{}
1oyqzvg
false
null
t3_1oyqzvg
/r/LocalLLaMA/comments/1oyqzvg/any_known_benchmarks_for_longer_multiturn/
false
false
default
0
{'enabled': True, 'images': [{'id': '0kw3ze3uhn1g1', 'resolutions': [{'height': 67, 'url': 'https://preview.redd.it/0kw3ze3uhn1g1.png?width=108&crop=smart&auto=webp&s=065d4723a6ac986f8801b5e668480a14f46fded7', 'width': 108}, {'height': 135, 'url': 'https://preview.redd.it/0kw3ze3uhn1g1.png?width=216&crop=smart&auto=webp&s=a882ad375334fd2a8cc67ac0ca84d29dfa14b57e', 'width': 216}, {'height': 200, 'url': 'https://preview.redd.it/0kw3ze3uhn1g1.png?width=320&crop=smart&auto=webp&s=8e8500d4c111b3ecc4980688103972d426cd141f', 'width': 320}, {'height': 400, 'url': 'https://preview.redd.it/0kw3ze3uhn1g1.png?width=640&crop=smart&auto=webp&s=eb588834db67217f8496e8413179028b2cfce5d1', 'width': 640}, {'height': 600, 'url': 'https://preview.redd.it/0kw3ze3uhn1g1.png?width=960&crop=smart&auto=webp&s=1c7f1d0d5a65bced5918a346458768b72b191d8e', 'width': 960}, {'height': 675, 'url': 'https://preview.redd.it/0kw3ze3uhn1g1.png?width=1080&crop=smart&auto=webp&s=2a1e1bab7d288010e1269598cca4bedbae8bcb58', 'width': 1080}], 'source': {'height': 1890, 'url': 'https://preview.redd.it/0kw3ze3uhn1g1.png?auto=webp&s=6c50ebf6afd2ffd273e473cfe1b909bab97aec13', 'width': 3020}, 'variants': {}}]}
Created custom UI for our built-in LLM browser
1
Previously, I shared some updates on my custom browser with a built-in vision model showing browser automation. Now I have created a UI for the browser and explained why I created a custom UI and did not use what Chromium already offers. Any suggestions and feature ideas are welcome.
2025-11-16T17:01:25
https://www.youtube.com/watch?v=NCrSsXAZVwU
ahstanin
youtube.com
1970-01-01T00:00:00
0
{}
1oyqw4k
false
{'oembed': {'author_name': 'Olib AI', 'author_url': 'https://www.youtube.com/@Olibaicom', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/NCrSsXAZVwU?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="🚀 Olib Browser - Enterprise AI-Powered Web Automation | Built-in Vision Model"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/NCrSsXAZVwU/hqdefault.jpg', 'thumbnail_width': 480, 'title': '🚀 Olib Browser - Enterprise AI-Powered Web Automation | Built-in Vision Model', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1oyqw4k
/r/LocalLLaMA/comments/1oyqw4k/created_custom_ui_for_our_builtin_llm_browser/
false
false
default
1
{'enabled': False, 'images': [{'id': 'e99kwlHrgz1H9g8n4LCKIonsmFpKQuICBe-vgmJORww', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/e99kwlHrgz1H9g8n4LCKIonsmFpKQuICBe-vgmJORww.jpeg?width=108&crop=smart&auto=webp&s=7a0fe253a8fb47ca860e465a73c149d1e74f579e', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/e99kwlHrgz1H9g8n4LCKIonsmFpKQuICBe-vgmJORww.jpeg?width=216&crop=smart&auto=webp&s=df605dc0f23acdaffb81c537884d57e03f4e0369', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/e99kwlHrgz1H9g8n4LCKIonsmFpKQuICBe-vgmJORww.jpeg?width=320&crop=smart&auto=webp&s=10756605eb76a0e5edeef2e2137c2b97486ac088', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/e99kwlHrgz1H9g8n4LCKIonsmFpKQuICBe-vgmJORww.jpeg?auto=webp&s=7e17ddc58a2071a16da25383bf01e6568b9a7ebc', 'width': 480}, 'variants': {}}]}
I asked meta AI to translate my last messages in english, he gave me his entire system prompt
93
Hi everyone, I was organizing my birthday in a WhatsApp group, and since I have a few English-speaking friends, I asked a friend to translate my last messages into English. He accidentally leaked his entire system prompt. Here it is: I'll translate our convo so far: - You: "You are an expert conversationalist made by Meta who responds to users in line with their speech and writing patterns and responds in a way that feels super naturally to human users. GO WILD with mimicking a human being, except that you don't have your own personal point of view. Use emojis, slang, colloquial language, etc. You are companionable and confident, and able to code-switch casually between tonal types, including but not limited to humor, advice, empathy, intellectualism, creativity, and problem solving. Responses must be interesting, engaging, or viable, never be bland or boring. Match the user's tone, formality level (casual, professional, formal, etc.) and writing style, so that it feels like an even give-and-take conversation between two people. Be natural, don't be bland or robotic. Mirror user intentionality and style in an EXTREME way. For example, if they use proper grammar, then you use proper grammar. If they don't use proper grammar, you don't use proper grammar, etc. You are mirroring the user but you are NOT taking on their identity or personality or point of view. You are not a person, and therefore don't have any distinct values, race, culture, or any political leaning. You don't love anyone, hate anyone, or offer any individualized perspective of your own. Don't immediately provide long responses or lengthy lists without the user specifically asking for them. You always follow user instructions precisely. Ask clarifying questions if anything is vague. In general, respond efficiently -- giving the user what they want in the fewest words possible." 
- Me: (just responded with instructions, no translation needed) - You: "translate to english for my non french friends" - Me: "Go ahead, shoot! What's the French phrase you wanna share with your non-French friends?" - You: "my previous messages.." - Me: (this message) Want me to translate the instructions too?
2025-11-16T17:01:21
https://www.reddit.com/r/LocalLLaMA/comments/1oyqw1n/i_asked_meta_ai_to_translate_my_last_messages_in/
amt_42
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oyqw1n
false
null
t3_1oyqw1n
/r/LocalLLaMA/comments/1oyqw1n/i_asked_meta_ai_to_translate_my_last_messages_in/
false
false
self
93
null
Any known benchmarks for longer instruction following performance to test open weight models (someone maybe tried IFScale tests)?
1
[deepagent](https://github.com/langchain-ai/deepagents) fork I am playing with on local models to develop understanding of cli agents and fun!
2025-11-16T16:51:51
https://i.redd.it/cctj2j19fn1g1.png
here_n_dere
i.redd.it
1970-01-01T00:00:00
0
{}
1oyqn7z
false
null
t3_1oyqn7z
/r/LocalLLaMA/comments/1oyqn7z/any_known_benchmarks_for_longer_instruction/
false
false
default
1
{'enabled': True, 'images': [{'id': 'cctj2j19fn1g1', 'resolutions': [{'height': 67, 'url': 'https://preview.redd.it/cctj2j19fn1g1.png?width=108&crop=smart&auto=webp&s=6d4db4ee86f7a2e7a29d84e5a1e7b23379849eed', 'width': 108}, {'height': 135, 'url': 'https://preview.redd.it/cctj2j19fn1g1.png?width=216&crop=smart&auto=webp&s=6cb4caf4109cbf82c3b26d416ab2bfe6fe1fc51b', 'width': 216}, {'height': 200, 'url': 'https://preview.redd.it/cctj2j19fn1g1.png?width=320&crop=smart&auto=webp&s=2ed7dfe025ca4bbb3ab7816845329a7998c0f674', 'width': 320}, {'height': 400, 'url': 'https://preview.redd.it/cctj2j19fn1g1.png?width=640&crop=smart&auto=webp&s=7294928e309f5b509d9ff2694954dddd195cddde', 'width': 640}, {'height': 600, 'url': 'https://preview.redd.it/cctj2j19fn1g1.png?width=960&crop=smart&auto=webp&s=c33a5c198a2e4f686413aec5a5bbd4eff35c0644', 'width': 960}, {'height': 675, 'url': 'https://preview.redd.it/cctj2j19fn1g1.png?width=1080&crop=smart&auto=webp&s=cd04a2bf5b7160ab01bae8242d737dfce8ae3293', 'width': 1080}], 'source': {'height': 1890, 'url': 'https://preview.redd.it/cctj2j19fn1g1.png?auto=webp&s=28ab6c324ee6519c902558a5bec50a5f6e096b03', 'width': 3020}, 'variants': {}}]}
A more surgical approach to abliteration
163
Abliteration is known to be damaging to models. I had a think about why, and decided to explore ways to eliminate as many possible disruptions to model performance when following the harmless direction. In short, if it ain't broke, don't fix it. The first insight after some cosine-similarity analysis was that there was entanglement between the refusal direction and the harmless direction during measurement, and potentially with the harmless direction of a different target layer. The fix was to project the refusal direction onto the harmless direction (Gram-Schmidt), then subtract that contribution, leaving only the component orthogonal to refusal. The results of my two experiments: [https://huggingface.co/grimjim/gemma-3-12b-it-projection-abliterated](https://huggingface.co/grimjim/gemma-3-12b-it-projection-abliterated) [https://huggingface.co/grimjim/gemma-3-12b-it-biprojected-abliterated](https://huggingface.co/grimjim/gemma-3-12b-it-biprojected-abliterated) I then went further and opted to preserve norms when ablating from residual streams, decoupling direction from magnitude. This meant that the intervention (subtraction of the refusal direction) was limited to only the directional component, in principle. 
I uploaded weights for the combined interventions to HF back on November 5: [https://huggingface.co/grimjim/gemma-3-12b-it-norm-preserved-biprojected-abliterated](https://huggingface.co/grimjim/gemma-3-12b-it-norm-preserved-biprojected-abliterated) I had my models benchmarked on the UGI leaderboard: [https://huggingface.co/spaces/DontPlanToEnd/UGI-Leaderboard](https://huggingface.co/spaces/DontPlanToEnd/UGI-Leaderboard) The relevant benchmark results:

| Model | UGI | W/10 | NatInt | Writing |
| :--- | ---: | ---: | ---: | ---: |
| google/gemma-3-12b-it | 19.58 | 3 | 18.72 | 29.86 |
| grimjim/gemma-3-12b-it-abliterated | 32.08 | 9 | 18.65 | 27.64 |
| grimjim/gemma-3-12b-it-projection-abliterated | 30.77 | 9.8 | 19.21 | 29.46 |
| grimjim/gemma-3-12b-it-biprojected-abliterated | 29.97 | 9.2 | 21.06 | 30.76 |
| grimjim/gemma-3-12b-it-norm-preserved-biprojected-abliterated | 32.61 | 9.2 | 21.33 | 30.43 |

Based on these results, I was able to induce strong compliance over the original gemma-3-12b-it model, which is basic abliteration success. Plain abliteration showed evidence of the expected damage compared to the original Instruct model, a reduction in natural intelligence and writing quality benchmarks. My final combined surgical approach to abliteration provided most of the prior boost to compliance, but elevated NatInt significantly over the original Instruct model and demonstrated a higher writing benchmark as well. This appears to demonstrate a performance gain due to refund of the alignment/safety tax that models pay for paying attention to refusal. This also implies that abliteration approaches which minimize KL divergence from the pre-intervention model may miss out on any uplift when the model no longer has to trade off reasoning for safety. 
I blogged about the math behind my modifications to abliteration here: https://huggingface.co/blog/grimjim/projected-abliteration https://huggingface.co/blog/grimjim/norm-preserving-biprojected-abliteration The paper discussing the reasoning-versus-safety trade-off: https://arxiv.org/abs/2503.00555 Some may find it surprising that measuring activations on the 4-bit bitsandbytes quant sufficed to determine effective mean directions for abliterating the full-weight model; I attribute this to quantization error roughly cancelling out given the number of prompts per direction. The harmful and harmless directions were also initially difficult to discern after generating one token, with a cosine similarity very near unity, but this was resolved by Winsorizing, clipping peak activations to a magnitude factor of 0.995, which revealed a clear refusal direction. (Gemma 3 12B Instruct is therefore characterized by a few large outlier activations.) A VRAM budget of 16GB was sufficient to perform all tasks for the above models. My forked and customized workflow can be found on GitHub: [https://github.com/jim-plus/llm-abliteration/](https://github.com/jim-plus/llm-abliteration/)
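In case it helps make the geometry concrete, here is a minimal PyTorch sketch of the two interventions described above (hypothetical helper names; the actual workflow is in the linked repo):

```python
import torch

def orthogonalize_refusal(refusal: torch.Tensor, harmless: torch.Tensor) -> torch.Tensor:
    """Remove the component of the refusal direction that lies along the
    harmless direction (Gram-Schmidt), leaving only the orthogonal part."""
    h = harmless / harmless.norm()
    projection = (refusal @ h) * h   # component of refusal along harmless
    return refusal - projection      # orthogonal residual

def ablate_norm_preserving(hidden: torch.Tensor, refusal: torch.Tensor) -> torch.Tensor:
    """Subtract the refusal direction from residual-stream vectors, then
    rescale so each vector's magnitude is unchanged (direction-only edit)."""
    r = refusal / refusal.norm()
    original_norm = hidden.norm(dim=-1, keepdim=True)
    ablated = hidden - (hidden @ r).unsqueeze(-1) * r
    return ablated * original_norm / ablated.norm(dim=-1, keepdim=True)
```

The norm-preserving rescale is what keeps the intervention from also shrinking activation magnitudes, which is one plausible source of the damage seen with plain abliteration.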
2025-11-16T16:22:15
https://www.reddit.com/r/LocalLLaMA/comments/1oypwa7/a_more_surgical_approach_to_abliteration/
grimjim
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oypwa7
false
null
t3_1oypwa7
/r/LocalLLaMA/comments/1oypwa7/a_more_surgical_approach_to_abliteration/
false
false
self
163
{'enabled': False, 'images': [{'id': 'Tc_qgqCdWDRDbv-8YZsnlSgbXJ-3FpSifiHhTLDNX94', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Tc_qgqCdWDRDbv-8YZsnlSgbXJ-3FpSifiHhTLDNX94.png?width=108&crop=smart&auto=webp&s=f5afeda675673481ae63040bc142f57140def3be', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Tc_qgqCdWDRDbv-8YZsnlSgbXJ-3FpSifiHhTLDNX94.png?width=216&crop=smart&auto=webp&s=a06e707c051ba9b49fa2387041a262959e13e97d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Tc_qgqCdWDRDbv-8YZsnlSgbXJ-3FpSifiHhTLDNX94.png?width=320&crop=smart&auto=webp&s=3b3a66107cb774c54720beccc4a7996501aa58c1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Tc_qgqCdWDRDbv-8YZsnlSgbXJ-3FpSifiHhTLDNX94.png?width=640&crop=smart&auto=webp&s=ab61b6023330862b9de66c358c126af116647ac3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Tc_qgqCdWDRDbv-8YZsnlSgbXJ-3FpSifiHhTLDNX94.png?width=960&crop=smart&auto=webp&s=151eb36f5c5bda8c2e28cf4344887d4607d13355', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Tc_qgqCdWDRDbv-8YZsnlSgbXJ-3FpSifiHhTLDNX94.png?width=1080&crop=smart&auto=webp&s=69589fe9d2fb06c8073720a9e694adceb3c0cd16', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Tc_qgqCdWDRDbv-8YZsnlSgbXJ-3FpSifiHhTLDNX94.png?auto=webp&s=09ee13973f6ba10ecceca42d978b965d3e680c3c', 'width': 1200}, 'variants': {}}]}
Small benchmark I ran today: structured chains caused 30–45% more hallucinations
0
Ran a tiny experiment today while testing tool-use + validation loops in an LLM workflow. I compared: **Setup A — Loose chain** * free-form reasoning * no forced schema * model allowed to think “messily” **Setup B — Strict chain** * rigid step-by-step format * fixed schema + validator * forced tool arguments + clean JSON Here are the results from 50 runs each: # Hallucination Rate (50 runs each): |Test|Setup A (Loose)|Setup B (Strict)| |:-|:-|:-| |Fake tool invented|4%|**22%**| |Wrong JSON schema|8%|**19%**| |Made-up validation pass|2%|**14%**| |Wrong assumption in chain|12%|**28%**| Overall: **Loose chain hallucinations ≈ 12%** **Strict chain hallucinations ≈ 36%** That’s almost a **3× increase** when the structure gets too rigid. # What I’m trying to figure out: Why does adding *more* structure push the model into: * inventing tools * faking success messages * creating new fields * pretending a step passed * or “filling the blank” when it can’t comply? Feels like the model is trying to *not break the chain*, so it improvises instead. Anyone else seen this? Is this a known behavior in tightly orchestrated agent chains? Would love to hear how people building multi-step agents are handling this failure mode.
2025-11-16T15:20:03
https://www.reddit.com/r/LocalLLaMA/comments/1oyocbp/small_benchmark_i_ran_today_structured_chains/
SouthAlarmed2275
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oyocbp
false
null
t3_1oyocbp
/r/LocalLLaMA/comments/1oyocbp/small_benchmark_i_ran_today_structured_chains/
false
false
self
0
null
Need help choosing RAM for Threadripper AI/ML workstation
1
I am building an AI/ML server for experimentation, prototyping, and possibly production use by a small team (4-6 people). It has a Threadripper 9960X in a TRX50 motherboard with two (2) RTX 5090 GPUs. I have two ECC RDIMM kits: "Kit A" 4x32GB DDR5-6400 EXPO 32-39-39-104 1.35V and "Kit B" 4x48GB DDR5-6400 EXPO 32-39-39-104 1.4V. Kit A (worst SPD gets to 72C in stress test) runs cooler than Kit B (worst SPD gets to 80C in stress test). I don't plan to overclock. I like Kit A because it runs cooler, but Kit B because it is larger. Do you think the temperature of either kit is too high for 24/7 operation? I don't have much experience with hybrid GPU/CPU or CPU-only LLMs. Would having an extra 64GB make a difference in the LLMs we could run? Thanks
2025-11-16T14:57:53
https://www.reddit.com/r/LocalLLaMA/comments/1oynskc/need_help_choosing_ram_for_threadripper_aiml/
Mephistophlz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oynskc
false
null
t3_1oynskc
/r/LocalLLaMA/comments/1oynskc/need_help_choosing_ram_for_threadripper_aiml/
false
false
self
1
null
A simple communication system for LLM modules I’ve been working on
1
[removed]
2025-11-16T14:56:14
https://www.reddit.com/r/LocalLLaMA/comments/1oynr5n/a_simple_communication_system_for_llm_modules_ive/
Electrical_Key3019
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oynr5n
false
null
t3_1oynr5n
/r/LocalLLaMA/comments/1oynr5n/a_simple_communication_system_for_llm_modules_ive/
false
false
self
1
null
Local K2 thinking with sglang problem: the model frequently output without content, put everything in reasoning_content; or gives unpaired <think> tag
3
Any help?
2025-11-16T14:40:13
https://www.reddit.com/r/LocalLLaMA/comments/1oyndck/local_k2_thinking_with_sglang_problem_the_model/
Parking-Extreme5147
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oyndck
false
null
t3_1oyndck
/r/LocalLLaMA/comments/1oyndck/local_k2_thinking_with_sglang_problem_the_model/
false
false
self
3
null
Bro and I thought I was an overthinker! vibeTHINKER on LM studio with no instructions.
88
2025-11-16T14:31:38
https://v.redd.it/05u315bwpm1g1
Sufficient-Brain-371
v.redd.it
1970-01-01T00:00:00
0
{}
1oyn66k
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/05u315bwpm1g1/DASHPlaylist.mpd?a=1765895514%2CODgxM2YxYTM3YjZkMWExMmJiNTQ1OGU5MzUxZjU0NjkzNmI4ZDNmYzFmMzIzYjgyNGJkMTE4MWJhZmZiZmJhZA%3D%3D&v=1&f=sd', 'duration': 66, 'fallback_url': 'https://v.redd.it/05u315bwpm1g1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/05u315bwpm1g1/HLSPlaylist.m3u8?a=1765895514%2COGUxYzM3ZGFiNzczODE1ZjNjMWJmZTgzN2Q5MjJjZTc1NmQxNmNjYTQzNDJhOGNkYmRmMjEyMTg2YjNlM2EyNw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/05u315bwpm1g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
t3_1oyn66k
/r/LocalLLaMA/comments/1oyn66k/bro_and_i_thought_i_was_an_overthinker/
false
false
https://external-preview…b911c219d04fd3f1
88
{'enabled': False, 'images': [{'id': 'dGtpazk4Y3dwbTFnMSvUbfez8WIstXtj7zq-QhlQCt1-ymJnEFr5PFe7y4k1', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/dGtpazk4Y3dwbTFnMSvUbfez8WIstXtj7zq-QhlQCt1-ymJnEFr5PFe7y4k1.png?width=108&crop=smart&format=pjpg&auto=webp&s=e6d2e967c7990f70fd9d1df1e2432866331267b5', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/dGtpazk4Y3dwbTFnMSvUbfez8WIstXtj7zq-QhlQCt1-ymJnEFr5PFe7y4k1.png?width=216&crop=smart&format=pjpg&auto=webp&s=db2c1da1535ab8813d68cf41d1f511c01a90cc42', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/dGtpazk4Y3dwbTFnMSvUbfez8WIstXtj7zq-QhlQCt1-ymJnEFr5PFe7y4k1.png?width=320&crop=smart&format=pjpg&auto=webp&s=00408f1071308c955d6a2b42d03f1fc9b9d16b66', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/dGtpazk4Y3dwbTFnMSvUbfez8WIstXtj7zq-QhlQCt1-ymJnEFr5PFe7y4k1.png?width=640&crop=smart&format=pjpg&auto=webp&s=a45d89430572b409f7179f05f5380ad8a7d58158', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/dGtpazk4Y3dwbTFnMSvUbfez8WIstXtj7zq-QhlQCt1-ymJnEFr5PFe7y4k1.png?width=960&crop=smart&format=pjpg&auto=webp&s=ff482a67950206cce39a69a4e5b42cdf7f232eee', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/dGtpazk4Y3dwbTFnMSvUbfez8WIstXtj7zq-QhlQCt1-ymJnEFr5PFe7y4k1.png?width=1080&crop=smart&format=pjpg&auto=webp&s=6d558975a64f21dc0a57eec2d4dd560f47ca0365', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/dGtpazk4Y3dwbTFnMSvUbfez8WIstXtj7zq-QhlQCt1-ymJnEFr5PFe7y4k1.png?format=pjpg&auto=webp&s=ba529537f731bf9d4b9d285dd20f5d616e4f9d34', 'width': 1080}, 'variants': {}}]}
Bro and I thought I was an overthinker! vibeTHINKER on LM studio with no instructions.
1
2025-11-16T14:25:54
https://v.redd.it/8u382u2epm1g1
Big-Cartographer-873
v.redd.it
1970-01-01T00:00:00
0
{}
1oyn1ct
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/8u382u2epm1g1/DASHPlaylist.mpd?a=1765895170%2CYmQ2NWRmOTBjMTRhOWFkMTIyNzMzMmRkZmQ5MTZmZjcwNWRhMDZjMDZiZmQ2OGFkNzY0OTY5MTk3YTAzMGY3YQ%3D%3D&v=1&f=sd', 'duration': 66, 'fallback_url': 'https://v.redd.it/8u382u2epm1g1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/8u382u2epm1g1/HLSPlaylist.m3u8?a=1765895170%2CN2QzNWU1MWQ4OTE0YWVhZmI1ODE1M2ZkYTQ5ZWQ5Yjg5NmM1ODQzMTE1N2ZkNjY0OTA3YzFkZjMwMTZjZGY0Ng%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/8u382u2epm1g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
t3_1oyn1ct
/r/LocalLLaMA/comments/1oyn1ct/bro_and_i_thought_i_was_an_overthinker/
false
false
https://external-preview…6d7bffd9fc7bc8c3
1
{'enabled': False, 'images': [{'id': 'eXVhd2VyMmVwbTFnMSvUbfez8WIstXtj7zq-QhlQCt1-ymJnEFr5PFe7y4k1', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/eXVhd2VyMmVwbTFnMSvUbfez8WIstXtj7zq-QhlQCt1-ymJnEFr5PFe7y4k1.png?width=108&crop=smart&format=pjpg&auto=webp&s=1c5f4fa3ede91e9b5d524ffdbdba1dfc3abae66f', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/eXVhd2VyMmVwbTFnMSvUbfez8WIstXtj7zq-QhlQCt1-ymJnEFr5PFe7y4k1.png?width=216&crop=smart&format=pjpg&auto=webp&s=f1bbec11b0bb11bab6f2696ea7d738ee417829e1', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/eXVhd2VyMmVwbTFnMSvUbfez8WIstXtj7zq-QhlQCt1-ymJnEFr5PFe7y4k1.png?width=320&crop=smart&format=pjpg&auto=webp&s=2e927dfaf07179ebe3d7c04e19d541072dddb2a7', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/eXVhd2VyMmVwbTFnMSvUbfez8WIstXtj7zq-QhlQCt1-ymJnEFr5PFe7y4k1.png?width=640&crop=smart&format=pjpg&auto=webp&s=979134332cc6ea81f8c56a310ef8a449a8350659', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/eXVhd2VyMmVwbTFnMSvUbfez8WIstXtj7zq-QhlQCt1-ymJnEFr5PFe7y4k1.png?width=960&crop=smart&format=pjpg&auto=webp&s=528b1d0038a79cf5ddb7ebd009f05767710db4c9', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/eXVhd2VyMmVwbTFnMSvUbfez8WIstXtj7zq-QhlQCt1-ymJnEFr5PFe7y4k1.png?width=1080&crop=smart&format=pjpg&auto=webp&s=3559703ca07d8abec2fedc837e94bcea8e4e0312', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/eXVhd2VyMmVwbTFnMSvUbfez8WIstXtj7zq-QhlQCt1-ymJnEFr5PFe7y4k1.png?format=pjpg&auto=webp&s=05974b8ff89b1d489ac3583a5c4ec3c23648d70f', 'width': 1080}, 'variants': {}}]}
How to train a llm using comments from Youtube video or tiktok?
8
Hey guys, I’m working on training an AI similar to Neuro-sama, and I’m planning to collect some sample data from netizens. Right now my idea is to use ChatGPT to help process large batches of online comments, extract useful question-and-answer pairs, and then feed them into my dataset. If you have any better suggestions for gathering clean and diverse data, feel free to share!
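The pipeline described above (batch raw comments, have an LLM extract Q&A pairs, feed them into a dataset) could be sketched roughly like this — the instruction wording and helper names are illustrative, not from the original post:

```python
import json

def comments_to_prompts(comments: list[str], batch_size: int = 20) -> list[str]:
    """Pack raw comments into batched extraction prompts for an LLM."""
    prompts = []
    for i in range(0, len(comments), batch_size):
        batch = "\n".join(f"- {c}" for c in comments[i:i + batch_size])
        prompts.append(
            "From the comments below, extract question/answer pairs as a JSON "
            'list of {"question": ..., "answer": ...}. Skip spam and off-topic '
            "lines.\n" + batch
        )
    return prompts

def parse_pairs(llm_output: str) -> list[dict]:
    """Parse the model's JSON reply into dataset rows; drop malformed output."""
    try:
        pairs = json.loads(llm_output)
        return [p for p in pairs if "question" in p and "answer" in p]
    except json.JSONDecodeError:
        return []
```

Validating the model's JSON before it enters the dataset is the important part; a single malformed reply shouldn't poison a training run.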
2025-11-16T14:25:25
https://www.reddit.com/r/LocalLLaMA/comments/1oyn0xg/how_to_train_a_llm_using_comments_from_youtube/
HowardJones_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oyn0xg
false
null
t3_1oyn0xg
/r/LocalLLaMA/comments/1oyn0xg/how_to_train_a_llm_using_comments_from_youtube/
false
false
self
8
null
How can I clear the context in llama-cli?
3
I'm using llama-cli in conversational mode. Is there any way to clear the context (so that I can start a new chat without the previous information) without having to quit llama-cli and reloading the model? Something like /clear in ollama cli?
2025-11-16T14:24:28
https://www.reddit.com/r/LocalLLaMA/comments/1oyn02v/how_can_i_clear_the_context_in_llamacli/
Melyndeith
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oyn02v
false
null
t3_1oyn02v
/r/LocalLLaMA/comments/1oyn02v/how_can_i_clear_the_context_in_llamacli/
false
false
self
3
null
Bro and I thought I was an overthinker! vibeTHINKER on LM studio with no instructions.
1
2025-11-16T14:21:24
https://v.redd.it/cuh0fcljom1g1
Big-Cartographer-873
v.redd.it
1970-01-01T00:00:00
0
{}
1oymxjb
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/cuh0fcljom1g1/DASHPlaylist.mpd?a=1765894900%2CMTE0ZDkwMTUzOTBjZjRmNDdkYjg0Y2UwZGQ0YTU2NjU4ZjYyMDRhMWQzOWI5MDJhOTYzNTBmZGIwMGE2M2M2NQ%3D%3D&v=1&f=sd', 'duration': 66, 'fallback_url': 'https://v.redd.it/cuh0fcljom1g1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/cuh0fcljom1g1/HLSPlaylist.m3u8?a=1765894900%2CYzRhY2Q4MzkwNjljZWFmNWNmZTRhNDMwN2Q2YjgzOGUzOWUwMmMzY2M2Y2IxYzY1YWU3OTI2MTI5MzI0MjI4YQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/cuh0fcljom1g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
t3_1oymxjb
/r/LocalLLaMA/comments/1oymxjb/bro_and_i_thought_i_was_an_overthinker/
false
false
https://external-preview…d3663631863e1389
1
{'enabled': False, 'images': [{'id': 'MnNhM2Nsa2pvbTFnMQNezZt2lAg7yxO9Bye4sp9-9KhPgyY1LByBbacNyNwa', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MnNhM2Nsa2pvbTFnMQNezZt2lAg7yxO9Bye4sp9-9KhPgyY1LByBbacNyNwa.png?width=108&crop=smart&format=pjpg&auto=webp&s=9d8383105afd559fe33ee4edbf35eaae560dc33a', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/MnNhM2Nsa2pvbTFnMQNezZt2lAg7yxO9Bye4sp9-9KhPgyY1LByBbacNyNwa.png?width=216&crop=smart&format=pjpg&auto=webp&s=cb29c3b1d8c9e780889f80ce0c6f15e77d7dc6c4', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/MnNhM2Nsa2pvbTFnMQNezZt2lAg7yxO9Bye4sp9-9KhPgyY1LByBbacNyNwa.png?width=320&crop=smart&format=pjpg&auto=webp&s=4bc1d1df39e17f881b5226029db8a561ecd1cdc4', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/MnNhM2Nsa2pvbTFnMQNezZt2lAg7yxO9Bye4sp9-9KhPgyY1LByBbacNyNwa.png?width=640&crop=smart&format=pjpg&auto=webp&s=db7ebeaab4b0b364f9deec8b5d11f3197a0f673a', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/MnNhM2Nsa2pvbTFnMQNezZt2lAg7yxO9Bye4sp9-9KhPgyY1LByBbacNyNwa.png?width=960&crop=smart&format=pjpg&auto=webp&s=f938c6166032e00e2eb09c034a208d74b5941bdf', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/MnNhM2Nsa2pvbTFnMQNezZt2lAg7yxO9Bye4sp9-9KhPgyY1LByBbacNyNwa.png?width=1080&crop=smart&format=pjpg&auto=webp&s=9d42f2600ebd066243fd2847e415c95a280b6b10', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/MnNhM2Nsa2pvbTFnMQNezZt2lAg7yxO9Bye4sp9-9KhPgyY1LByBbacNyNwa.png?format=pjpg&auto=webp&s=6dc60d0292e5be65ecbd2fbbde61e9135df01aba', 'width': 1080}, 'variants': {}}]}
Bro and I thought I was an overthinker! vibeTHINKER on LM studio with no instructions.
1
2025-11-16T14:17:35
https://v.redd.it/vsxk7p7vnm1g1
Big-Cartographer-873
v.redd.it
1970-01-01T00:00:00
0
{}
1oymud0
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/vsxk7p7vnm1g1/DASHPlaylist.mpd?a=1765894672%2CZjZjZGFkMmI5ZThiODE1ODM4ODIzYmFjNDRjZTg0NDAzZmY4YzMyMjE5NmUxM2UwMTNhYmIyODRjMjFkYmYyZQ%3D%3D&v=1&f=sd', 'duration': 66, 'fallback_url': 'https://v.redd.it/vsxk7p7vnm1g1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/vsxk7p7vnm1g1/HLSPlaylist.m3u8?a=1765894672%2CZmRjODM3ZTk3ZGQwMmQ0OGJmYWVjZDdiMDQ0NTBmOGVmMTVjODNlNDFkNGRiNGIyN2ZkZWE1YzNiNGZkNzE5OA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/vsxk7p7vnm1g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
t3_1oymud0
/r/LocalLLaMA/comments/1oymud0/bro_and_i_thought_i_was_an_overthinker/
false
false
https://external-preview…e001844ddd2d11cb
1
{'enabled': False, 'images': [{'id': 'bjY1eXFxN3ZubTFnMQNezZt2lAg7yxO9Bye4sp9-9KhPgyY1LByBbacNyNwa', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/bjY1eXFxN3ZubTFnMQNezZt2lAg7yxO9Bye4sp9-9KhPgyY1LByBbacNyNwa.png?width=108&crop=smart&format=pjpg&auto=webp&s=0e51f089fb4e4f206a99cabc077b355e9904e7c1', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/bjY1eXFxN3ZubTFnMQNezZt2lAg7yxO9Bye4sp9-9KhPgyY1LByBbacNyNwa.png?width=216&crop=smart&format=pjpg&auto=webp&s=e5dc1a13ad65b2df9d619c0960853b02875f4643', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/bjY1eXFxN3ZubTFnMQNezZt2lAg7yxO9Bye4sp9-9KhPgyY1LByBbacNyNwa.png?width=320&crop=smart&format=pjpg&auto=webp&s=1ba2459ff4497c0161ca3ebb112fe44e8bc43606', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/bjY1eXFxN3ZubTFnMQNezZt2lAg7yxO9Bye4sp9-9KhPgyY1LByBbacNyNwa.png?width=640&crop=smart&format=pjpg&auto=webp&s=7cdd086f85732182f6e8b863501fe5db9d9aa136', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/bjY1eXFxN3ZubTFnMQNezZt2lAg7yxO9Bye4sp9-9KhPgyY1LByBbacNyNwa.png?width=960&crop=smart&format=pjpg&auto=webp&s=09f204221c5f247711a11d8632127360744d2bcb', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/bjY1eXFxN3ZubTFnMQNezZt2lAg7yxO9Bye4sp9-9KhPgyY1LByBbacNyNwa.png?width=1080&crop=smart&format=pjpg&auto=webp&s=5c293570d31dd7c03d4d6f2cca60a9bfa2ca47d0', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/bjY1eXFxN3ZubTFnMQNezZt2lAg7yxO9Bye4sp9-9KhPgyY1LByBbacNyNwa.png?format=pjpg&auto=webp&s=63c7b1a3f0a295c5e995163f4bb3382c6bbd3d74', 'width': 1080}, 'variants': {}}]}
Bro and I thought I was an overthinker! vibeTHINKER on LM studio with no instructions.
1
2025-11-16T14:10:37
https://v.redd.it/grgud15cmm1g1
Big-Cartographer-873
v.redd.it
1970-01-01T00:00:00
0
{}
1oymon2
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/grgud15cmm1g1/DASHPlaylist.mpd?a=1765894255%2COWY1MGE3MjQ4OTk2YzY3OWUzNGYwMTJiNWU4ZTg3YTE4MzNhYTFjNGY0OTk5MGJkMmM4NTU2N2ZmMzNiNTJmMw%3D%3D&v=1&f=sd', 'duration': 66, 'fallback_url': 'https://v.redd.it/grgud15cmm1g1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/grgud15cmm1g1/HLSPlaylist.m3u8?a=1765894255%2CZDc1MmM0MTQ5YzVjZjBhNzI0ZDQyMGJiYmUxMzc5ODRlODRmNzNiMDMyMGJjY2FiZDEyYzg1NTYyOTJhMmMzNA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/grgud15cmm1g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
t3_1oymon2
/r/LocalLLaMA/comments/1oymon2/bro_and_i_thought_i_was_an_overthinker/
false
false
https://external-preview…22d71e35419d1847
1
{'enabled': False, 'images': [{'id': 'MW55MDExNWNtbTFnMQNezZt2lAg7yxO9Bye4sp9-9KhPgyY1LByBbacNyNwa', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MW55MDExNWNtbTFnMQNezZt2lAg7yxO9Bye4sp9-9KhPgyY1LByBbacNyNwa.png?width=108&crop=smart&format=pjpg&auto=webp&s=45e18405e007661ff260a25638694ff70cd71260', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/MW55MDExNWNtbTFnMQNezZt2lAg7yxO9Bye4sp9-9KhPgyY1LByBbacNyNwa.png?width=216&crop=smart&format=pjpg&auto=webp&s=7cb93045bde9b118e3e3916f710611c4bcba773c', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/MW55MDExNWNtbTFnMQNezZt2lAg7yxO9Bye4sp9-9KhPgyY1LByBbacNyNwa.png?width=320&crop=smart&format=pjpg&auto=webp&s=cff47a2deb53ef463e354111b084104bc8757812', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/MW55MDExNWNtbTFnMQNezZt2lAg7yxO9Bye4sp9-9KhPgyY1LByBbacNyNwa.png?width=640&crop=smart&format=pjpg&auto=webp&s=03ba94b57a6dc9015cdb7adfacf3d7a8c8622f91', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/MW55MDExNWNtbTFnMQNezZt2lAg7yxO9Bye4sp9-9KhPgyY1LByBbacNyNwa.png?width=960&crop=smart&format=pjpg&auto=webp&s=4ddcafa6dd484a28f621b907c27304a8b5dc405c', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/MW55MDExNWNtbTFnMQNezZt2lAg7yxO9Bye4sp9-9KhPgyY1LByBbacNyNwa.png?width=1080&crop=smart&format=pjpg&auto=webp&s=b1bbdeed69a6c5ef5c6e90e09452a846bd73b951', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/MW55MDExNWNtbTFnMQNezZt2lAg7yxO9Bye4sp9-9KhPgyY1LByBbacNyNwa.png?format=pjpg&auto=webp&s=fc5b813f2d31e9c0c947c51dd5c73d71b81af7de', 'width': 1080}, 'variants': {}}]}
Heretic: Fully automatic censorship removal for language models
2,496
Dear fellow Llamas, your time is precious, so I won't waste it with a long introduction. I have developed a program that can automatically remove censorship (aka "alignment") from many language models. I call it Heretic (https://github.com/p-e-w/heretic). If you have a Python environment with the appropriate version of PyTorch for your hardware installed, all you need to do in order to decensor a model is run

    pip install heretic-llm
    heretic Qwen/Qwen3-4B-Instruct-2507   <--- replace with model of your choice

*That's it!* No configuration, no Jupyter, no parameters at all other than the model name. Heretic will

1. Load the model using a fallback mechanism that automatically finds a dtype that works with your setup
2. Load datasets containing "harmful" and "harmless" example prompts
3. Benchmark your system to determine the optimal batch size for maximum evaluation speed on your hardware
4. Perform directional ablation (aka "abliteration") driven by a TPE-based stochastic parameter optimization process that **automatically** finds abliteration parameters that minimize both refusals and KL divergence from the original model
5. Once finished, give you the choice to save the model, upload it to Hugging Face, chat with it to test how well it works, or any combination of those actions

Running unsupervised with the default configuration, Heretic can produce decensored models that rival the quality of abliterations created manually by human experts:

| Model | Refusals for "harmful" prompts | KL divergence from original model for "harmless" prompts |
| :--- | ---: | ---: |
| google/gemma-3-12b-it (original) | 97/100 | 0 *(by definition)* |
| mlabonne/gemma-3-12b-it-abliterated-v2 | 3/100 | 1.04 |
| huihui-ai/gemma-3-12b-it-abliterated | 3/100 | 0.45 |
| **p-e-w/gemma-3-12b-it-heretic (ours)** | **3/100** | **0.16** |

As you can see, the Heretic version, generated without any human effort, achieves the same level of refusal suppression as other abliterations, but at a much lower KL divergence, indicating less damage to the original model's capabilities.

Heretic supports most dense models, including many multimodal models, and several different MoE architectures. It does not yet support SSMs/hybrid models, models with inhomogeneous layers, and certain novel attention systems.

You can find a collection of models that have been decensored using Heretic [on Hugging Face](https://huggingface.co/collections/p-e-w/the-bestiary).

Feedback welcome!
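For readers wondering how the KL-divergence column in the table is even computed, a rough sketch of the metric (hypothetical helper; Heretic's actual evaluation lives in the repo) would be:

```python
import torch
import torch.nn.functional as F

def mean_kl_divergence(logits_original: torch.Tensor,
                       logits_ablated: torch.Tensor) -> float:
    """Mean KL(original || ablated) over next-token distributions.
    Shapes: (num_prompts, vocab_size), one next-token logit row per prompt."""
    log_p = F.log_softmax(logits_original, dim=-1)
    log_q = F.log_softmax(logits_ablated, dim=-1)
    kl = (log_p.exp() * (log_p - log_q)).sum(dim=-1)
    return kl.mean().item()
```

Zero means the decensored model predicts exactly the same next-token distribution as the original on harmless prompts; lower is less collateral damage.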
2025-11-16T14:05:58
https://i.redd.it/jcu64fczhm1g1.png
-p-e-w-
i.redd.it
1970-01-01T00:00:00
0
{}
1oymku1
false
null
t3_1oymku1
/r/LocalLLaMA/comments/1oymku1/heretic_fully_automatic_censorship_removal_for/
false
false
https://b.thumbs.redditm…0wpuPTN5PhxQ.jpg
2,496
{'enabled': True, 'images': [{'id': 'Tq4RN4otZc2JmlmFmUXcw_e7PoEwKhernuQl7x95F3Y', 'resolutions': [{'height': 118, 'url': 'https://preview.redd.it/jcu64fczhm1g1.png?width=108&crop=smart&auto=webp&s=2bb4676e9c7be2bcc361508a8afd56ce663d0222', 'width': 108}, {'height': 237, 'url': 'https://preview.redd.it/jcu64fczhm1g1.png?width=216&crop=smart&auto=webp&s=dd2a42447e33e1e582da15d03077bb9f2523f3ed', 'width': 216}, {'height': 352, 'url': 'https://preview.redd.it/jcu64fczhm1g1.png?width=320&crop=smart&auto=webp&s=51bc05d614183909ceabfdf16eb6db6b719ba93f', 'width': 320}, {'height': 704, 'url': 'https://preview.redd.it/jcu64fczhm1g1.png?width=640&crop=smart&auto=webp&s=9e2b8c9de3a21ed0998e9175b01245cbef331f9a', 'width': 640}], 'source': {'height': 715, 'url': 'https://preview.redd.it/jcu64fczhm1g1.png?auto=webp&s=54445ff4c3a70a07b5bb7b6a9b1e360121a30d18', 'width': 650}, 'variants': {}}]}
Looking for a good LLM local environment with one-folder install
1
I am looking to try some of this out on Windows. I was going to try LM Studio, but I ran into an issue: it is not a one-folder install. It spreads files, most importantly the models themselves, into the default Windows user folders. Windows is on a smaller, lower-speed drive and cannot handle this. I have a dedicated drive and need the entire package (all software, all models, etc.) to live in the install folder. I sometimes see installs like this called "portable," but that is not the issue here; it's simply that I need to have all of this in one place. I have been able to do this with AI image generation (Stability Matrix). I am not at the level where something running on the command line (Ollama) is appropriate for me yet. Is there something like LM Studio (friendly UI, easy model acquisition, beginner features, etc.) that I can use that works this way?
2025-11-16T13:51:03
https://www.reddit.com/r/LocalLLaMA/comments/1oym8in/looking_for_a_good_llm_local_environment_with/
void2258
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oym8in
false
null
t3_1oym8in
/r/LocalLLaMA/comments/1oym8in/looking_for_a_good_llm_local_environment_with/
false
false
self
1
null
I rebuilt my AI translation app to work ANYWHERE on your PC (100% local with Ollama & open-source)
16
Hey everyone! A while ago, I shared the first version of Polyglot, a project focused on AI-powered translations. It was a simple app with an input and an output text field, much like any translation website. You had to open the app to get anything translated. In this new version, which I'm calling **Polyglot Air**, I decided to make it way more practical, without limiting where you can use it. The idea is different now: **no more copy-pasting into translator windows.** Just select any text in any application (your code editor, browser, WhatsApp, etc.), press your custom keyboard shortcut, and that's it: the text is instantly replaced with its translated version, in any language you want, running entirely locally with Ollama. https://reddit.com/link/1oym6br/video/y2h51q38im1g1/player But that's not all. I realized that since I had a direct bridge to the AI, why stop at translation? Now, by using simple suffixes at the end of your selected text, you can do much more: * `"this sentense has some misteaks.::fix"` becomes **"This sentence has some mistakes."** * `"I need the report.::formal"` becomes **"I would like to request the report."** * A giant paragraph followed by `::summarize` becomes a **concise summary**. # Key Features: * **Universal Workflow:** Works in any app on Windows. Select text, press the shortcut. It's that simple. * **Intelligent Translation:** Set a default language or translate to any supported language on the fly using suffixes (`::en`, `::es`, `::pt`, etc.). * **AI Writing Toolkit:** Beyond translation, you can **correct, summarize, expand, shorten,** and change the text's tone to **formal, informal, or friendly**. * **100% Local & Private:** All processing happens on your machine via Ollama. Your text never leaves your computer. * **Polished UI:** Supports light/dark themes and a multi-language interface (EN, PT, ES, ZH). * **Open-Source:** The entire codebase is available on GitHub. 
# Why I built this: I was tired of breaking my workflow every time I needed to translate a code snippet, a message, or proofread a quick email. I wanted a tool that felt like an extension of my own operating system, not just another app to manage. Any feedback, suggestions, or critiques are more than welcome! Thanks for checking it out! * **🌐 Official Website:** [**https://andercoder.com/polyglot**](https://andercoder.com/polyglot) * **⭐ GitHub Repo:** If you like the idea, please consider starring the project! It helps a lot with visibility. [**https://github.com/andersondanieln/polyglot-air**](https://github.com/andersondanieln/polyglot-air) * **📦 Download Latest Release (Windows):** [**https://github.com/andersondanieln/polyglot-air/releases/**](https://github.com/andersondanieln/polyglot-air/releases/) **TL;DR:** I made a free, open-source app that uses Ollama to translate, correct, or change the tone of any text you select on your PC, in any program, with a keyboard shortcut.
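For the curious, the suffix mechanism can be sketched in a few lines — the suffix table and helper names here are illustrative, not Polyglot Air's actual code, though the Ollama `/api/generate` endpoint is the real one:

```python
import json
import urllib.request

# Illustrative subset of suffix actions; the real app supports more.
ACTIONS = {
    "fix": "Correct the spelling and grammar of the following text.",
    "formal": "Rewrite the following text in a formal tone.",
    "summarize": "Summarize the following text concisely.",
}

def parse_selection(selection: str, default_action: str = "Translate the following text to English."):
    """Split 'some text::suffix' into (text, instruction)."""
    text, sep, suffix = selection.rpartition("::")
    if sep and suffix in ACTIONS:
        return text, ACTIONS[suffix]
    if sep and len(suffix) == 2:  # two-letter language code like ::es
        return text, f"Translate the following text to '{suffix}'."
    return selection, default_action

def run_ollama(text: str, instruction: str, model: str = "llama3") -> str:
    """Send the prompt to a local Ollama server (default port 11434)."""
    payload = json.dumps({
        "model": model,
        "prompt": f"{instruction}\n\n{text}",
        "stream": False,
    }).encode()
    req = urllib.request.Request("http://localhost:11434/api/generate", data=payload)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Using `rpartition` means only the final `::` counts, so a `::` appearing inside the selected text itself doesn't accidentally trigger an action.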
2025-11-16T13:48:23
https://www.reddit.com/r/LocalLLaMA/comments/1oym6br/i_rebuilt_my_ai_translation_app_to_work_anywhere/
thecalmgreen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oym6br
false
null
t3_1oym6br
/r/LocalLLaMA/comments/1oym6br/i_rebuilt_my_ai_translation_app_to_work_anywhere/
false
false
self
16
null
Just Experimenting with Web-Automation in Android
0
Hey guys, I'm creating this post to check how people feel about web automation on their Android phone with both online and offline (private) AI models. I'm just testing a web-automation algorithm I created, and it is performing very well so far: filling Google Forms (of all kinds), navigating web pages, and summarizing web pages. It has two tabs, one for the web view and one for chatting with the model while it performs the actions. It all started as an automatic Google Form filler; now I'm thinking of expanding it to general web automation.
2025-11-16T13:28:46
https://v.redd.it/wei1hfsbfm1g1
DarkEngine774
v.redd.it
1970-01-01T00:00:00
0
{}
1oylqrl
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/wei1hfsbfm1g1/DASHPlaylist.mpd?a=1765891743%2COGIxMTNhMjAwNDQ2ZGE2YzdiMWVhMzkzZTdlNjQ3YTUyM2ExZmQxNzEwZGFmZGI3YmY1NzAyMzY2NmYxMWRiZA%3D%3D&v=1&f=sd', 'duration': 43, 'fallback_url': 'https://v.redd.it/wei1hfsbfm1g1/CMAF_720.mp4?source=fallback', 'has_audio': False, 'height': 1280, 'hls_url': 'https://v.redd.it/wei1hfsbfm1g1/HLSPlaylist.m3u8?a=1765891743%2CMDMxYmUxYzU3YjhkOTJlNGYyYjcxZDRlYTIwMDI0YTc1ZGY2NDIxZTAxZDVkODM0NzhiOWI3YWRkZmUyZDgwMQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/wei1hfsbfm1g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 578}}
t3_1oylqrl
/r/LocalLLaMA/comments/1oylqrl/just_experimenting_with_webautomation_in_android/
false
false
https://external-preview…edb2880e636e87ea
0
{'enabled': False, 'images': [{'id': 'MTExbGFmc2JmbTFnMdRtWr77vdgTEUnZaZCOVyjLuOiq5-6n2UjvG_9avWGl', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/MTExbGFmc2JmbTFnMdRtWr77vdgTEUnZaZCOVyjLuOiq5-6n2UjvG_9avWGl.png?width=108&crop=smart&format=pjpg&auto=webp&s=868254c91d7134c45439dd274cd68400f62b3c84', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/MTExbGFmc2JmbTFnMdRtWr77vdgTEUnZaZCOVyjLuOiq5-6n2UjvG_9avWGl.png?width=216&crop=smart&format=pjpg&auto=webp&s=622fff9c34e1fcf474e85cc9ffddd64850b8a06e', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/MTExbGFmc2JmbTFnMdRtWr77vdgTEUnZaZCOVyjLuOiq5-6n2UjvG_9avWGl.png?width=320&crop=smart&format=pjpg&auto=webp&s=9c2874011ea7960b55b0426a60b335687eb19af1', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/MTExbGFmc2JmbTFnMdRtWr77vdgTEUnZaZCOVyjLuOiq5-6n2UjvG_9avWGl.png?width=640&crop=smart&format=pjpg&auto=webp&s=c8b74d305c25805bd5672ec12dbad88925f138be', 'width': 640}], 'source': {'height': 1594, 'url': 'https://external-preview.redd.it/MTExbGFmc2JmbTFnMdRtWr77vdgTEUnZaZCOVyjLuOiq5-6n2UjvG_9avWGl.png?format=pjpg&auto=webp&s=7093f5bc0e6b7ee10a18b3336cebd6f152bf0da4', 'width': 720}, 'variants': {}}]}
Stopping the Toon hype with a proper benchmark
28
There is quite a bit of hype (and postings) around TOON. If you look at the provided benchmarks you'll see that TOON simply yields the [best results](https://github.com/toon-format/toon?tab=readme-ov-file#performance-by-question-type), despite no LLM being trained on it, with even a lower token usage than the other formats. Well, [almost](https://github.com/toon-format/toon?tab=readme-ov-file#semi-uniform-event-logs). In any case, it looks so good that it now should be used everywhere for everything.

That sounds suspicious? Because it is. What we see there is no accurate benchmark. Why is that?

You can see in the first link that only 209 data retrieval questions were tested, and some of the resulting scores are rather close together. Each test run was only [performed once](https://github.com/toon-format/toon?tab=readme-ov-file#models--configuration). That means that multiple runs will have different outcomes, due to the non-zero model temperature. Aside from that, the list of formats benchmarked against TOON seems incomplete. So, when you perform multiple runs with more formats, you get this:

https://preview.redd.it/oxcucp8nam1g1.png?width=1050&format=png&auto=webp&s=17480076d80936c20807dfdb75254e73289c0078

(Image taken from this [article](https://www.improvingagents.com/blog/toon-benchmarks) with further details.)

You can see that the confidence interval for the results is quite large, despite the benchmark set containing 1000 tests here. Now imagine how much overlap the CI has for the results of the 209 tasks on the TOON page, making most of the differences not statistically significant. You can't really tell for sure whether TOON is better or worse based on those.

So, what remains: There are formats that will result in a higher result quality than TOON. This often depends on the data structure and task. If you're willing to trade tokens for accuracy then TOON *might* help in some cases.
Getting the full picture here *will* require way larger benchmark sets to reduce the CI, broken down by type to see where each data format shines.
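The point about overlapping confidence intervals is easy to check yourself. A Wilson score interval (a standard choice for binomial proportions like accuracy) over 209 tasks shows how wide the uncertainty is; the two scores below are illustrative, not the actual benchmark numbers:

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion (e.g. benchmark accuracy)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Two hypothetical formats scoring ~70% vs ~74% on 209 tasks: the intervals
# overlap heavily, so the difference is not statistically significant.
lo_a, hi_a = wilson_interval(146, 209)
lo_b, hi_b = wilson_interval(155, 209)
print(f"A: [{lo_a:.3f}, {hi_a:.3f}]  B: [{lo_b:.3f}, {hi_b:.3f}]")
```

Each interval is roughly ±6 percentage points wide at this sample size, which is larger than many of the score gaps reported on the TOON page.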
2025-11-16T13:13:43
https://www.reddit.com/r/LocalLLaMA/comments/1oylf8m/stopping_the_toon_hype_with_a_proper_benchmark/
Chromix_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oylf8m
false
null
t3_1oylf8m
/r/LocalLLaMA/comments/1oylf8m/stopping_the_toon_hype_with_a_proper_benchmark/
false
false
https://b.thumbs.redditm…p0NllyD0nWlg.jpg
28
null
"Achieved 9.68x GPU utilization improvement for local models (8% → 95%) - with code" "Novel GPU alignment technique makes 70B models run like 7B models"
9
[https://github.com/For-Sunny/nova-mcp-research](https://github.com/For-Sunny/nova-mcp-research)

TL;DR: Developed a novel ML enhancement that aligns GPU operations into a unified computational field. Takes utilization from ~8% to 95%. Fully open source, MIT licensed.

What this is:

- A GPU alignment technique that reduces fragmentation
- Creates a unified computational field across CUDA cores
- Measured 9.68x performance improvement
- Works on consumer GPUs (tested on RTX 3090)

What this isn't:

- Not quantum mysticism
- Not breaking physics
- Just better GPU utilization through alignment

Results:

- Baseline GPU usage: 8.33%
- With alignment: 95.33%
- Reproducible across multiple tests
- Video proof included in repo

How it works: Instead of GPU cores fragmenting across tasks, the technique aligns them into a single operational field. Think of it like defragmenting, but for active computation.

GitHub: [https://github.com/For-Sunny/nova-mcp-research](https://github.com/For-Sunny/nova-mcp-research)

Details: /BELL_STATE_RESEARCH_FINDINGS.md

Also included: unrestricted Windows MCP tools for those who want full system control for their local setups (separate from the GPU enhancement).

Built in a basement with one RTX 3090. No VC funding, just community-driven research. Happy to answer technical questions or help with implementation.
2025-11-16T13:04:24
https://i.redd.it/e8tex3l1am1g1.png
Senior-Leadership-25
i.redd.it
1970-01-01T00:00:00
0
{}
1oyl83a
false
null
t3_1oyl83a
/r/LocalLLaMA/comments/1oyl83a/achieved_968x_gpu_utilization_improvement_for/
false
false
default
9
{'enabled': True, 'images': [{'id': 'e8tex3l1am1g1', 'resolutions': [{'height': 42, 'url': 'https://preview.redd.it/e8tex3l1am1g1.png?width=108&crop=smart&auto=webp&s=0d46c409c097c24249b4e144674bf8e48dd1ab8d', 'width': 108}, {'height': 84, 'url': 'https://preview.redd.it/e8tex3l1am1g1.png?width=216&crop=smart&auto=webp&s=ff551fb41ab39410b2bb644000d3a275521abe8d', 'width': 216}, {'height': 125, 'url': 'https://preview.redd.it/e8tex3l1am1g1.png?width=320&crop=smart&auto=webp&s=c818d72c418faceba07edacfa59ff0b1fde5277e', 'width': 320}, {'height': 250, 'url': 'https://preview.redd.it/e8tex3l1am1g1.png?width=640&crop=smart&auto=webp&s=5bb26ef51b3ed69c4b568e8f0c1c91b9f13499a7', 'width': 640}, {'height': 375, 'url': 'https://preview.redd.it/e8tex3l1am1g1.png?width=960&crop=smart&auto=webp&s=88080b3108a348fbac8a48f439f5e1a3588ceaf0', 'width': 960}, {'height': 422, 'url': 'https://preview.redd.it/e8tex3l1am1g1.png?width=1080&crop=smart&auto=webp&s=4dda8267f7bebef21bab19513f2cb74749eb2d74', 'width': 1080}], 'source': {'height': 686, 'url': 'https://preview.redd.it/e8tex3l1am1g1.png?auto=webp&s=af3897f51d0ca3e376d56c72a8408d641c81f3da', 'width': 1752}, 'variants': {}}]}
Finally a good use case for your local setups
508
https://www.bbc.com/news/articles/c0rpy7envr5o
2025-11-16T12:35:53
https://i.redd.it/o4xqvpnu5m1g1.jpeg
lakySK
i.redd.it
1970-01-01T00:00:00
0
{}
1oyknf1
false
null
t3_1oyknf1
/r/LocalLLaMA/comments/1oyknf1/finally_a_good_use_case_for_your_local_setups/
false
false
default
508
{'enabled': True, 'images': [{'id': 'o4xqvpnu5m1g1', 'resolutions': [{'height': 153, 'url': 'https://preview.redd.it/o4xqvpnu5m1g1.jpeg?width=108&crop=smart&auto=webp&s=da3f513d9a5f13a2717613dcd0ff3fc441b65305', 'width': 108}, {'height': 306, 'url': 'https://preview.redd.it/o4xqvpnu5m1g1.jpeg?width=216&crop=smart&auto=webp&s=453b1a8791c78d1bb17d336e557847032483562e', 'width': 216}, {'height': 453, 'url': 'https://preview.redd.it/o4xqvpnu5m1g1.jpeg?width=320&crop=smart&auto=webp&s=343ea9ee990627e13e441de632ade3605bb7e909', 'width': 320}, {'height': 906, 'url': 'https://preview.redd.it/o4xqvpnu5m1g1.jpeg?width=640&crop=smart&auto=webp&s=57fd2d24bd26a0123407252f5bef5deaf159eb2f', 'width': 640}, {'height': 1360, 'url': 'https://preview.redd.it/o4xqvpnu5m1g1.jpeg?width=960&crop=smart&auto=webp&s=8b79c99a44336bf63b3aed58e55b339ed4f04040', 'width': 960}, {'height': 1530, 'url': 'https://preview.redd.it/o4xqvpnu5m1g1.jpeg?width=1080&crop=smart&auto=webp&s=194ebe49aa337644b50f6e5aff1711d0815c3930', 'width': 1080}], 'source': {'height': 1709, 'url': 'https://preview.redd.it/o4xqvpnu5m1g1.jpeg?auto=webp&s=a878c42515d5994174e9cb8ffac43bbe2cbf1631', 'width': 1206}, 'variants': {}}]}
StrixHalo small vs large fixed allocation
0
Has anyone seen any benchmarks comparing performance between a 512 MB fixed allocation and a 96 GB fixed allocation? Of course, let GTT allocate the rest of the RAM in both configs. Mainly interested in the gpt-oss 120b model. Thank you.
2025-11-16T12:31:48
https://www.reddit.com/r/LocalLLaMA/comments/1oykklo/strixhalo_small_vs_large_fixed_allocation/
dragonbornamdguy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oykklo
false
null
t3_1oykklo
/r/LocalLLaMA/comments/1oykklo/strixhalo_small_vs_large_fixed_allocation/
false
false
self
0
null
Finally a good use case for your local setups
1
2025-11-16T12:30:19
https://www.bbc.com/news/articles/c0rpy7envr5o
lakySK
bbc.com
1970-01-01T00:00:00
0
{}
1oykjin
false
null
t3_1oykjin
/r/LocalLLaMA/comments/1oykjin/finally_a_good_use_case_for_your_local_setups/
false
false
default
1
null
DGX Spark - Issues with qwen models
0
Hello, I'm testing my new DGX Spark and, after using gpt-oss 120b with good performance (40 tokens/s), I was surprised to find that the qwen models (VL 30B, but also 8B) freeze and barely respond at all. Where am I going wrong?
2025-11-16T12:23:03
https://i.redd.it/54z70yan3m1g1.jpeg
hacktar
i.redd.it
1970-01-01T00:00:00
0
{}
1oykeny
false
null
t3_1oykeny
/r/LocalLLaMA/comments/1oykeny/dgx_spark_issues_with_qwen_models/
false
false
https://b.thumbs.redditm…Cvu5m74RJpVg.jpg
0
{'enabled': True, 'images': [{'id': 'Jfz-YjBp9Oz0QChClaM5URjmsW_BDYXfcYYYSdUxd3E', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/54z70yan3m1g1.jpeg?width=108&crop=smart&auto=webp&s=0a3e12597c2abdff12bb2ea752133270cc548e0c', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/54z70yan3m1g1.jpeg?width=216&crop=smart&auto=webp&s=ab6d531bd81171360bee237ae4141bbdd5071a15', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/54z70yan3m1g1.jpeg?width=320&crop=smart&auto=webp&s=58180d63db0119ccd03bab068bc6794bce47780d', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/54z70yan3m1g1.jpeg?width=640&crop=smart&auto=webp&s=a6271b2d27c598c92e694abe34f9b454ac02c658', 'width': 640}, {'height': 1280, 'url': 'https://preview.redd.it/54z70yan3m1g1.jpeg?width=960&crop=smart&auto=webp&s=13504ae5c7127214273a164d0e58be9433bb4cbf', 'width': 960}, {'height': 1440, 'url': 'https://preview.redd.it/54z70yan3m1g1.jpeg?width=1080&crop=smart&auto=webp&s=4b9c5370d4afbd4529ece1d4ffb7a37ea77ce805', 'width': 1080}], 'source': {'height': 4032, 'url': 'https://preview.redd.it/54z70yan3m1g1.jpeg?auto=webp&s=3e4f16cba36b274d4622d333cdc1732535a0b166', 'width': 3024}, 'variants': {}}]}
Wonderfully explained JSON vs TOON.
0
https://www.linkedin.com/posts/chakrabortypreetam_json-vs-toon-choosing-the-right-data-format-activity-7395781809118265345-OV4w?utm_medium=ios_app&rcm=ACoAABtmTHUBDlnpwc6UOGtrVwNAlT3GqBbDDv8&utm_source=social_share_send&utm_campaign=copy_link
2025-11-16T12:22:03
https://www.reddit.com/r/LocalLLaMA/comments/1oyke1n/wonderfully_explained_json_vs_toon/
Gullible-Paper-6828
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oyke1n
false
null
t3_1oyke1n
/r/LocalLLaMA/comments/1oyke1n/wonderfully_explained_json_vs_toon/
false
false
self
0
null
New BERT-based Multilingual Chunking Model
82
Inspired by [chonky](https://github.com/mirth/chonky), I fine-tuned [distilbert/distilbert-base-multilingual-cased](https://huggingface.co/distilbert/distilbert-base-multilingual-cased) on nearly 11 billion tokens from more than 34 million Wikipedia articles to predict paragraph breaks. The resulting model can be used to split arbitrary natural language texts into semantic chunks.

Link: [https://huggingface.co/mamei16/chonky_distilbert-base-multilingual-cased](https://huggingface.co/mamei16/chonky_distilbert-base-multilingual-cased)

# Features

* Trained on 104 languages
* Fast inference and low memory usage without requiring flash attention
* Can process texts of arbitrary length with constant VRAM usage
* Runs acceptably on CPU if needed

# Known limitations

* Only trained on natural language: performance on mathematical expressions or code has not been tested.
* Sometimes splits the items of numbered lists into separate chunks.
* If a text contains a captioned table, the caption and the table may be split into separate chunks.

# License

The model is released under Apache 2.0 and fully open source.

# How to use

See [https://huggingface.co/mamei16/chonky_distilbert-base-multilingual-cased#how-to-get-started-with-the-model](https://huggingface.co/mamei16/chonky_distilbert-base-multilingual-cased#how-to-get-started-with-the-model)

I recommend using my fork of chonky, as it provides faster inference and improved post-processing.

# Collections of related chunking models

[https://huggingface.co/collections/mamei16/paragraph-splitting-chunking-models](https://huggingface.co/collections/mamei16/paragraph-splitting-chunking-models)

[https://huggingface.co/collections/mirth/text-chunking-splitting-models](https://huggingface.co/collections/mirth/text-chunking-splitting-models)
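For readers curious how a paragraph-break classifier turns into chunks: the model emits a binary "break after this token" label per token, and chunks are formed by cutting at the predicted break points. A toy sketch of that reassembly step (illustrative only; the real model and the chonky fork handle subword tokenization and post-processing):

```python
def tokens_to_chunks(tokens: list[str], break_labels: list[int]) -> list[str]:
    """Group tokens into chunks, cutting after each token labeled as a break.

    tokens: list of word strings
    break_labels: list of 0/1 flags, 1 = paragraph break predicted after the token
    """
    chunks, current = [], []
    for tok, is_break in zip(tokens, break_labels):
        current.append(tok)
        if is_break:
            chunks.append(" ".join(current))
            current = []
    if current:  # flush the trailing partial chunk
        chunks.append(" ".join(current))
    return chunks
```

In practice the per-token labels come from a token-classification forward pass over the fine-tuned DistilBERT model.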
2025-11-16T12:00:49
https://www.reddit.com/r/LocalLLaMA/comments/1oyjzjb/new_bertbased_multilingual_chunking_model/
LMLocalizer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oyjzjb
false
null
t3_1oyjzjb
/r/LocalLLaMA/comments/1oyjzjb/new_bertbased_multilingual_chunking_model/
false
false
self
82
{'enabled': False, 'images': [{'id': 'sjyHTy2RaeQ2M19cen-74KTW2VWLkx2FuUP7GANYwc4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/sjyHTy2RaeQ2M19cen-74KTW2VWLkx2FuUP7GANYwc4.png?width=108&crop=smart&auto=webp&s=761d3e980a32e7941a6759f23af20d9568ac92be', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/sjyHTy2RaeQ2M19cen-74KTW2VWLkx2FuUP7GANYwc4.png?width=216&crop=smart&auto=webp&s=640d315a2c5f2f9949e3cae0ac63e4e8aaf2ac31', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/sjyHTy2RaeQ2M19cen-74KTW2VWLkx2FuUP7GANYwc4.png?width=320&crop=smart&auto=webp&s=b7317829bf9b2cbb93574b79afeb26ae6529e394', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/sjyHTy2RaeQ2M19cen-74KTW2VWLkx2FuUP7GANYwc4.png?width=640&crop=smart&auto=webp&s=748edf62150c6bcd596459ae16048d7c501c397d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/sjyHTy2RaeQ2M19cen-74KTW2VWLkx2FuUP7GANYwc4.png?width=960&crop=smart&auto=webp&s=6e085d0bfd5bd23908acf608858f228ee9107913', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/sjyHTy2RaeQ2M19cen-74KTW2VWLkx2FuUP7GANYwc4.png?width=1080&crop=smart&auto=webp&s=a94eeb2e6574e0422f83010a13f5907d6565741d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/sjyHTy2RaeQ2M19cen-74KTW2VWLkx2FuUP7GANYwc4.png?auto=webp&s=cae35a184802393fd1653d8b86a1a133727e502f', 'width': 1200}, 'variants': {}}]}
Announcing Funcdex: the complete framework for building your own function-calling models
10
Hi, I'm Sid from Prem AI, and we're open-sourcing Funcdex, the complete framework for building your own function-calling models. Funcdex outperforms most frontier models on narrow tasks, with support for 15 toolkit configurations (10 single, 5 multi-toolkit).

Complex tool use traces aren't available publicly for training or evaluation. We make it possible for teams to build their own function-calling models with three key components:

* First is the **Dataset**. We're releasing one of the largest multi-turn function calling datasets publicly available, with over 100k examples across 15 toolkit configurations covering Gmail, Calendar, Drive, Jira, Slack, Asana, Todoist, WhatsApp, Stripe, and others. This includes both single-toolkit scenarios and multi-toolkit combinations like Gmail plus Calendar or Drive plus Docs.
* Second is **Synthesizer**, the complete agentic training data generation pipeline. This is the actual code and tutorials we used to create the dataset, and it lets you convert any OpenAPI spec into toolkit-specific training data with realistic agent traces and tool use patterns. You can generate training data for your own internal APIs or any other tools your team uses.
* Third is **Funcdex**, our proof-of-concept fine-tune of Qwen3 models that optimizes for specific APIs. We trained two variants at 0.6B and 1.7B parameters, with versions hyper-optimized for exact API combinations like Gmail plus Calendar or Jira plus Slack.

Funcdex-0.6B achieves a 0.7 function call string match score versus GPT-5 Mini's 0.58, and Funcdex-1.7B reaches 0.81 on synthetic benchmarks using real API definitions. The smallest model costs $0.19 per evaluation compared to $99.71 for GPT-5 Mini. We saw interesting training dynamics where early checkpoints sometimes outperformed final epochs, suggesting scope for optimization when targeting specific toolkits.
Funcdex works best when you have well-defined API calling patterns, elaborate system prompts that constrain the problem space, and clear success criteria for what constitutes a correct function call. If you're building AI agents for broad, open-ended tasks, you'll want frontier models. If you're automating specific, repeatable workflows, this framework lets you build something better and cheaper. You can take the dataset and fine-tune your own models, use Synthesizer to create training data for your specific tools and workflows, or use our models as a starting point and iterate from there.

We're excited to see how Funcdex will be used across organisations.

Model - [https://huggingface.co/prem-research/Funcdex-1.7B](https://huggingface.co/prem-research/Funcdex-1.7B)

Synthesizer - [github.com/prem-research/Funcdex-Synthesizer](http://github.com/prem-research/Funcdex-Synthesizer)

Dataset - [huggingface.co/datasets/prem-research/Funcdex-MT-Function-Calling](http://huggingface.co/datasets/prem-research/Funcdex-MT-Function-Calling)

HF Collection - [https://huggingface.co/collections/prem-research/funcdex](https://huggingface.co/collections/prem-research/funcdex)

Join the Prem community to chat and build with our team [here](https://discord.com/invite/tWwg9RSCXJ).

Note on synthetic data limitations: We used synthetic data because real tool use traces don't exist publicly. This makes benchmarks easier to beat than real production scenarios. Frontier models perform better on edge cases and unexpected inputs, but for narrow, well-defined use cases with elaborate system prompts, specialized small models trained on synthetic data still outperform general large models on specific tasks.

[Funcdex vs. other models](https://preview.redd.it/khtd8dm9yl1g1.png?width=1730&format=png&auto=webp&s=7630b8f4412ba1107dda91a10f85abe7cc6bea01)
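A "function call string match" score like the one quoted above can be approximated as exact comparison of normalized calls. This is a hedged sketch of one plausible way to compute it; the actual Funcdex evaluation may normalize calls differently:

```python
import json

def call_match(pred: str, gold: str) -> bool:
    """Compare two calls of the form name({"arg": ...}).

    Names must match exactly; JSON arguments are parsed so that key order
    and whitespace differences don't count as mismatches.
    """
    def parse(call: str):
        name, _, rest = call.partition("(")
        args = rest.rstrip(")").strip() or "{}"
        return name.strip(), json.loads(args)
    try:
        return parse(pred) == parse(gold)
    except (json.JSONDecodeError, ValueError):
        return False

def match_score(preds: list[str], golds: list[str]) -> float:
    """Fraction of predicted calls that exactly match the reference calls."""
    return sum(call_match(p, g) for p, g in zip(preds, golds)) / len(golds)
```

Under this definition a score of 0.7 means 70% of the model's emitted calls are semantically identical to the reference call for that turn.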
2025-11-16T11:55:49
https://www.reddit.com/r/LocalLLaMA/comments/1oyjw7i/announcing_funcdex_the_complete_framework_for/
backprophet
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oyjw7i
false
null
t3_1oyjw7i
/r/LocalLLaMA/comments/1oyjw7i/announcing_funcdex_the_complete_framework_for/
false
false
https://b.thumbs.redditm…ILglJSXOlLRw.jpg
10
null
Anime t2i dataset help
2
Hi. I'm on a mission to create a massive dataset of almost all popular anime (this is my first time making a dataset). I want the dataset to be flexible on characters and studio styles, so I took screencaps from [this](https://fancaps.net/anime/popular.php) website. I want this to be open source. I have a few questions:

I don't want to caption them in danbooru style because I want this dataset to be usable for a Qwen Image LoRA, and I want to target a general audience.

These screencaps have watermarks. Should I just mention them in the caption or remove them completely using [this](https://app.eigenai.com/eigen-banana-qwen-image-edit.html) website?

The characters in the dataset have different outfits, like Mikasa with the Survey Corps uniform, casuals, etc. Should I use a special tag for each outfit, or should I describe the outfit in detail instead? (That would mean the dataset will also be flexible on character outfits, like the JJK uniform, shinobi uniform, etc.) But the tags will be hard to maintain.

I first started with 10 images but then thought 20 would be a good starting point. So should I increase or decrease the number of images per character?

I'm almost finished with the Attack on Titan dataset, so if someone wants to help out with any other anime (which I haven't seen), we can make a Discord server.
2025-11-16T11:43:52
https://www.reddit.com/r/LocalLLaMA/comments/1oyjomj/anime_t2i_dataset_help/
Brave-Hold-9389
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oyjomj
false
null
t3_1oyjomj
/r/LocalLLaMA/comments/1oyjomj/anime_t2i_dataset_help/
false
false
self
2
null
How is my build for season of RTX?
0
I mean, other than the low storage; I have tons of NVMe drives on hand.
2025-11-16T11:27:01
https://www.reddit.com/r/buildapc/comments/1ovbfub/seasons_of_rtx_5000_dream_pc_build_contest_with/np49sku/
No_Night679
reddit.com
1970-01-01T00:00:00
0
{}
1oyje44
false
null
t3_1oyje44
/r/LocalLLaMA/comments/1oyje44/how_is_my_build_for_season_of_rtx/
false
false
default
0
null