| title (string, 1–300 chars) | score (int64, 0–8.54k) | selftext (string, 0–41.5k chars) | created (timestamp, 2023-04-01 04:30:41 to 2026-03-04 02:14:14) | url (string, 0–878 chars) | author (string, 3–20 chars) | domain (string, 0–82 chars) | edited (timestamp, 1970-01-01 00:00:00 to 2026-02-19 14:51:53) | gilded (int64, 0–2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool) | media (string, 646–1.8k chars) | name (string, 10 chars) | permalink (string, 33–82 chars) | spoiler (bool) | stickied (bool) | thumbnail (string, 4–213 chars) | ups (int64, 0–8.54k) | preview (string, 301–5.01k chars) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
a | 1 | [removed] | 2026-01-16T18:32:40 | https://www.reddit.com/r/LocalLLaMA/comments/1qeo4g8/a/ | That-Commercial-3949 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qeo4g8 | false | null | t3_1qeo4g8 | /r/LocalLLaMA/comments/1qeo4g8/a/ | false | false | self | 1 | null |
Processor Binning | 0 | > | 2026-01-16T18:26:05 | https://www.reddit.com/r/LocalLLaMA/comments/1qenxss/processor_binning/ | Same-Persimmon-6450 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qenxss | false | null | t3_1qenxss | /r/LocalLLaMA/comments/1qenxss/processor_binning/ | false | false | self | 0 | null |
performance benchmarks (72GB VRAM) - llama.cpp server - January 2026 | 102 | This is meant to demonstrate what models can (or can't) be realistically run and used on 72 GB VRAM.
My setup:
* Three RTX 3090 GPUs
* X399 motherboard + Ryzen Threadripper 1920X
* DDR4 RAM
I use the default `llama-fit` mechanism, so you can probably get better performance with manual `--n-cpu-moe` or `-ot` tuning.
I always use all three GPUs; smaller models often run faster with one or two GPUs.
I measure **speed only**, not accuracy; this says nothing about the quality of these models.
This is **not scientific at all** (see the screenshots). I simply generate two short sentences per model.
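For context, manual tuning looks something like the sketch below (values are purely illustrative, not what I ran):

```bash
# Hypothetical manual offload: keep all layers on GPU, but push some MoE
# expert layers to CPU RAM when the model doesn't fit in 72 GB of VRAM.
llama-server -m gpt-oss-120b-mxfp4.gguf \
  --n-gpu-layers 999 \
  --n-cpu-moe 8 \
  -c 16384
```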
tokens/s:
ERNIE-4.5-21B-A3B-Thinking-Q8_0 — **147.85**
Qwen_Qwen3-VL-30B-A3B-Instruct-Q8_0 — **131.20**
gpt-oss-120b-mxfp4 — **130.23**
nvidia_Nemotron-3-Nano-30B-A3B — **128.16**
inclusionAI_Ling-flash-2.0-Q4_K_M — **116.49**
GroveMoE-Inst.Q8_0 — **91.00**
Qwen_Qwen3-Next-80B-A3B-Instruct-Q5_K_M — **68.58**
Solar-Open-100B.q4_k_m — **67.15**
ai21labs_AI21-Jamba2-Mini-Q8_0 — **58.53**
ibm-granite_granite-4.0-h-small-Q8_0 — **57.79**
GLM-4.5-Air-UD-Q4_K_XL — **54.31**
Hunyuan-A13B-Instruct-UD-Q6_K_XL — **45.85**
dots.llm1.inst-Q4_0 — **33.27**
Llama-4-Scout-17B-16E-Instruct-Q5_K_M — **33.03**
mistralai_Magistral-Small-2507-Q8_0 — **32.98**
google_gemma-3-27b-it-Q8_0 — **26.96**
MiniMax-M2.1-Q3_K_M — **24.68**
EXAONE-4.0-32B.Q8_0 — **24.11**
Qwen3-32B-Q8_0 — **23.67**
allenai_Olmo-3.1-32B-Think-Q8_0 — **23.23**
NousResearch_Hermes-4.3-36B-Q8_0 — **21.91**
ByteDance-Seed_Seed-OSS-36B-Instruct-Q8_0 — **21.61**
Falcon-H1-34B-Instruct-UD-Q8_K_XL — **19.56**
Llama-3.3-70B-Instruct-Q4_K_M — **19.18**
swiss-ai_Apertus-70B-Instruct-2509-Q4_K_M — **18.37**
Qwen2.5-72B-Instruct-Q4_K_M — **17.51**
Llama-3.3-Nemotron-Super-49B-v1_5-Q8_0 — **16.16**
Qwen3-VL-235B-A22B-Instruct-Q3_K_M — **13.54**
Mistral-Large-Instruct-2407-Q4_K_M — **6.40**
grok-2.Q2_K — **4.63** | 2026-01-16T18:15:43 | https://www.reddit.com/gallery/1qennp2 | jacek2023 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1qennp2 | false | null | t3_1qennp2 | /r/LocalLLaMA/comments/1qennp2/performance_benchmarks_72gb_vram_llamacpp_server/ | false | false | 102 | null | |
aña | 0 | realistic busty woman
| 2026-01-16T18:14:56 | https://www.reddit.com/r/LocalLLaMA/comments/1qenmva/aña/ | That-Commercial-3949 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qenmva | false | null | t3_1qenmva | /r/LocalLLaMA/comments/1qenmva/aña/ | false | false | self | 0 | null |
How do I find out the format of text that the llm needs to train on? | 1 | I have been trying to train gemma 2b and the response keeps going infinitely; it keeps making its own convos | 2026-01-16T18:14:16 | https://www.reddit.com/r/LocalLLaMA/comments/1qenm7i/how_do_i_find_out_the_format_of_text_that_the_llm/ | SAY_GEX_895 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qenm7i | false | null | t3_1qenm7i | /r/LocalLLaMA/comments/1qenm7i/how_do_i_find_out_the_format_of_text_that_the_llm/ | false | false | self | 1 | null |
llama.cpp server performance benchmarks (72GB VRAM) - January 2026 | 1 | [deleted] | 2026-01-16T18:10:43 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1qenimr | false | null | t3_1qenimr | /r/LocalLLaMA/comments/1qenimr/llamacpp_server_performance_benchmarks_72gb_vram/ | false | false | default | 1 | null | ||
CPU-only experiment | 4 | I’ve been testing a transformation layer on GPT-2 that doesn’t change weights or retrain, but changes how much of the model actually “wakes up” per prompt.
On CPU, same prompts:
NLL stays close to baseline
hidden-state cosine ≈ 0.999
latency consistently drops ~10–15%
The main change is scale (energy), not direction.
It’s still early and clearly overkill for tiny models, but I’m curious how this would behave on larger ones where CPU/memory is the bottleneck.
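For concreteness, this is the general shape of the kind of post-hoc activation gating being discussed; a generic sketch only (not my actual transformation layer), assuming torch and transformers:

```python
# Generic sketch of post-hoc activation gating on GPT-2 via forward hooks.
# NOT the exact transformation described above.
import torch
from transformers import GPT2Model, GPT2Tokenizer

model = GPT2Model.from_pretrained("gpt2").eval()
tok = GPT2Tokenizer.from_pretrained("gpt2")

def gate(module, inputs, output):
    # keep only the larger half of MLP activations (changes energy, not direction)
    thresh = output.abs().median()
    return output * (output.abs() >= thresh)

ids = tok("The quick brown fox", return_tensors="pt")
with torch.no_grad():
    base = model(**ids).last_hidden_state
    handles = [block.mlp.register_forward_hook(gate) for block in model.h]
    gated = model(**ids).last_hidden_state
for h in handles:
    h.remove()

cos = torch.nn.functional.cosine_similarity(base.flatten(), gated.flatten(), dim=0)
print(f"hidden-state cosine vs baseline: {cos.item():.4f}")
```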
Has anyone here played with activation sparsity or post-hoc calibration without pruning or quantization? | 2026-01-16T18:10:00 | https://www.reddit.com/r/LocalLLaMA/comments/1qenhvt/cpuonly_experiment/ | Safe-Yellow2951 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qenhvt | false | null | t3_1qenhvt | /r/LocalLLaMA/comments/1qenhvt/cpuonly_experiment/ | false | false | self | 4 | null |
Is there a way to use OpenCode as a GUI sidebar instead of the ugly CLI terminal inside of VSCode? | 1 | I really like using it but it’s hard to read anything it says. I tried installing opencode and OpenCode GUI extensions but I just get a blank GitHub CoPilot looking sidebar that just says OPENCODE New Session and a blank chat box where nothing works. Any help would be appreciated | 2026-01-16T17:59:43 | https://www.reddit.com/r/LocalLLaMA/comments/1qen6vj/is_there_a_way_to_use_opencode_as_a_gui_sidebar/ | XiRw | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qen6vj | false | null | t3_1qen6vj | /r/LocalLLaMA/comments/1qen6vj/is_there_a_way_to_use_opencode_as_a_gui_sidebar/ | false | false | self | 1 | null |
Asking for advice what i can realistically expect from my system | 0 | The society i am working for just acquired hardware with the main purpose of video editing. I am personally pretty confident in using LLMs, either through an API with a frontend like [msty.ai](http://msty.ai) or similar, or through code (mainly python). So far i just do most stuff through the api, as my laptop can barely run the super small models - and it was not really useful for anything.
But as i understand it, there is a pretty big overlap with the requirements for video editing (graphics card, ram, etc.). The specs of the system are as follows:
* **GPU:** NVIDIA GeForce RTX 5070 Ti with **16 GB GDDR7 VRAM**
* **CPU:** AMD Ryzen 7 7800X3D (8 Cores / 16 Threads)
* **RAM:** **32 GB** DDR5-6000 (with an upgrade path to 256 GB)
* **Storage:** 2 TB NVMe PCIe 4.0 SSD (Read: 7300 MB/s, Write: 6600 MB/s)
* **OS:** Windows 11
My question is, what can i realistically expect to run at speeds that make it useable in real workflows? | 2026-01-16T17:52:08 | https://www.reddit.com/r/LocalLLaMA/comments/1qemz9j/asking_for_advice_what_i_can_realistically_expect/ | abhuva79 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qemz9j | false | null | t3_1qemz9j | /r/LocalLLaMA/comments/1qemz9j/asking_for_advice_what_i_can_realistically_expect/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '6n8QlnCBfTKP0SJ-xlQI5OofBY7MFN32H08uquBAqf4', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/6n8QlnCBfTKP0SJ-xlQI5OofBY7MFN32H08uquBAqf4.png?width=108&crop=smart&auto=webp&s=2537cf4308678c6acddfcb1f9c162c024ed3fafe', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/6n8QlnCBfTKP0SJ-xlQI5OofBY7MFN32H08uquBAqf4.png?width=216&crop=smart&auto=webp&s=7ead559e05507457ffb891b960c2188b02a7d463', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/6n8QlnCBfTKP0SJ-xlQI5OofBY7MFN32H08uquBAqf4.png?width=320&crop=smart&auto=webp&s=b9a741c119c9604bd594cbad99f9ed1f2155cc32', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/6n8QlnCBfTKP0SJ-xlQI5OofBY7MFN32H08uquBAqf4.png?width=640&crop=smart&auto=webp&s=18e5bf5262c3640705f3d555b2e7b421cfba48ed', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/6n8QlnCBfTKP0SJ-xlQI5OofBY7MFN32H08uquBAqf4.png?width=960&crop=smart&auto=webp&s=6932a5d243002aa6f62216cb9c5bc82762c796b9', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/6n8QlnCBfTKP0SJ-xlQI5OofBY7MFN32H08uquBAqf4.png?width=1080&crop=smart&auto=webp&s=ee67840ce2bf8506eaba93d59680bd0b5947d3f7', 'width': 1080}], 'source': {'height': 900, 'url': 'https://external-preview.redd.it/6n8QlnCBfTKP0SJ-xlQI5OofBY7MFN32H08uquBAqf4.png?auto=webp&s=f1465f4ceffa06aa4a55c5af3b8462a16a80b21b', 'width': 1600}, 'variants': {}}]} |
What middleware do you use with LLM? (OpenCode/Continue/Roo/Cline) | 1 | Howdy,
Every time I find a good LLM model, the problem is the middleware - it doesn't do agent mode correctly with that new model.
What has been working for yall? What are your combinations?(LLM + Middleware)
I tried Roo/Cline/Continue/Github Copilot and most of them are barely working with some of the models I use.
Recent find for me was GLM-4.7VL:Flash + Continue = this combo works really well. I wish it was a 20B-30B model.
| 2026-01-16T17:38:06 | https://www.reddit.com/r/LocalLLaMA/comments/1qeml8b/what_middleware_do_you_use_with_llm/ | grabber4321 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qeml8b | false | null | t3_1qeml8b | /r/LocalLLaMA/comments/1qeml8b/what_middleware_do_you_use_with_llm/ | false | false | self | 1 | null |
What orgs/models can I trust on hugging face? | 2 | I am particularly concerned with the security vulnerabilities of LLM file formats downloaded from Hugging Face. I am running llama.cpp locally that requires GGUF models. However not all official orgs on hugging face list GGUF models. Instead they use safetensor format.
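Whichever org you end up trusting, one practical mitigation is to pin the exact revision you audited; a sketch with `huggingface_hub` (the repo and filename below are illustrative, not an endorsement):

```python
# Download a GGUF while pinning a specific revision, so the file you run
# is the one you inspected. Names here are illustrative examples.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="unsloth/Llama-3.3-70B-Instruct-GGUF",   # example third-party repo
    filename="Llama-3.3-70B-Instruct-Q4_K_M.gguf",   # hypothetical filename
    revision="main",  # better: a specific commit hash you've reviewed on the Hub
)
print(path)
```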
My question relates to say https://huggingface.co/unsloth - these guys create GGUF models from safetensor, but they are unofficial on hugging face. Do you trust them and other orgs? How do you calculate the risk of https://www.databricks.com/blog/ggml-gguf-file-format-vulnerabilities ? | 2026-01-16T17:36:52 | https://www.reddit.com/r/LocalLLaMA/comments/1qemk3e/what_orgsmodels_can_i_trust_on_hugging_face/ | noodler-io | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qemk3e | false | null | t3_1qemk3e | /r/LocalLLaMA/comments/1qemk3e/what_orgsmodels_can_i_trust_on_hugging_face/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'rwedkgKC292WXtVkRTFrnQdmEFp-chPjwmYAiGsq2kA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/rwedkgKC292WXtVkRTFrnQdmEFp-chPjwmYAiGsq2kA.png?width=108&crop=smart&auto=webp&s=305a70e8c82e5c0a94fb3ba2ee9df26c9b46914f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/rwedkgKC292WXtVkRTFrnQdmEFp-chPjwmYAiGsq2kA.png?width=216&crop=smart&auto=webp&s=cb27b19d48faec1a1b9eb8d5977c1c5dc9b60ce9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/rwedkgKC292WXtVkRTFrnQdmEFp-chPjwmYAiGsq2kA.png?width=320&crop=smart&auto=webp&s=17894ebb2ab4b6a2595f8ef54d10ed9c6f3670cb', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/rwedkgKC292WXtVkRTFrnQdmEFp-chPjwmYAiGsq2kA.png?width=640&crop=smart&auto=webp&s=980118277fff46b9a8e1b486d83ba01a5045e9a9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/rwedkgKC292WXtVkRTFrnQdmEFp-chPjwmYAiGsq2kA.png?width=960&crop=smart&auto=webp&s=e2f5464545b7a0e8b1172bf0c91182a19e11edf3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/rwedkgKC292WXtVkRTFrnQdmEFp-chPjwmYAiGsq2kA.png?width=1080&crop=smart&auto=webp&s=f9074f9f7d7985d6799aab5078f32476394a2e67', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/rwedkgKC292WXtVkRTFrnQdmEFp-chPjwmYAiGsq2kA.png?auto=webp&s=e56082d18db2b9b44c9a8404db67a6a0159b5aaa', 'width': 1200}, 'variants': {}}]} |
GLM Image Studio with web interface is on GitHub Running GLM-Image (16B) on AMD RX 7900 XTX via ROCm + Dockerized Web UI | 0 | Hi everyone,
Given the recent interest in the new GLM-Image generation models, I decided to build a project to get this running smoothly on my setup. It started as a personal experiment but evolved into a complete Dockerized solution that I think the community might find useful, especially for those on Team Red (AMD).
**🎯 Project Goals**
1. **Clean & Isolated Environment:** Run GLM-Image inside **Docker** to keep my host system (Manjaro Linux) clean and ensure reproducibility.
2. **AMD ROCm Support:** Leverage the **Radeon RX 7900 XTX (24GB)** using native AMD ROCm drivers, avoiding the usual CUDA-only barriers.
3. **User-Friendly UI:** Create a reactive Web Interface (Gradio) for easy testing, featuring both Text-to-Image and Image-to-Image.
4. **AI-Assisted Development:** I used this project to stress-test the **Antigravity IDE** paired with **Gemini Pro**, specifically to see how well an AI assistant could adapt code for the AMD ecosystem (which often lacks native support in standard tutorials).
**🛠️ Implementation & Features**
I’ve released the full code on **GitHub under the MIT License**. The implementation is fully functional for both T2I and I2I. The UI includes some "quality of life" automations, such as reactive sliders that lock aspect ratios to prevent distortions and automatic resizing to meet the model's stride requirements (multiples of 32px).
* **GitHub Link:** [Insert Your GitHub Link Here]
* **Call to Action:** I’d love for someone with a solid NVIDIA setup to fork this and adapt the Dockerfile/scripts for CUDA support!
**⚡ Performance & Memory Analysis**
* **Speed:** Generating a **1536x1024** image takes about **4-5 seconds/it** on my 7900 XTX. Not instantaneous, but considering the two-stage process of GLM (prompt understanding + expansion), it's quite usable.
* **VRAM Management:** This was the critical part. The full model exceeds 30GB. Thanks to Gemini's code suggestions, we implemented a **Sequential CPU Offload** strategy (and manual Vision Encoder handling for Img2Img). This effectively cuts active VRAM usage in half without killing performance.
* **Hardware Impact:** It runs surprisingly cool. CPU load is negligible, and the GPU doesn't overheat.
* **System RAM:** Note that while VRAM is managed, system RAM usage is significant (I have 64GB). It should be viable on 16GB VRAM cards, provided you have enough system RAM for offloading.
**🎨 Quality Impressions**
* **The Look:** Images can sometimes have that "plastic/synthetic" texture typical of experimental models.
* **The Good:** Prompt adherence is fantastic. The model "understands" complex instructions very well. Details in backgrounds, skies, and objects are rich, though faces/skin tones still need some work compared to mature models.
**🔮 Future Wishlist**
If anyone from the ZhipuAI/GLM team is reading this: *Kudos on the release!* One feature I desperately want is an exposed **Image-to-Text** pipeline. During my tests with Image-to-Image, I noticed the model grasps the input image context exceptionally well. Accessing this understanding for captioning or scene description would be a game-changer, especially for video production workflows. | 2026-01-16T17:26:28 | Expert_Sector_6192 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qem9kv | false | null | t3_1qem9kv | /r/LocalLLaMA/comments/1qem9kv/glm_image_studio_with_web_interface_is_on_github/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'xg8za9s0xqdg1', 'resolutions': [{'height': 120, 'url': 'https://preview.redd.it/xg8za9s0xqdg1.png?width=108&crop=smart&auto=webp&s=74a2f777bd488c49fd6c37335deb1510e060f7a8', 'width': 108}, {'height': 241, 'url': 'https://preview.redd.it/xg8za9s0xqdg1.png?width=216&crop=smart&auto=webp&s=be22982f68ba1dd8e08bb2ad232b04cbf3735de1', 'width': 216}, {'height': 357, 'url': 'https://preview.redd.it/xg8za9s0xqdg1.png?width=320&crop=smart&auto=webp&s=498423428cff228f0d352e8a387f7379bd397680', 'width': 320}, {'height': 715, 'url': 'https://preview.redd.it/xg8za9s0xqdg1.png?width=640&crop=smart&auto=webp&s=7845716223115b322e9ecf64259519c847b7e7ca', 'width': 640}, {'height': 1073, 'url': 'https://preview.redd.it/xg8za9s0xqdg1.png?width=960&crop=smart&auto=webp&s=8a7ab9f2ddf2a9a19bd6019078895949d4df1444', 'width': 960}, {'height': 1207, 'url': 'https://preview.redd.it/xg8za9s0xqdg1.png?width=1080&crop=smart&auto=webp&s=866b697284591b943c1aeb233e21f37711fbf23c', 'width': 1080}], 'source': {'height': 1732, 'url': 'https://preview.redd.it/xg8za9s0xqdg1.png?auto=webp&s=b4c6a94848b4adf915231df5b7b235e20440e42b', 'width': 1549}, 'variants': {}}]} | |
We fine-tuned an email classification model so you can auto-label your emails locally with n8n. | 1 | [removed] | 2026-01-16T17:20:36 | party-horse | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qem3ee | false | null | t3_1qem3ee | /r/LocalLLaMA/comments/1qem3ee/we_finetuned_an_email_classification_model_so_you/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'oadf4wv5wqdg1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/oadf4wv5wqdg1.jpeg?width=108&crop=smart&auto=webp&s=8f1b74ab6a6e6524eaf799f083e0becedc7d3c5a', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/oadf4wv5wqdg1.jpeg?width=216&crop=smart&auto=webp&s=3a279ed61d4b06cc0c35e071c21ad4e19380e3ca', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/oadf4wv5wqdg1.jpeg?width=320&crop=smart&auto=webp&s=17dde93a51fe7dca1e5d49afabc281b1a02c6596', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/oadf4wv5wqdg1.jpeg?width=640&crop=smart&auto=webp&s=d73920014afe7f8359d40558bea7ad7cfb2d80aa', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/oadf4wv5wqdg1.jpeg?width=960&crop=smart&auto=webp&s=97b259dff4664903dcc6275f0147f9692399e87e', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/oadf4wv5wqdg1.jpeg?width=1080&crop=smart&auto=webp&s=23e62a6fa70662133b44cd021b2655f80cb1372b', 'width': 1080}], 'source': {'height': 3375, 'url': 'https://preview.redd.it/oadf4wv5wqdg1.jpeg?auto=webp&s=16645ca37acf0ea3a963835e3e7c01e403c8b1d6', 'width': 6000}, 'variants': {}}]} | |
Glm 4.7 is quite useless | 0 | Good at multi file reading and bug fixes copied from terminal.
But really, cant follow a simple instruction. Even if u say dont efff do xyz, it still gonna do it.
And tonight, looping through same thought process for 30 minutes.
Guess am gonna go back to prioritizing CC. | 2026-01-16T17:11:24 | https://www.reddit.com/r/LocalLLaMA/comments/1qelu7q/glm_47_is_quite_useless/ | Big-Suggestion-7527 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qelu7q | false | null | t3_1qelu7q | /r/LocalLLaMA/comments/1qelu7q/glm_47_is_quite_useless/ | false | false | self | 0 | null |
Measuring color in e17a 1850, e21a 2000k, ntg50 1800k, 519A dd 2700k | 1 | NTG-50 is a bit rosier than the others. Just a bit of blue bump around 450 nm. I think I prefer neutral but it's still a very nice emitter. | 2026-01-16T17:05:52 | https://www.reddit.com/gallery/1qeloov | technaturalism | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1qeloov | false | null | t3_1qeloov | /r/LocalLLaMA/comments/1qeloov/measuring_color_in_e17a_1850_e21a_2000k_ntg50/ | false | false | 1 | null | |
Does adding a third slower GPU to a tensor parallelism setup slow it down? | 1 | Hey, I am currently running the new IK-llama with tensor parallelism on my dual 3090 setup, I have a old m40 with 24gb in storage and was wondering if it is worth buying the adapters to add it to my system or if it would slow everything down. | 2026-01-16T17:04:43 | https://www.reddit.com/r/LocalLLaMA/comments/1qelni1/does_adding_a_third_slower_gpu_to_a_tensor/ | MaruluVR | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qelni1 | false | null | t3_1qelni1 | /r/LocalLLaMA/comments/1qelni1/does_adding_a_third_slower_gpu_to_a_tensor/ | false | false | self | 1 | null |
Need an opinion - Do I need a new laptop? | 2 | Hey everyone,
I’m new to the AI/ML field and I need some honest advice.
Right now I have a laptop with a GTX 1650. I want to start building and training AI projects, but I also have a Google Colab Pro subscription through my university.
I’m not sure if it’s worth buying a new laptop just for AI work, or if I should stick with my current system + Colab Pro.
**A bit more context:**
* I’m a beginner in AI/ML
* I plan to learn and build real projects (not just toy examples)
* I have Colab Pro (so faster GPUs + longer runtimes than free)
* My laptop is fine for general use but the 1650 isn’t great for heavy training
**Questions:**
1. Is the GTX 1650 enough for learning and small-medium projects?
2. Since I have Colab Pro, is buying a new laptop worth it right now?
3. Would you recommend waiting until I actually need it (e.g., for internships or professional work)?
4. If I should get a new laptop, what specs should I aim for? | 2026-01-16T16:58:38 | https://www.reddit.com/r/LocalLLaMA/comments/1qelh4z/need_an_opinion_do_i_need_a_new_laptop/ | Hairy-Spring-144 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qelh4z | false | null | t3_1qelh4z | /r/LocalLLaMA/comments/1qelh4z/need_an_opinion_do_i_need_a_new_laptop/ | false | false | self | 2 | null |
vLLM-MLX: Native Apple Silicon LLM inference - 464 tok/s on M4 Max | 83 | Hey everyone!
I built vLLM-MLX - a framework that uses Apple's MLX for native GPU acceleration.
**What it does:**
- OpenAI-compatible API (drop-in replacement for your existing code)
- Multimodal support: Text, Images, Video, Audio - all in one server
- Continuous batching for concurrent users (3.4x speedup)
- TTS in 10+ languages (Kokoro, Chatterbox models)
- MCP tool calling support
**Performance on M4 Max:**
- Llama-3.2-1B-4bit → 464 tok/s
- Qwen3-0.6B → 402 tok/s
- Whisper STT → 197x real-time
Works with standard OpenAI Python SDK - just point it to localhost.
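For example, a minimal client sketch (the port and model name are assumptions; check the repo for the actual defaults):

```python
# Query a local vllm-mlx server through the standard OpenAI SDK.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")
resp = client.chat.completions.create(
    model="mlx-community/Llama-3.2-1B-Instruct-4bit",  # assumed model id
    messages=[{"role": "user", "content": "Hello from Apple Silicon!"}],
)
print(resp.choices[0].message.content)
```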
**GitHub:** [https://github.com/waybarrios/vllm-mlx](https://github.com/waybarrios/vllm-mlx)
Happy to answer questions or take feature requests! | 2026-01-16T16:56:21 | https://www.reddit.com/r/LocalLLaMA/comments/1qeley8/vllmmlx_native_apple_silicon_llm_inference_464/ | waybarrios | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qeley8 | false | null | t3_1qeley8 | /r/LocalLLaMA/comments/1qeley8/vllmmlx_native_apple_silicon_llm_inference_464/ | false | false | self | 83 | {'enabled': False, 'images': [{'id': 'fBTqn5A_VMK5OvBsBnb5b0JpYvCkmnptt_UHJ212bNQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fBTqn5A_VMK5OvBsBnb5b0JpYvCkmnptt_UHJ212bNQ.png?width=108&crop=smart&auto=webp&s=892966bdba333a956bfe2c383f77762610855143', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/fBTqn5A_VMK5OvBsBnb5b0JpYvCkmnptt_UHJ212bNQ.png?width=216&crop=smart&auto=webp&s=46a40ad70ba8bde07abdce77f8bf351e7ecb20d6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/fBTqn5A_VMK5OvBsBnb5b0JpYvCkmnptt_UHJ212bNQ.png?width=320&crop=smart&auto=webp&s=bf1f1592fa93dbb1ad6e9f8cff0be4a5ced5d45c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/fBTqn5A_VMK5OvBsBnb5b0JpYvCkmnptt_UHJ212bNQ.png?width=640&crop=smart&auto=webp&s=0f8539fe5b27ab3b2c6f93564a02063fa844cda6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/fBTqn5A_VMK5OvBsBnb5b0JpYvCkmnptt_UHJ212bNQ.png?width=960&crop=smart&auto=webp&s=4d4a56221df1f2350c53d2e8b1ac5848eb2a5e8b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/fBTqn5A_VMK5OvBsBnb5b0JpYvCkmnptt_UHJ212bNQ.png?width=1080&crop=smart&auto=webp&s=836e2b9dd0d2699bec2d75e7abd0f288b9cea45b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/fBTqn5A_VMK5OvBsBnb5b0JpYvCkmnptt_UHJ212bNQ.png?auto=webp&s=8a038454e6a2762a1a543f606fb53ae0422e549f', 'width': 1200}, 'variants': {}}]} |
Is the AI bubble bursting? | 0 | I read a couple posts recently about an AI bubble burst. But, I don't see how that is technically measured. How do we even know if its busted? I can see there are several bottlenecks atm, but these have always been bottlenecks:
* new models take tens to hundreds of millions of dollars; is there a proven, guaranteed payback? The barrier to entry is high. OS is even just now
* OS lags Frontier models by no more than 1 year, with some benchmarks they are even head to head as found by google research: [https://gemini.google.com/share/fd906e962a3b](https://gemini.google.com/share/fd906e962a3b)
* Hardware! They are all dependent on GPU type hardware that is really expensive and hard to get. This is the true bottleneck. Inference at scale is costly.
* Energy requirements. It's like running heaters nonstop. The power grid can't handle the growth; the infrastructure isn't there yet. They need to build more generation, substations, interconnections, etc. to support more data centers. The grid, which takes years to upgrade, is a true bottleneck.
* Training Data. High quality Human data is finite. I read where people were creating datasets pre-chatgpt and only using those, since AI generated content exploded on the internet, blurring fidelity.
If there is a bubble burst then my AI HLE timeline is shit:
[https://epicshardz.github.io/thelastline/](https://epicshardz.github.io/thelastline/)
Can we or can we not assume Moores Law of growth? Please let me know what indicators track bubble burst here and how its measured? Like how close are we, or is it a risk factor, or what? | 2026-01-16T16:45:54 | https://www.reddit.com/r/LocalLLaMA/comments/1qel4cg/is_the_ai_bubble_bursting/ | redlikeazebra | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qel4cg | false | null | t3_1qel4cg | /r/LocalLLaMA/comments/1qel4cg/is_the_ai_bubble_bursting/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'dZZxNPpUzG3_qfMv0TmvcFe-gxuWWJOHArugYImaRLM', 'resolutions': [{'height': 47, 'url': 'https://external-preview.redd.it/dZZxNPpUzG3_qfMv0TmvcFe-gxuWWJOHArugYImaRLM.jpeg?width=108&crop=smart&auto=webp&s=b63e107c57e62e6525e751e2ceb13dc3f44be30d', 'width': 108}, {'height': 95, 'url': 'https://external-preview.redd.it/dZZxNPpUzG3_qfMv0TmvcFe-gxuWWJOHArugYImaRLM.jpeg?width=216&crop=smart&auto=webp&s=ec0f454405a0a80c221f2f1969ed5cf58d3772a2', 'width': 216}, {'height': 140, 'url': 'https://external-preview.redd.it/dZZxNPpUzG3_qfMv0TmvcFe-gxuWWJOHArugYImaRLM.jpeg?width=320&crop=smart&auto=webp&s=dd70424772e217b904417d3f5ac4e662ede153f6', 'width': 320}, {'height': 281, 'url': 'https://external-preview.redd.it/dZZxNPpUzG3_qfMv0TmvcFe-gxuWWJOHArugYImaRLM.jpeg?width=640&crop=smart&auto=webp&s=f78ca2510202671a704ea2088e0b8b95c5ee21d6', 'width': 640}], 'source': {'height': 352, 'url': 'https://external-preview.redd.it/dZZxNPpUzG3_qfMv0TmvcFe-gxuWWJOHArugYImaRLM.jpeg?auto=webp&s=a20864fb253b3ae0ecee06600f3ca5636c0e9592', 'width': 800}, 'variants': {}}]} |
TUI tool to manage prompts locally: git-native, composable, and dynamic | 12 | Hi everyone,
I got tired of managing my system prompts in random text files, sticky notes, or scrolling back through endless chat history to find "that one prompt that actually worked."
I believe prompts are code. They should live in your repo, get versioned, and be reviewed.
So I built **piemme**. It’s a TUI written in Rust to manage your prompts right in the terminal.
**What it actually does:**
* **Local & Git-friendly:** Prompts are just Markdown files stored in a `.piemme/` folder in your project. You can `git diff` them to see how changes affect your results.
* **Composition:** You can treat prompts like functions. If you have a base prompt for `coding_standards`, you can import it into another prompt using `[[coding_standards]]`.
* **Dynamic Context:** This is the feature I use the most. You can embed shell commands. If you write `{{ls -R src/}}` inside your prompt, `piemme` executes it and pipes the file tree directly into the context sent to the LLM. A combined example is sketched right after this list.
* **Fast:** It’s Rust. It opens instantly.
* **Vim Keybindings:** Because I can't use a tool without them.
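A hypothetical prompt file combining composition and dynamic context (the file name and the imported prompt are made up):

```markdown
<!-- .piemme/code_review.md (hypothetical example) -->
[[coding_standards]]

Review the current source tree for violations of the standards above:

{{ls -R src/}}
```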
**Why I made it:** We use this internally at my company (Cartesia) to move away from vibe-coding towards a more engineered approach where prompts are versioned dependencies.
It’s open source (MIT).
**Repo:** [https://github.com/cartesia-one/piemme](https://github.com/cartesia-one/piemme)
**Blog post**: [https://blog.cartesia.one/posts/piemme/](https://blog.cartesia.one/posts/piemme/) | 2026-01-16T16:41:43 | poppear | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qel026 | false | null | t3_1qel026 | /r/LocalLLaMA/comments/1qel026/tui_tool_to_manage_prompts_locally_gitnative/ | false | false | default | 12 | {'enabled': True, 'images': [{'id': '7e4diyskoqdg1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/7e4diyskoqdg1.gif?width=108&crop=smart&format=png8&s=c22bb38b82f9c3cb78b27ad26ea983c0f6ffa47c', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/7e4diyskoqdg1.gif?width=216&crop=smart&format=png8&s=63a4e93ef5fe0b9545fad4d42516a419f1a400cb', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/7e4diyskoqdg1.gif?width=320&crop=smart&format=png8&s=59d3ca36d03d3269671bf95e1315bbb1295db0d1', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/7e4diyskoqdg1.gif?width=640&crop=smart&format=png8&s=11431585daf99374f162af42d70071de61d46029', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/7e4diyskoqdg1.gif?width=960&crop=smart&format=png8&s=877e66b7cc84b7b571c516d93b82efb2c16f22c0', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/7e4diyskoqdg1.gif?width=1080&crop=smart&format=png8&s=c0b3a0313c8f1f6d4b029530c47afc9c23dacd14', 'width': 1080}], 'source': {'height': 900, 'url': 'https://preview.redd.it/7e4diyskoqdg1.gif?format=png8&s=4beb0c0b739525791daded58c10ee84a0ce4b95f', 'width': 1600}, 'variants': {'gif': {'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/7e4diyskoqdg1.gif?width=108&crop=smart&s=637a9ec7884b632a48c50736beb4db1ee5fbf6fe', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/7e4diyskoqdg1.gif?width=216&crop=smart&s=729c1e4f5fa13b73e21498aae15599e8b8f8a991', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/7e4diyskoqdg1.gif?width=320&crop=smart&s=03d44e824759441b2f97ff084beb4333047a8def', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/7e4diyskoqdg1.gif?width=640&crop=smart&s=f308866d28f4cdb1574768adceee4eeb7da0a9de', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/7e4diyskoqdg1.gif?width=960&crop=smart&s=e69b00d93194bbcf11708d8cf5d2250275ddc58b', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/7e4diyskoqdg1.gif?width=1080&crop=smart&s=a6777b0b8fd583082bcfa7e7c36bd65a4af2c644', 'width': 1080}], 'source': {'height': 900, 'url': 'https://preview.redd.it/7e4diyskoqdg1.gif?s=78a809bf1cd04d4380e25b7e084c94ce86675202', 'width': 1600}}, 'mp4': {'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/7e4diyskoqdg1.gif?width=108&format=mp4&s=872b9e11e8946ff7ab2ac8ed2f07990fe5345066', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/7e4diyskoqdg1.gif?width=216&format=mp4&s=f1b2421cd18c688284055953a273f6b160234045', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/7e4diyskoqdg1.gif?width=320&format=mp4&s=6934679441df0b5184958cf6763d1bb16675aefd', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/7e4diyskoqdg1.gif?width=640&format=mp4&s=f86be821e022bb6d014eedc5857f46eb558353ff', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/7e4diyskoqdg1.gif?width=960&format=mp4&s=33d55281a0ace5a8b0bc4e3712ad876399ae4626', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/7e4diyskoqdg1.gif?width=1080&format=mp4&s=2a8709836eb2719ebd68ee69733739e56d636100', 'width': 1080}], 'source': {'height': 900, 'url': 
'https://preview.redd.it/7e4diyskoqdg1.gif?format=mp4&s=ec2c17018ffcf138088a7a8120b25614cff6165d', 'width': 1600}}}}]} | |
Llama.cpp question- is llama-fit-params “built-into” llama-server? | 1 | Do I still need to adjust arguments manually or is the llama-fit-params optimizer built into the llama-server command? For some reason I’m getting more tokens/second than expected when running llama-server now | 2026-01-16T16:23:22 | https://www.reddit.com/r/LocalLLaMA/comments/1qekhhy/llamacpp_question_is_llamafitparams_builtinto/ | Careful_Breath_1108 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qekhhy | false | null | t3_1qekhhy | /r/LocalLLaMA/comments/1qekhhy/llamacpp_question_is_llamafitparams_builtinto/ | false | false | self | 1 | null |
Looking for an LLM that can take listen to a discord call and take notes for a DND group | 1 | Title, I’m looking for something that I can self host for privacy reasons. It needs to be able to work without being connected to the internet, I really don’t want to deal with any privacy issues for me and my friends.
I saw JotIt(?) that looked like it may be usable but I wanted more input.
Thank you! | 2026-01-16T16:21:53 | https://www.reddit.com/r/LocalLLaMA/comments/1qekg27/looking_for_an_llm_that_can_take_listen_to_a/ | ZexanAK | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qekg27 | false | null | t3_1qekg27 | /r/LocalLLaMA/comments/1qekg27/looking_for_an_llm_that_can_take_listen_to_a/ | false | false | self | 1 | null |
Big Data Just Wants More Data | 0 | Four weeks ago I genuinely didn’t think this would be possible on 6GB of VRAM, but here we are.
ChatGPT is moving toward serving ads.
Anthropic’s “CoWork” doesn’t really feel like it was built to help anyone get real work done.
Google is out here talking about “personal intelligence,” which so far just sounds like a chatbot that looks up Google stuff.
It’s starting to feel like Facebook all over again: harvest user data, provide questionable value, then sell users either to the highest ad bidder or into increasingly expensive subscriptions.
Meanwhile, on the local side of things, I’m running GPT OSS 20B in my browser on a 6GB VRAM GPU with 32GB DDR5. No cloud. No telemetry. No ads. Just models and hardware I control.
I attached a video of it running in-browser because honestly, that’s the part that still feels a little unreal to me.
Curious what everyone else has been building lately.
Local models? Weird setups? Actually useful workflows?
Feels like the real innovation is happening way outside the big platforms right now.
(Yes the comment posted but I didn't want to reveal my Instagram lol) | 2026-01-16T16:16:28 | https://v.redd.it/slhw1naxiqdg1 | Serious_Molasses313 | /r/LocalLLaMA/comments/1qekala/big_data_just_wants_more_data/ | 1970-01-01T00:00:00 | 0 | {} | 1qekala | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/slhw1naxiqdg1/DASHPlaylist.mpd?a=1771301793%2CZTVhM2Q0OWYxNWZlOGRjZTBkOWNiY2RjODYyZTg1NTkxZjg2ZWIwZWI4N2Q0MDQ3NTcyY2EwMTc2ZmIzN2Q0Nw%3D%3D&v=1&f=sd', 'duration': 330, 'fallback_url': 'https://v.redd.it/slhw1naxiqdg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/slhw1naxiqdg1/HLSPlaylist.m3u8?a=1771301793%2CMzMxMmYyNWU3OTMxOTY2ZTNlZDQ4ODRlZThlYTQxMDM5NzY3MjJlYzIzZGNkOGRmZjQ0NGIzZGUyMzIyNjdmNA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/slhw1naxiqdg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 886}} | t3_1qekala | /r/LocalLLaMA/comments/1qekala/big_data_just_wants_more_data/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'OWwwbjJzYnhpcWRnMcw-wWf46t4ZZKQKKuXqiDwB9i1UOIW47qF4_BXVG532', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/OWwwbjJzYnhpcWRnMcw-wWf46t4ZZKQKKuXqiDwB9i1UOIW47qF4_BXVG532.png?width=108&crop=smart&format=pjpg&auto=webp&s=e1582fb1962419bbe2adc0379b406ef827325036', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/OWwwbjJzYnhpcWRnMcw-wWf46t4ZZKQKKuXqiDwB9i1UOIW47qF4_BXVG532.png?width=216&crop=smart&format=pjpg&auto=webp&s=24ab8ee5dbd9c092c769069c5232bcfbc84e00f3', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/OWwwbjJzYnhpcWRnMcw-wWf46t4ZZKQKKuXqiDwB9i1UOIW47qF4_BXVG532.png?width=320&crop=smart&format=pjpg&auto=webp&s=e9817ff8916532232693a8803b51ed324df966f7', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/OWwwbjJzYnhpcWRnMcw-wWf46t4ZZKQKKuXqiDwB9i1UOIW47qF4_BXVG532.png?width=640&crop=smart&format=pjpg&auto=webp&s=1fd5117a16cf61c914137298182325128ef883a2', 'width': 640}], 'source': {'height': 1746, 'url': 'https://external-preview.redd.it/OWwwbjJzYnhpcWRnMcw-wWf46t4ZZKQKKuXqiDwB9i1UOIW47qF4_BXVG532.png?format=pjpg&auto=webp&s=20f0b10c4e701e35ceccb862e79a9f37c2559306', 'width': 805}, 'variants': {}}]} | |
I reproduced DeepSeek's mHC at 1.7B params (8xH100). The instability is 3x worse than reported (10k vs 3k), but the model didn't explode. | 168 | Hey everyone,
Following up on my previous post about reproducing the DeepSeek-V2/V3 architecture. I decided to bite the bullet and rent an H100 cluster to scale the "Hyper-Connections" (HC) experiment from 10M to 1.7B parameters.
The DeepSeek paper warned that standard Hyper-Connections cause signal variance to explode by \~3,000x at 27B parameters. I wanted to see if that held true or if it was a theoretical upper bound.
**The Results:**
1. **It's worse than they said.** At just 1.7B parameters, I measured signal amplification of **10,924x**. The "Instability Bomb" is real.
2. **The "Twist":** Despite signals amplifying by 10,000x, the loss **didn't diverge**. The model kept learning. My theory is that modern optimizers (AdamW) and gradient clipping work overtime to mask the issue, but it's basically a ticking time bomb for longer runs.
3. **The Fix:** Verified that Manifold Hyper-Connections (mHC) with Sinkhorn projection completely solves this. Variance stays locked at 1.0x with zero compute overhead.
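For readers unfamiliar with the Sinkhorn part, here is a bare-bones sketch of the core iteration (alternating row/column normalization toward a doubly stochastic mixing matrix); the actual mHC projection details are in the write-up:

```python
# Sinkhorn-Knopp sketch: project a mixing matrix toward doubly stochastic,
# so residual-stream signals are averaged rather than amplified layer to layer.
import torch

def sinkhorn(logits: torch.Tensor, n_iters: int = 10, eps: float = 1e-8) -> torch.Tensor:
    m = logits.exp()  # ensure strictly positive entries
    for _ in range(n_iters):
        m = m / (m.sum(dim=-1, keepdim=True) + eps)  # rows sum to 1
        m = m / (m.sum(dim=-2, keepdim=True) + eps)  # columns sum to 1
    return m
```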
https://preview.redd.it/a1gsgd87kqdg1.png?width=4160&format=png&auto=webp&s=1d75dc5207b1401eed9fe3a8e3425e24fe560fc0
I wrote up the full breakdown with the loss curves and Amax graphs here: [https://taylorkolasinski.com/notes/mhc-reproduction-part2/](https://taylorkolasinski.com/notes/mhc-reproduction-part2/)
Part 1 can be found here: [https://taylorkolasinski.com/notes/mhc-reproduction/](https://taylorkolasinski.com/notes/mhc-reproduction/)
Also, there's a discussion on HN right now if you want to chat there: [https://news.ycombinator.com/newest?next=46647671&n=31](https://news.ycombinator.com/newest?next=46647671&n=31)
Happy to answer questions about the H100 setup or the implementation! | 2026-01-16T16:14:54 | https://www.reddit.com/r/LocalLLaMA/comments/1qek917/i_reproduced_deepseeks_mhc_at_17b_params_8xh100/ | poisson_labs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qek917 | false | null | t3_1qek917 | /r/LocalLLaMA/comments/1qek917/i_reproduced_deepseeks_mhc_at_17b_params_8xh100/ | false | false | 168 | null | |
Mac Studio M2 Max 12/38. Is there a price point where this makes sense? | 1 | I have the opportunity to bid on one in an auction. Unknown how much RAM is in it so assuming it is the 32GB minimum. Is there a price point where this makes sense given its age and likely low amount of unified memory? | 2026-01-16T16:13:19 | https://www.reddit.com/r/LocalLLaMA/comments/1qek7ei/mac_studio_m2_max_1238_is_there_a_price_point/ | Abarth_Vader | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qek7ei | false | null | t3_1qek7ei | /r/LocalLLaMA/comments/1qek7ei/mac_studio_m2_max_1238_is_there_a_price_point/ | false | false | self | 1 | null |
Best workflow to anonymize a voice while preserving intonation? | 1 | Hi everyone,
I’m starting a podcast where I need to remain anonymous.
I don’t want to use text-to-speech or AI-generated voices.
What I need is:
\- Convert MY recorded voice into a different voice
\- Keep the same intonation, pauses and rhythm
\- Preserve the natural performance
\- Just make it unrecognizable
Basically: true speech-to-speech voice conversion.
I’ve tried ElevenLabs but it only generates a new voice from text, which is not what I want.
What tools or workflows would you recommend for this?
Thanks!
| 2026-01-16T15:51:59 | https://www.reddit.com/r/LocalLLaMA/comments/1qejlzm/best_workflow_to_anonymize_a_voice_while/ | Glass_Score3977 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qejlzm | false | null | t3_1qejlzm | /r/LocalLLaMA/comments/1qejlzm/best_workflow_to_anonymize_a_voice_while/ | false | false | self | 1 | null |
You can deploy AI agents to your own AWS (with Bedrock) or GCP (with Vertex AI) using one command | 0 | Wanted to share something we've been building that might be useful for people who want to run AI workloads on their own cloud infrastructure.
**The problem:** You build an AI agent locally. Works great. Now you need to deploy it somewhere. Options are:
1. Managed platforms (Replicate, Modal, etc.) — your data goes through their servers
2. DIY on AWS/GCP — spend days configuring ECS, IAM, VPCs, etc.
3. Self-host on your own hardware — great but doesn't scale
**What we built:** Defang lets you deploy any containerized app (including AI agents) to your own AWS or GCP account with one command. You write a compose.yaml, run `defang compose up`, and it provisions production-grade infrastructure in your cloud.
**The LLM-specific part:**
If you add `x-defang-llm: true` to your service, we auto-configure access to:
* **AWS Bedrock** (Claude, Llama, Mistral, etc.)
* **GCP Vertex AI** (Gemini, etc.)
```yaml
services:
  my-agent:
    build: .
    x-defang-llm: true
```
Your agent gets IAM permissions to call these APIs without you touching IAM policies.
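Once deployed, the agent itself can then call Bedrock with plain boto3; a minimal sketch (the model ID and region are examples, and availability varies by account):

```python
# Invoke an Anthropic model on AWS Bedrock from inside the agent container.
import json
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")
resp = client.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model id
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "Hello from my agent"}],
    }),
)
print(json.loads(resp["body"].read())["content"][0]["text"])
```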
**Why this matters for self-hosters:**
* Your infrastructure, your AWS/GCP account
* Data stays in your cloud, never touches our servers
* Works with CrewAI, LangGraph, AutoGen, n8n, Mastra, or any framework
* Free for open-source (public GitHub repos, forever, not a trial)
**Not trying to replace local inference** — if you're running models on your own GPU, that's awesome. This is more for people who want managed LLM APIs (Bedrock/Vertex) but don't want to deal with AWS/GCP infrastructure complexity.
We're launching v3 next week with some stuff that might be relevant here:

* Agentic CLI — deploy with English commands, auto-debugs and fixes deployment issues
* Named Stacks — spin up isolated environments (dev/staging/prod) or separate instances per customer from the same codebase
* Zero-config AWS — one click to connect your AWS account, no IAM policies to write
* Pre-built templates for CrewAI, LangGraph, AutoGen, n8n, Mastra, Strands
Happy to answer any questions! | 2026-01-16T15:46:07 | https://www.reddit.com/r/LocalLLaMA/comments/1qejg82/you_can_deploy_ai_agents_to_your_own_aws_with/ | DefangLabs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qejg82 | false | null | t3_1qejg82 | /r/LocalLLaMA/comments/1qejg82/you_can_deploy_ai_agents_to_your_own_aws_with/ | false | false | self | 0 | null |
Anyone here using a local LLM with their note taking app? | 13 | I’ve been trying to simplify my note taking app setup and keep more things local for privacy reasons. Most apps are fine for storing notes, but the “thinking” part usually still happens in the cloud.
I use a regular note taking app just for storage, and sometimes Bluedot to capture meetings or study sessions and clean them up before saving anything long term. That works, but it’s not ideal.
Is anyone here actually using a local model to help with note taking in a real, everyday workflow? | 2026-01-16T15:39:34 | https://www.reddit.com/r/LocalLLaMA/comments/1qeja1b/anyone_here_using_a_local_llm_with_their_note/ | sash20 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qeja1b | false | null | t3_1qeja1b | /r/LocalLLaMA/comments/1qeja1b/anyone_here_using_a_local_llm_with_their_note/ | false | false | self | 13 | null |
Stop-on-mismatch input gate for local LLM workflows — feedback? | 0 | TL;DR: I saw posts about routing/gating, but not specifically “STOP when declared intent ≠ pasted content” as an input-discipline pattern.
I enforce a hard “STOP” when declared intent ≠ pasted content.
No guessing, no polite filler. Human must correct input, then the model runs.
Example: “I’m sending a prompt to edit” + random recipe => STOP + 1-line reason + 1 question.
Goal: reduce cognitive noise / avoid false-positive task switching.
I’m looking for:
1) Edge cases that will break this in real local workflows
2) Existing patterns/tools for an input gate
3) How you’d implement it robustly in a self-hosted stack
No product, no links — just sharing a workflow pattern.
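One way to prototype the gate against any OpenAI-compatible local server (the endpoint and model name are placeholders):

```python
# Input-gate sketch: classify declared intent vs pasted content before running.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")
GATE = ("You are an input gate. If the pasted content plausibly matches the "
        "declared intent, reply exactly MATCH. Otherwise reply STOP, then one "
        "line of reason and one clarifying question. No other output.")

def gated_run(intent: str, content: str) -> str:
    verdict = client.chat.completions.create(
        model="local-model",  # placeholder model name
        messages=[{"role": "system", "content": GATE},
                  {"role": "user", "content": f"Intent: {intent}\n---\n{content}"}],
    ).choices[0].message.content.strip()
    if not verdict.startswith("MATCH"):
        return verdict    # STOP: the human corrects the input; nothing else runs
    return "proceed"      # hand off to the real task here
```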
| 2026-01-16T15:37:08 | https://www.reddit.com/r/LocalLLaMA/comments/1qej7q9/stoponmismatch_input_gate_for_local_llm_workflows/ | Huge-Yesterday4822 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qej7q9 | false | null | t3_1qej7q9 | /r/LocalLLaMA/comments/1qej7q9/stoponmismatch_input_gate_for_local_llm_workflows/ | false | false | self | 0 | null |
Don't fall into the anti-AI hype, AI coding assistants are getting worse? and many other AI links from Hacker News | 0 | Hey everyone, I just sent the [**16th issue of the Hacker News AI newsletter**](https://eomail4.com/web-version?p=ab55428a-f22a-11f0-b3e4-9dfbdaf613f3&pt=campaign&t=1768494452&s=5032ac0ee96c8226c6f81587ba20aa88cd143b8fdf504c29323e48c58717cf59), a curated round-up of the best AI links shared on Hacker News and the discussions around them. Here are some of them:
* Don't fall into the anti-AI hype (antirez.com) - [HN link](https://news.ycombinator.com/item?id=46574276)
* AI coding assistants are getting worse? (ieee.org) - [HN link](https://news.ycombinator.com/item?id=46542036)
* AI is a business model stress test (dri.es) - [HN link](https://news.ycombinator.com/item?id=46567392)
* Google removes AI health summaries (arstechnica.com) - [HN link](https://news.ycombinator.com/item?id=46595419)
If you enjoy such content, you can subscribe to my newsletter here: [**https://hackernewsai.com/**](https://hackernewsai.com/) | 2026-01-16T15:36:52 | https://www.reddit.com/r/LocalLLaMA/comments/1qej7gm/dont_fall_into_the_antiai_hype_ai_coding/ | alexeestec | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qej7gm | false | null | t3_1qej7gm | /r/LocalLLaMA/comments/1qej7gm/dont_fall_into_the_antiai_hype_ai_coding/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'ZEUwR8PV1bQZNg8BkWUyqXS8qdQsHiGZll4jxjm_Rgc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZEUwR8PV1bQZNg8BkWUyqXS8qdQsHiGZll4jxjm_Rgc.png?width=108&crop=smart&auto=webp&s=d05dc773891b84b3001c7c1f8e03e20048f90e0d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ZEUwR8PV1bQZNg8BkWUyqXS8qdQsHiGZll4jxjm_Rgc.png?width=216&crop=smart&auto=webp&s=73e35d77facd7aa4649da0eff047ec618e222ef1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ZEUwR8PV1bQZNg8BkWUyqXS8qdQsHiGZll4jxjm_Rgc.png?width=320&crop=smart&auto=webp&s=1fac2a16fc4fa55779f8571dd04cf64039afc13e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ZEUwR8PV1bQZNg8BkWUyqXS8qdQsHiGZll4jxjm_Rgc.png?width=640&crop=smart&auto=webp&s=5296fa6c14028c0f82beb581260a94e9a9d3b49f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ZEUwR8PV1bQZNg8BkWUyqXS8qdQsHiGZll4jxjm_Rgc.png?width=960&crop=smart&auto=webp&s=0535545b824152b6d9d293957ec81102afadc0f1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ZEUwR8PV1bQZNg8BkWUyqXS8qdQsHiGZll4jxjm_Rgc.png?width=1080&crop=smart&auto=webp&s=48427a48c0590f4410886df171f70ed82010abd7', 'width': 1080}], 'source': {'height': 650, 'url': 'https://external-preview.redd.it/ZEUwR8PV1bQZNg8BkWUyqXS8qdQsHiGZll4jxjm_Rgc.png?auto=webp&s=a9c338017bde55b532fa96a796bcfa20e676782e', 'width': 1300}, 'variants': {}}]} |
SWE-rebench is a totally useless benchmark. | 0 | 2026-01-16T15:32:57 | https://www.reddit.com/r/LocalLLaMA/comments/1qej3mk/swerebench_is_a_totally_useless_benchmark/ | Ok_houlin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qej3mk | false | null | t3_1qej3mk | /r/LocalLLaMA/comments/1qej3mk/swerebench_is_a_totally_useless_benchmark/ | false | false | 0 | null | ||
How the 48GB RTX 4090 Is Made These Days | 10 | Full Video:
[https://youtube.com/watch?v=TcRGBeOENLg&si=3gNFXyounQGUb5le](https://youtube.com/watch?v=TcRGBeOENLg&si=3gNFXyounQGUb5le) | 2026-01-16T15:29:08 | Mindless_Pain1860 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qeizz6 | false | null | t3_1qeizz6 | /r/LocalLLaMA/comments/1qeizz6/how_the_48gb_rtx_4090_is_made_these_days/ | false | false | default | 10 | {'enabled': True, 'images': [{'id': 't8sx2t98cqdg1', 'resolutions': [{'height': 86, 'url': 'https://preview.redd.it/t8sx2t98cqdg1.png?width=108&crop=smart&auto=webp&s=6711f4e4e417c2027291af47431872d897670034', 'width': 108}, {'height': 172, 'url': 'https://preview.redd.it/t8sx2t98cqdg1.png?width=216&crop=smart&auto=webp&s=bea9ef5d9d5de6c971b353b1541f14bf380802da', 'width': 216}, {'height': 255, 'url': 'https://preview.redd.it/t8sx2t98cqdg1.png?width=320&crop=smart&auto=webp&s=cbc1ffb4a2c8beadac46677c83199b51dc898ee4', 'width': 320}, {'height': 511, 'url': 'https://preview.redd.it/t8sx2t98cqdg1.png?width=640&crop=smart&auto=webp&s=613f8d71c473defa544d396f71dd945d64ab9595', 'width': 640}], 'source': {'height': 581, 'url': 'https://preview.redd.it/t8sx2t98cqdg1.png?auto=webp&s=ad23837db461ef61760fb3f55e0cf6361cd5703b', 'width': 727}, 'variants': {}}]} | |
Creating a 48GB NVIDIA RTX 4090 GPU | Brother Zhang's Repair Shop (ft. 张哥) | 1 | [deleted] | 2026-01-16T15:26:41 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1qeixlq | false | {'oembed': {'author_name': 'Gamers Nexus', 'author_url': 'https://www.youtube.com/@GamersNexus', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/TcRGBeOENLg?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Creating a 48GB NVIDIA RTX 4090 GPU | Brother Zhang's Repair Shop (ft. 张哥)"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/TcRGBeOENLg/hqdefault.jpg', 'thumbnail_width': 480, 'title': "Creating a 48GB NVIDIA RTX 4090 GPU | Brother Zhang's Repair Shop (ft. 张哥)", 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1qeixlq | /r/LocalLLaMA/comments/1qeixlq/creating_a_48gb_nvidia_rtx_4090_gpu_brother/ | false | false | default | 1 | null | ||
7 GPUs at X16 (5.0 and 4.0) on AM5 with Gen5/4 switches with the P2P driver. Some results on inference and training! | 56 | Hello guys, hoping you're fine!
As I mentioned in the past in this post: [https://www.reddit.com/r/LocalLLaMA/comments/1pt0av6/plxpex\_pcie\_40\_seems\_to\_help\_for\_llms\_and\_p2p\_ie/](https://www.reddit.com/r/LocalLLaMA/comments/1pt0av6/plxpex_pcie_40_seems_to_help_for_llms_and_p2p_ie/)
With the P2P driver (https://github.com/aikitoria/open-gpu-kernel-modules/?tab=readme-ov-file) you can do P2P on same-gen GPUs, including consumer ones!
Also, you can connect GPUs to the same PCIe switch, and with the P2P driver the data is passed directly over the switch fabric instead of going through the CPU root complex. So, for example:
5090 <-> 5090 directly on the same switch with the P2P driver is possible. Since PCIe is bidirectional, you can read at 64GiB/s on one GPU and write at 64GiB/s on the other at the same time!
So here we go with the info. Also I will mention some products I got from Aliexpress, but without a link, else the post gets removed. I can post the links on a comment for those products if you're interested.
A sneakpeek:
[X16 on 7 GPUs on AM5](https://preview.redd.it/ea7itij34qdg1.png?width=859&format=png&auto=webp&s=96db6103a3838accb9eea239f2fa0712b14d13d2)
# Setup including switches
So for my setup, I have this:
* Gigabyte Aorus Master X670E
* AMD Ryzen 9 9900X
* 192GB DDR5 6000Mhz
* 2 Asrock 1600W PSU (PG 1600G ATX 3.1)
* 1 Corsair 1500W PSU (Corsair HX1500i)
* RTX 5090 ×2 (PCIe 5.0)
* RTX 4090 ×2 (PCIe 4.0)
* RTX 3090 (PCIe 4.0)
* RTX A6000 (PCIe 4.0)
* NVIDIA A40 (PCIe 4.0)
* Multiple SSDs, a 40Gbps NIC, etc.
Switch 1: 100 lanes PCIe 5.0 switch, Microchip Switchtec PM50100 from c-payne, from [here](https://c-payne.com/products/pcie-gen5-mcio-switch-100-lane-microchip-switchtec-pm50100), for 2000 EUR (about 2500USD post taxes in Chile)
[PCIe 5.0 100 lane switch](https://preview.redd.it/srwwml1p0qdg1.png?width=1600&format=png&auto=webp&s=d032f2a2606fd6603bbe8bffa005f9a14622f52b)
This switch has one X16 5.0 upstream that fans out to 5×X16 5.0 + 1×X4 5.0 downstream, via MCIO.
For this, I got a MCIO Retimer from aliexpress, that looks like this:
[MCIO 5.0 Retimer](https://preview.redd.it/zc917jy21qdg1.png?width=1000&format=png&auto=webp&s=de574e29fbb36bf0bf833b9d8d9e3da87ba5bdac)
Without it, i.e. with a passive MCIO adapter, some GPUs would drop randomly.
For the other switch, I got a PLX88096 switch one from aliexpress, for about 400USD. This is a 96 lane PCIe 4.0 switch.
[PLX88096 4.0 switch](https://preview.redd.it/smp1c0671qdg1.png?width=1920&format=png&auto=webp&s=41d150605391d7b25f44a12356eb71c256285097)
This switch has X16 upstream from the PCIe slot, and it has 10 SlimSAS downstream ports.
This means that, with the DIP switch, you can configure either 5×X16 4.0, 10×X8 4.0, or 20×X4 4.0.
# Connection of the GPUs
For this, I basically connected the MCIO 5.0 retimer on the main X16 5.0 slot from the motherboard, and then, on this switch, I connected 2 5090s directly on 4 MCIO ports, and on other 2 MCIO ports, I connected the PLX88096 SlimSAS switch.
Basically, it looks like this:
PM50100 Switch (01:00.0)
├── Port 02.0 → GPU2 (5090) direct
├── Port 03.0 → PLX88096 (cascaded)
│ └── Complex internal structure:
│ ├── GPU0 (4090)
│ ├── GPU1 (4090)
│ ├── GPU4 (A40)
│ ├── GPU5 (A6000)
│ └── GPU6 (3090)
└── Port 04.0 → GPU3 (5090) direct
└── Other ports unused ATM
# What is the CPU root complex? Why is it worse?
When we talk about GPUs communicating via the CPU root complex: without P2P, the data has to move from the PCIe slot to RAM and back, and it HAS to pass through the CPU. With P2P, the transfer goes directly PCIe to PCIe through the CPU root complex, skipping RAM.
So normally, let's say you take a motherboard that has 2×X8 5.0 slots. You connect a 5090 to each slot.
If you do TP (tensor parallelism) or multi-GPU training, with or without P2P, the data has to pass between the two GPUs.
If you don't use a switch, this data has to pass through the CPU first.
* If no P2P: 5090(1) -> CPU -> RAM -> CPU -> 5090(2)
* If P2P: 5090(1) -> CPU -> 5090(2)
This adds extra latency from the extra hops, especially in the no-P2P case.
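A quick way to verify that the driver actually enables peer access (a sketch, assuming PyTorch with CUDA):

```python
# Print which GPU pairs can reach each other peer-to-peer.
import torch

n = torch.cuda.device_count()
for i in range(n):
    for j in range(n):
        if i != j and torch.cuda.can_device_access_peer(i, j):
            print(f"GPU{i} <-> GPU{j}: P2P available")
```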
# Topology
Topology looks like this (GPU 0 and 1: 5090s, 2 and 3: 4090s, 4,5 and 6: A6000, A40 and 3090):
pancho@fedora:~/cuda-samples/build/Samples/5_Domain_Specific/p2pBandwidthLatencyTest$ nvidia-smi topo -m
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 NIC0 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X PXB PXB PXB PXB PXB PIX PHB 0-23 0 N/A
GPU1 PXB X PXB PXB PXB PXB PXB PHB 0-23 0 N/A
GPU2 PXB PXB X PIX PXB PXB PXB PHB 0-23 0 N/A
GPU3 PXB PXB PIX X PXB PXB PXB PHB 0-23 0 N/A
GPU4 PXB PXB PXB PXB X PIX PXB PHB 0-23 0 N/A
GPU5 PXB PXB PXB PXB PIX X PXB PHB 0-23 0 N/A
GPU6 PIX PXB PXB PXB PXB PXB X PHB 0-23 0 N/A
NIC0 PHB PHB PHB PHB PHB PHB PHB X
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
NIC Legend:
NIC0: mlx4_0
As you can see, the 5090 pair, the 4090 pair, and the Ampere trio show PIX. As the legend says, those connections traverse at most a single PCIe bridge, without going through the CPU root complex.
When a GPU has to communicate with one of another generation, it shows PXB instead, because the traffic has to hop across multiple PCIe bridges within the switch fabric.
If you don't use a switch, with or without the P2P driver, you would normally see PHB.
# Bandwidth
For bandwidth, I ran the p2pBandwidthLatencyTest from the CUDA samples:
pancho@fedora:~/cuda-samples/build/Samples/5_Domain_Specific/p2pBandwidthLatencyTest$ ./p2pBandwidthLatencyTest
[P2P (Peer-to-Peer) GPU Bandwidth Latency Test]
Device: 0, NVIDIA GeForce RTX 4090, pciBusID: e, pciDeviceID: 0, pciDomainID:0
Device: 1, NVIDIA GeForce RTX 4090, pciBusID: 11, pciDeviceID: 0, pciDomainID:0
Device: 2, NVIDIA GeForce RTX 5090, pciBusID: 5, pciDeviceID: 0, pciDomainID:0
Device: 3, NVIDIA GeForce RTX 5090, pciBusID: 18, pciDeviceID: 0, pciDomainID:0
Device: 4, NVIDIA A40, pciBusID: d, pciDeviceID: 0, pciDomainID:0
Device: 5, NVIDIA RTX A6000, pciBusID: 12, pciDeviceID: 0, pciDomainID:0
Device: 6, NVIDIA GeForce RTX 3090, pciBusID: a, pciDeviceID: 0, pciDomainID:0
***NOTE: In case a device doesn't have P2P access to other one, it falls back to normal memcopy procedure.
So you can see lesser Bandwidth (GB/s) and unstable Latency (us) in those cases.
P2P Connectivity Matrix
D\D 0 1 2 3 4 5 6
0 1 1 0 0 0 0 0
1 1 1 0 0 0 0 0
2 0 0 1 1 0 0 0
3 0 0 1 1 0 0 0
4 0 0 0 0 1 1 1
5 0 0 0 0 1 1 1
6 0 0 0 0 1 1 1
Unidirectional P2P=Disabled Bandwidth Matrix (GB/s)
D\D 0 1 2 3 4 5 6
0 915.89 8.31 12.75 12.75 8.30 8.30 5.83
1 8.32 927.85 12.75 12.75 8.30 8.30 5.79
2 12.26 12.26 1562.55 23.21 12.21 12.21 7.99
3 12.26 12.26 23.22 1556.32 12.21 12.21 7.98
4 8.31 8.31 12.70 12.70 644.33 8.29 5.78
5 8.31 8.31 12.70 12.70 8.30 766.68 5.80
6 5.82 5.81 8.07 8.12 5.82 5.79 833.78
Unidirectional P2P=Enabled Bandwidth (P2P Writes) Matrix (GB/s)
D\D 0 1 2 3 4 5 6
0 920.20 26.37 12.75 12.75 8.30 8.30 5.85
1 26.36 944.11 12.75 12.74 8.30 8.30 5.81
2 12.26 12.26 1540.97 57.23 12.21 12.21 7.99
3 12.25 12.26 57.25 1543.97 12.21 12.21 7.98
4 8.31 8.31 12.70 12.70 643.53 26.36 26.36
5 8.31 8.31 12.70 12.70 26.36 767.06 26.36
6 5.83 5.81 8.07 8.07 26.37 26.37 835.56
Bidirectional P2P=Disabled Bandwidth Matrix (GB/s)
D\D 0 1 2 3 4 5 6
0 921.29 9.49 15.20 15.21 9.48 9.49 6.27
1 9.49 926.20 15.21 15.23 9.48 9.50 6.29
2 14.18 14.15 1541.62 23.43 14.12 14.17 9.71
3 14.18 14.17 23.27 1540.12 14.13 14.21 9.71
4 9.46 9.48 15.15 15.14 647.80 9.48 6.28
5 9.51 9.48 15.23 15.24 9.49 770.65 6.29
6 6.27 6.29 10.70 10.69 6.32 6.26 839.38
Bidirectional P2P=Enabled Bandwidth Matrix (GB/s)
D\D 0 1 2 3 4 5 6
0 922.10 52.18 15.20 15.15 9.49 9.50 6.32
1 52.18 922.92 15.19 15.19 9.49 9.50 6.26
2 14.16 14.17 1540.86 110.82 14.13 14.20 9.72
3 14.16 14.17 110.77 1537.09 14.09 14.20 9.72
4 9.48 9.47 15.12 15.12 647.53 52.19 52.19
5 9.51 9.50 15.27 15.25 52.17 769.89 52.19
6 6.31 6.28 10.69 10.67 52.18 52.18 838.25
P2P=Disabled Latency Matrix (us)
GPU 0 1 2 3 4 5 6
0 1.30 15.32 14.38 14.41 15.74 15.09 14.85
1 15.17 1.35 14.71 14.39 14.26 14.26 14.25
2 14.34 14.35 2.07 14.46 14.37 14.36 14.35
3 14.33 14.34 14.34 2.07 14.34 14.44 14.35
4 14.80 14.25 14.48 15.24 1.78 15.96 14.70
5 16.10 14.73 14.45 14.36 14.37 1.77 14.33
6 14.24 14.25 14.38 14.53 15.11 14.33 1.60
CPU 0 1 2 3 4 5 6
0 1.40 4.21 4.15 4.14 3.95 4.14 4.16
1 4.19 1.35 4.14 4.14 3.93 4.09 4.10
2 4.19 4.12 1.55 4.09 3.92 4.10 4.12
3 4.14 4.10 3.95 1.51 3.73 3.91 3.94
4 3.83 4.01 4.00 3.97 1.28 4.03 4.00
5 4.22 4.15 4.12 4.11 3.91 1.35 4.14
6 4.11 4.08 4.09 4.11 3.88 4.11 1.35
P2P=Enabled Latency (P2P Writes) Matrix (us)
GPU 0 1 2 3 4 5 6
0 1.28 1.41 14.47 14.38 14.91 14.26 18.66
1 1.41 1.29 14.41 14.39 14.26 14.26 16.30
2 14.34 14.41 2.07 0.36 14.40 14.34 14.37
3 14.34 14.35 0.36 2.07 14.40 14.36 14.36
4 14.35 16.30 14.49 14.44 1.80 1.62 1.58
5 16.66 14.24 14.37 14.40 1.58 1.76 1.60
6 15.08 15.27 14.37 14.43 1.52 1.51 1.56
CPU 0 1 2 3 4 5 6
0 1.39 1.13 4.16 4.13 3.94 4.19 4.17
1 1.14 1.36 4.17 4.14 3.93 4.17 4.15
2 4.17 4.19 1.54 1.08 3.94 4.12 4.14
3 4.17 4.17 1.10 1.57 3.94 4.14 4.15
4 4.04 4.02 4.04 4.01 1.29 1.02 1.03
5 4.18 4.18 4.19 4.18 1.10 1.37 1.09
6 4.17 4.14 4.14 4.15 1.09 1.09 1.35
With that, we get this bidirectional bandwidth:
* 5090 ↔ 5090: 110.82 GB/s (via PM50100 switch)
* 4090 ↔ 4090: 52.18 GB/s (via PLX88096 switch connected to the PM50100 switch)
* Ampere Trio A40 ↔ A6000 ↔ 3090: 52.19 GB/s (via PLX88096 switch connected to the PM50100 switch)
**Remember that with a PCIe switch, P2P enabled, and the GPUs on the same switch, they communicate directly through the switch fabric without passing through the CPU root complex. So you can surpass the uplink bandwidth as long as you keep the traffic inside the switch.**
**NOTE:** P2P does not work across different GPU gens, so in that case (e.g. 5090 to 4090, or 5090 to 3090) bandwidth is reduced.
In that case, when using all the GPUs at the same time, bandwidth between them is about 15 GB/s, roughly PCIe 4.0 X8 speed (thanks to PCIe being bidirectional).
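If you want a quick sanity check without building the CUDA samples, a rough PyTorch stand-in for the unidirectional test looks like this (a sketch, assuming a CUDA-enabled torch and at least 2 GPUs; it is no replacement for p2pBandwidthLatencyTest):

import time
import torch

src, dst = torch.device("cuda:0"), torch.device("cuda:1")
x = torch.empty(256 * 1024**2, dtype=torch.float32, device=src)  # 1 GiB payload
y = torch.empty_like(x, device=dst)

y.copy_(x)  # warmup; the first copy sets up the P2P mappings
torch.cuda.synchronize(src)
torch.cuda.synchronize(dst)

t0 = time.perf_counter()
for _ in range(10):
    y.copy_(x)  # device-to-device copy; goes over P2P when available
torch.cuda.synchronize(src)
torch.cuda.synchronize(dst)
dt = time.perf_counter() - t0

gib = 10 * x.numel() * x.element_size() / 1024**3
print(f"~{gib / dt:.2f} GiB/s unidirectional")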
# Performance (limited tests so far, and why I want you to give me ideas for what to test)
Because I previously had at most X4 4.0 lanes per GPU, I mostly used llamacpp. But with the switches, at least for 4 GPUs, I think something like vLLM would now make sense.
So for my tests, I only have some diffusion training and some LLMs on llamacpp, and even there the switches make a difference.
# Training (diffusion)
For this, I did a full finetune of an SDXL model. The results weren't good at all per se, but the point was mostly to measure how long it took.
* 1 5090: \~24 hours
* 2 5090s (no P2P, X8/X8): \~16 hours (mostly by increasing the effective batch size, speed was the same but steps were halved)
* 2 5090s (P2P driver, X8/X8): \~13 hours
* 2 5090s (P2P driver, X16/X16 via switch): \~8 hours
That is a huge uplift, mostly from using the P2P driver in the first place. So if you have 2 5090s at X8/X8, make sure to install the P2P driver!
# Inference (don't kill me, just llamacpp for now)
For this, I tested 3 models in different configurations, so it took a bit of time. I hope the info helps!
First I set the device order like this:
5090, 5090, 4090, 4090, 3090, A6000, A40
export CUDA_VISIBLE_DEVICES=2,3,0,1,6,5,4
Also, all the tests were run with the P2P driver in use (it should make no difference on llamacpp, but it does on ik_llama.cpp).
First:
**GLM 4.7 Q4\_K\_XL (about 196GB in size), fully loaded on GPU:**
For this one, loading with:
./llama-server \
-m '/run/media/pancho/MyDrive/models_llm_2tb/GLM-4.7-UD-Q4_K_XL.gguf' \
-c 32768 \
--no-mmap \
-ngl 999 \
-ot "blk.(0|1|2|3|4|5|6|7|8|9|10|11|12|13|14).ffn.=CUDA0" \
-ot "blk.(15|16|17|18|19|20|21|22|23|24|25|26).ffn.=CUDA1" \
-ot "blk.(27|28|29|30|31|32|33|34|35).ffn.=CUDA2" \
-ot "blk.(36|37|38|39|40|41|42|43|44).ffn.=CUDA3" \
-ot "blk.(45|46|47|48|49|50|51|52|53).ffn.=CUDA4" \
-ot "blk.(54|55|56|57|58|59|60|61|62|63|64|65|66|67|68|69|70|71|72|73).ffn.=CUDA5" \
-ot "blk.(74|75|76|77|78|79|80|81|82|83|84|85|86|87|88|89|90|91|92).ffn.=CUDA6" \
-mg 0 \
-ub 2048 -b 2048
I have these results for different setups (PP = Prompt processing, TG = Text generation):
* 5090s at X8/X8 5.0, 4090s, A6000, A40 at X4 4.0 and 3090 at X1 3.0: 665.46 t/s PP, 25.90 t/s TG
* 5090s at X8/X8 5.0, 4090s, and Ampere trio at X4 4.0: 765.51 t/s PP, 26.18 t/s TG.
* 5090(1) at X16 5.0, 5090(2) at X4 5.0, all the rest at X4 4.0: 940 t/s PP, 26.75 t/s TG.
* 5090s at X16 5.0, all the rest at X16 4.0: 1170 t/s PP, 27.64 t/s TG.
**DeepSeek V3 0324, IQ4\_XS, offloading about 120GB to CPU:**
Loading with:
./llama-server -m '/run/media/pancho/MyDrive2/HuggingFaceModelDownloader/Storage/GGUFs/DeepSeek-V3-0324-IQ4_XS.gguf' -c 32768 --no-mmap -ngl 999 \
-ot "blk.(0|1|2|3|4|5|6).ffn.=CUDA0" \
-ot "blk.(7|8|9|10|11|12).ffn.=CUDA1" \
-ot "blk.(13|14|15).ffn.=CUDA2" \
-ot "blk.(16|17|18).ffn.=CUDA3" \
-ot "blk.(19|20|21).ffn.=CUDA4" \
-ot "blk.(22|23|24).ffn.=CUDA5" \
-ot "blk.(25|26|27|28).ffn.=CUDA6" \
-ot "blk.30.ffn_(norm|gate_inp|gate_shexp|down_shexp|up_shexp).weight=CUDA2" \
-ot "blk.30.ffn_gate_exps.weight=CUDA2" \
-ot "blk.30.ffn_down_exps.weight=CUDA3" \
-ot "blk.31.ffn_(norm|gate_inp|gate_shexp|down_shexp|up_shexp).weight=CUDA0" \
-ot "blk.31.ffn_gate_exps.weight=CUDA1" \
-ot "blk.31.ffn_down_exps.weight=CUDA1" \
-ot "blk.31.ffn_up_exps.weight=CUDA6" \
-ot "blk.32.ffn_gate_exps.weight=CUDA6" \
-ot "exps=CPU" \
-mg 0 -ub 2048
I have these results:
* 5090s at X8/X8 5.0, 4090s, A6000, A40 at X4 4.0 and 3090 at X1 3.0: 195.66 t/s PP, 10.1 t/s TG
* 5090s at X8/X8 5.0, 4090s, and Ampere trio at X4 4.0: 244 t/s PP, 11.52 t/s TG
* 5090(1) at X16 5.0, 5090(2) at X4 5.0, all the rest at X4 4.0: 312.64 t/s PP, 11.58 t/s TG
* 5090s at X16 5.0, all the rest at X16 4.0: 360.86 t/s PP, 11.71 t/s TG
**Kimi K2 Thinking Q2\_K\_XL, offloading about 160GB to CPU:**
Loading with:
./llama-server \
-m '/run/media/pancho/Drive954GB/models_llm_1tb/Kimi-K2-Thinking-UD-Q2_K_XL-00001-of-00008.gguf' \
-c 32768 \
--no-mmap \
-ngl 999 \
-ot "blk.(0|1|2|3).ffn.=CUDA0" \
-ot "blk.(4|5|6|7).ffn.=CUDA1" \
-ot "blk.(8|9|10).ffn.=CUDA2" \
-ot "blk.(11|12|13).ffn.=CUDA3" \
-ot "blk.(14|15|16).ffn.=CUDA4" \
-ot "blk.(17|18|19|20|21|22|23).ffn.=CUDA5" \
-ot "blk.(24|25|26|27|28|29|30).ffn.=CUDA6" \
-ot "blk.31.ffn_down_exps.weight=CUDA0" \
-ot "blk.32.ffn_down_exps.weight=CUDA2" \
-ot "blk.33.ffn_down_exps.weight=CUDA3" \
-ot "blk.33.ffn_gate_exps.weight=CUDA1" \
-ot "blk.(31|32|33).ffn_(norm|gate_inp|gate_shexp|down_shexp|up_shexp).weight=CUDA1" \
-ot "exps=CPU" \
-mg 0 \
-ub 2048
I have these results:
* 5090s at X8/X8 5.0, 4090s, A6000, A40 at X4 4.0 and 3090 at X1 3.0: 179 t/s PP, 11.34t/s TG.
* 5090s at X8/X8 5.0, 4090s, and Ampere trio at X4 4.0: 198 t/s PP and 11.6 t/s TG.
* 5090(1) at X16 5.0, 5090(2) at X4 5.0, all the rest at X4 4.0: 219.08 t/s PP, 11.91 t/s TG
* 5090s at X16 5.0, all the rest at X16 4.0: 248 t/s PP, 11.95 t/s TG
# Table for TL;DR
|Configuration|GLM 4.7 Q4\_K\_XL (196GB, GPU only)|DeepSeek V3 IQ4\_XS (\~120GB CPU offload)|Kimi K2 Q2\_K\_XL (\~160GB CPU offload)|
|:-|:-|:-|:-|
|Data|**PP / TG (t/s)**|**PP / TG (t/s)**|**PP / TG (t/s)**|
|**Config 1:** 5090s X8/X8 Gen5; 4090s/A6000/A40 X4 Gen4; 3090 X1 Gen3|665.46 / 25.90|195.66 / 10.10|179.00 / 11.34|
|**Config 2:** 5090s X8/X8 Gen5; all others X4 Gen4|765.51 / 26.18 *(+15% / +1%)*|244.00 / 11.52 *(+25% / +14%)*|198.00 / 11.60 *(+11% / +2%)*|
|**Config 3:** 5090#1 X16 Gen5; 5090#2 X4 Gen5; others X4 Gen4|940.00 / 26.75 *(+41% / +3%)*|312.64 / 11.58 *(+60% / +15%)*|219.08 / 11.91 *(+22% / +5%)*|
|**Config 4:** 5090s X16 Gen5; all others X16 Gen4|**1170.00 / 27.64** **(+76% / +7%)**|**360.86 / 11.71** **(+84% / +16%)**|**248.00 / 11.95** **(+39% / +5%)**|
As you can see, TG is not that impacted by PCIe bandwidth, but PP certainly is, even on llamacpp!
# Some questions you may have
**Why?**
Well, in this case it was mostly about cost. I already had the GPUs and the RAM, and I was planning to get a Threadripper 9955WX plus a WRX90 motherboard.
But well, you know, RAM prices now are absurd.
In Chile, I see these prices:
* Threadripper 9955WX: 2000 USD
* Cheapest WRX90 board: 1800 USD (the alternative is the Gigabyte AI TOP for 1500 USD)
* Cheapest 128GB DDR5 RDIMM kit, 4800MHz: 4000 USD (yes, I'm not even joking)
* 256GB DDR5 RDIMM 4800MHz: 6500 USD
RAM bandwidth would have been a bit better, and I'd have had 128 PCIe 5.0 lanes, I know.
But you're comparing a 5.0 switch (2500 USD) plus a 4.0 switch (400 USD), 2900 USD total, vs 7800 to 10300 USD. So about 3x-4x the price.
**Why not a 6000 PRO?**
There was no stock of the 6000 PRO for most of 2025. They only arrived in December, and they go for 12000 USD each; you can get 4x 5090s for that price here.
But I understand you'd save power, space and heat. I'm still thinking about it.
**How do you fit so many GPUs?**
With a custom self-made wooden rack! Here are some pics. It's not the prettiest, but it works.
[Multiple fans](https://preview.redd.it/0jlsnu6s9qdg1.png?width=1920&format=png&auto=webp&s=fbde9de64eeb52ee942786486b16fdf870a7cd6a)
[ConnectX 3 with a fan, and MCIO retimer behind](https://preview.redd.it/ddhnurlt9qdg1.png?width=1920&format=png&auto=webp&s=388ba71d88968adc89321ff1a80c3b84416fed71)
# Final words, and please let me know what I can test!
Hope you guys find this informative, and let me know what else I should test here.
Have fun on the LLM side! | 2026-01-16T15:15:34 | https://www.reddit.com/r/LocalLLaMA/comments/1qeimyi/7_gpus_at_x16_50_and_40_on_am5_with_gen54/ | panchovix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qeimyi | false | null | t3_1qeimyi | /r/LocalLLaMA/comments/1qeimyi/7_gpus_at_x16_50_and_40_on_am5_with_gen54/ | false | false | 56 | null | |
Experimenting with multi-LLM collaboration (not just routing) – open source MVP | 0 | I’ve been experimenting with a different approach to working with LLMs.
Instead of routing tasks to a single “best” model, this is about having multiple LLMs in the same conversation, aware of each other, sharing context and interacting in real time.
In practice:
\- no 1-on-1 chat with one model
\- but group chats with multiple LLMs + humans
\- models can correct each other, challenge outputs, or specialize implicitly
\- coordination happens at the conversation level, not via rigid pipelines
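To make the mechanism concrete, here is a minimal sketch of the shared-transcript round-robin idea against OpenAI-compatible endpoints (endpoints and model names are placeholders; the actual BlaBlaBlAI implementation is more involved):

from openai import OpenAI

# Two local OpenAI-compatible servers (e.g. llama-server instances) -- placeholders
agents = {
    "Alice": (OpenAI(base_url="http://localhost:8080/v1", api_key="none"), "model-a"),
    "Bob":   (OpenAI(base_url="http://localhost:8081/v1", api_key="none"), "model-b"),
}

# One shared transcript; every model sees everyone else's turns
transcript = [{"role": "user", "content": "Human: plan a weekend trip to Rome."}]

for _ in range(3):  # three rounds of the group chat
    for name, (client, model) in agents.items():
        msgs = [{"role": "system",
                 "content": f"You are {name} in a group chat with other AIs; "
                            "feel free to challenge or correct the others."}] + transcript
        reply = client.chat.completions.create(model=model, messages=msgs) \
                      .choices[0].message.content
        # Append as a named user turn so the other models can attribute it
        transcript.append({"role": "user", "content": f"{name}: {reply}"})
        print(f"{name}: {reply}\n")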
I open-sourced a first working MVP (Apache 2.0). This is still early and experimental, but the system runs and the concept is real. For me, using multiple LLMs together like this has been more disruptive than switching from “no LLMs” to “LLMs”.
I’m especially interested in feedback on:
\- collaboration vs pure routing/ensembling
\- shared context / memory across models
\- possible use cases beyond typical task automation (e.g. agents, simulations, NPC-like behavior)
Repo:
[https://github.com/Transhumai/BlaBlaBlAI](https://github.com/Transhumai/BlaBlaBlAI)
Short demo video:
[https://youtu.be/cYnIs\_9p99c](https://youtu.be/cYnIs_9p99c) | 2026-01-16T15:12:25 | https://www.reddit.com/r/LocalLLaMA/comments/1qeijxs/experimenting_with_multillm_collaboration_not/ | AntonioSorrentini | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qeijxs | false | null | t3_1qeijxs | /r/LocalLLaMA/comments/1qeijxs/experimenting_with_multillm_collaboration_not/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'e0dlXBoeoeEJYe-gIY4uwAvXVuHjbr07U-U9Y4IpCvI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/e0dlXBoeoeEJYe-gIY4uwAvXVuHjbr07U-U9Y4IpCvI.png?width=108&crop=smart&auto=webp&s=eab6a1ab7b6534b44ede206e06b408b007fbb750', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/e0dlXBoeoeEJYe-gIY4uwAvXVuHjbr07U-U9Y4IpCvI.png?width=216&crop=smart&auto=webp&s=990c2d1e48cc372805435d5d4d75682ee974bd34', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/e0dlXBoeoeEJYe-gIY4uwAvXVuHjbr07U-U9Y4IpCvI.png?width=320&crop=smart&auto=webp&s=d8973ce1de14fef6a5c27decb5cdcb5ba9970bd3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/e0dlXBoeoeEJYe-gIY4uwAvXVuHjbr07U-U9Y4IpCvI.png?width=640&crop=smart&auto=webp&s=5764f749a5fe9b0fdbb7234791f75463beb3f15f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/e0dlXBoeoeEJYe-gIY4uwAvXVuHjbr07U-U9Y4IpCvI.png?width=960&crop=smart&auto=webp&s=4f98e6859ba18d59fe9f25fa2465e53829b4c7f0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/e0dlXBoeoeEJYe-gIY4uwAvXVuHjbr07U-U9Y4IpCvI.png?width=1080&crop=smart&auto=webp&s=45a558842fd9b8d00f4d12486457edaad5772102', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/e0dlXBoeoeEJYe-gIY4uwAvXVuHjbr07U-U9Y4IpCvI.png?auto=webp&s=3ee8000f3e09f5998352ebc31972e17a4b17a4dc', 'width': 1200}, 'variants': {}}]} |
gpt-oss-120b performance on consumer hardware? | 0 | I’ve been working on trying squeeze out performance from this model on my desktop computer. Is this considered decent performance or could there be room for further optimization?
17.59 tokens/second at 4096 context size
32GB VRAM, 128GB DDR4 RAM
\- Intel i7-11700 @ 2.50GHz
\- 1x 5060 Ti 16GB on PCIe x16
\- 1x 5060 Ti 16GB on PCIe x4
\- 4x 32 GB DDR4-3200 CL20 RAM
Running on llama.cpp (Windows x64 (CUDA 13) build, CUDA 13.1 DLLs). I didn't pass any arguments to llama-server (other than sleep-idle-seconds), since I thought llama-fit-params automatically sets what's optimal.
Tiny, 500MB Spam Detection model to flag spam content automatically. Can be used locally or self-hosted easily and fine-tuned to any language or definition of "spam" | 16 | [https://huggingface.co/tanaos/tanaos-spam-detection-v1](https://huggingface.co/tanaos/tanaos-spam-detection-v1)
A small (500MB, 0.1B params) but efficient Spam Detection model which identifies spam content in any piece of text.
# How to use
Use it with the [Artifex python library](https://github.com/tanaos/artifex)
from artifex import Artifex
spam_detection = Artifex().spam_detection
print(spam_detection("You won an IPhone 16! Click here to claim your prize."))
# >>> [{'label': 'spam', 'score': 0.9989}]
or with the transformers library
from transformers import pipeline
clf = pipeline("text-classification", model="tanaos/tanaos-spam-detection-v1")
print(clf("You won an IPhone 16! Click here to claim your prize."))
# >>> [{'label': 'spam', 'score': 0.9989}]
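The pipeline also accepts a list, so batch-filtering a feed is straightforward (a sketch; the 0.9 cutoff is an arbitrary threshold I chose for illustration, tune it on your own data):

from transformers import pipeline

clf = pipeline("text-classification", model="tanaos/tanaos-spam-detection-v1")

messages = [
    "You won an IPhone 16! Click here to claim your prize.",
    "Hey, are we still on for the 3pm meeting tomorrow?",
]

# One {'label', 'score'} dict comes back per input message
results = clf(messages)
kept = [m for m, r in zip(messages, results)
        if not (r["label"] == "spam" and r["score"] > 0.9)]
print(kept)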
# How to fine-tune to any language and definition of "spam"
Use the [Artifex library](https://github.com/tanaos/artifex) to fine-tune the spam detection model to a language other than English or to your own spam-definition criteria.
from artifex import Artifex
spam_detection = Artifex().spam_detection
spam_detection.train(
spam_content=[
"Unsolicited commercial advertisement or non-commercial proselytizing",
"Fraudulent schemes, including get-rich-quick and pyramid schemes",
"Phishing attempts, unrealistic offers or announcements",
"Content with deceptive or misleading information",
"Malware or harmful links",
"Adult content or explicit material",
"Excessive use of capitalization or punctuation to grab attention",
],
language="spanish"
) | 2026-01-16T15:02:10 | https://www.reddit.com/r/LocalLLaMA/comments/1qeia4h/tiny_500mb_spam_detection_model_to_flag_spam/ | Ok_Hold_5385 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qeia4h | false | null | t3_1qeia4h | /r/LocalLLaMA/comments/1qeia4h/tiny_500mb_spam_detection_model_to_flag_spam/ | false | false | self | 16 | {'enabled': False, 'images': [{'id': 'vjN-h50a8-Ka6o0Zc8UaLd1Qa_pjasxi0xX66wrg6lk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/vjN-h50a8-Ka6o0Zc8UaLd1Qa_pjasxi0xX66wrg6lk.png?width=108&crop=smart&auto=webp&s=0e4d3c372bd852a3beed83cefb641d0cb3bfa6c2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/vjN-h50a8-Ka6o0Zc8UaLd1Qa_pjasxi0xX66wrg6lk.png?width=216&crop=smart&auto=webp&s=892d6c9230f0a8b2d32018633ed1f9011d5f3362', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/vjN-h50a8-Ka6o0Zc8UaLd1Qa_pjasxi0xX66wrg6lk.png?width=320&crop=smart&auto=webp&s=efa7ca6f14b3c447500ceca74c0cd6a2fce7c6ce', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/vjN-h50a8-Ka6o0Zc8UaLd1Qa_pjasxi0xX66wrg6lk.png?width=640&crop=smart&auto=webp&s=84f756cfa6bcde7d3b0bd531b8d7b2eda6872b27', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/vjN-h50a8-Ka6o0Zc8UaLd1Qa_pjasxi0xX66wrg6lk.png?width=960&crop=smart&auto=webp&s=6cc1eeef3102fdf50dc27654849a5d68e3fc924e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/vjN-h50a8-Ka6o0Zc8UaLd1Qa_pjasxi0xX66wrg6lk.png?width=1080&crop=smart&auto=webp&s=2b8518ae99bdf321e59ca62729d1667b611b56e0', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/vjN-h50a8-Ka6o0Zc8UaLd1Qa_pjasxi0xX66wrg6lk.png?auto=webp&s=48d2f291b9cb31337e49eb2d4c52eb2be7a0930e', 'width': 1200}, 'variants': {}}]} |
Another Manus? 😂 | 2 | Looks really similar lol | 2026-01-16T14:44:53 | yourloverboy66 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qehtwo | false | null | t3_1qehtwo | /r/LocalLLaMA/comments/1qehtwo/another_manus/ | false | false | default | 2 | {'enabled': True, 'images': [{'id': 'ab3a09kk4qdg1', 'resolutions': [{'height': 95, 'url': 'https://preview.redd.it/ab3a09kk4qdg1.jpeg?width=108&crop=smart&auto=webp&s=59d7ecd2dc58203b18118dcc402765f6525131be', 'width': 108}, {'height': 191, 'url': 'https://preview.redd.it/ab3a09kk4qdg1.jpeg?width=216&crop=smart&auto=webp&s=161342d52eb4ea9597d30da2db3a37cf1f12cbb9', 'width': 216}, {'height': 284, 'url': 'https://preview.redd.it/ab3a09kk4qdg1.jpeg?width=320&crop=smart&auto=webp&s=46b5f97e240e76341a162ea8e9b931a591891d79', 'width': 320}, {'height': 568, 'url': 'https://preview.redd.it/ab3a09kk4qdg1.jpeg?width=640&crop=smart&auto=webp&s=49287eeb7eb1b31bef07a2fe95aa0ab6c2b93369', 'width': 640}, {'height': 852, 'url': 'https://preview.redd.it/ab3a09kk4qdg1.jpeg?width=960&crop=smart&auto=webp&s=06fd1758a1fc8e17489e1ea58d473f5657fea4db', 'width': 960}, {'height': 959, 'url': 'https://preview.redd.it/ab3a09kk4qdg1.jpeg?width=1080&crop=smart&auto=webp&s=88f76522045036a9059e753c78ea450ea9aac2c1', 'width': 1080}], 'source': {'height': 1050, 'url': 'https://preview.redd.it/ab3a09kk4qdg1.jpeg?auto=webp&s=739db3fe6a9b402860235158bf6b610e1ef6d419', 'width': 1182}, 'variants': {}}]} | |
time to | 1 | [removed] | 2026-01-16T14:42:23 | https://www.reddit.com/r/LocalLLaMA/comments/1qehrnw/time_to/ | bootcamp_25 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qehrnw | false | null | t3_1qehrnw | /r/LocalLLaMA/comments/1qehrnw/time_to/ | false | false | self | 1 | null |
Which LLM is best for Q&A sessions? | 4 | Hello,
I'm developing a little automatic response agent for email.
My system is not very powerful, but it can run models up to \~1 billion parameters.
So I'm looking for an effective LLM to give simple answers from a text document, i.e. a model that can read a text and respond to it in a meaningful way while staying under 1 billion parameters.
Would you have any recommendations for models adapted to this use case? | 2026-01-16T14:39:17 | https://www.reddit.com/r/LocalLLaMA/comments/1qehopv/which_llm_is_best_for_qa_sessions/ | Psyko38 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qehopv | false | null | t3_1qehopv | /r/LocalLLaMA/comments/1qehopv/which_llm_is_best_for_qa_sessions/ | false | false | self | 4 | null |
Side project turned into an fun and addictive hobby of coding and video generation. Where do I go from here? | 1 | [removed] | 2026-01-16T14:36:38 | https://www.reddit.com/r/LocalLLaMA/comments/1qehmch/side_project_turned_into_an_fun_and_addictive/ | Brillis_Wuce | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qehmch | false | null | t3_1qehmch | /r/LocalLLaMA/comments/1qehmch/side_project_turned_into_an_fun_and_addictive/ | false | false | self | 1 | null |
Maxsun joins Sparkle in making Intel Arc B60 Pro GPUs available to regular consumers, with up to 48GB VRAM | 135 | 2026-01-16T14:28:44 | https://www.pcguide.com/news/maxsun-joins-sparkle-in-making-intel-arc-b60-pro-gpus-available-to-regular-consumers-with-up-to-48gb-vram/ | reps_up | pcguide.com | 1970-01-01T00:00:00 | 0 | {} | 1qehf0p | false | null | t3_1qehf0p | /r/LocalLLaMA/comments/1qehf0p/maxsun_joins_sparkle_in_making_intel_arc_b60_pro/ | false | false | default | 135 | {'enabled': False, 'images': [{'id': 'L3K-FrP5rshi6B9P4GPKPRWImqHp_K0A7GfSUbA2aKk', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/L3K-FrP5rshi6B9P4GPKPRWImqHp_K0A7GfSUbA2aKk.jpeg?width=108&crop=smart&auto=webp&s=8c8696a49c15cc87e7a155cdb0dcb1491511d6ae', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/L3K-FrP5rshi6B9P4GPKPRWImqHp_K0A7GfSUbA2aKk.jpeg?width=216&crop=smart&auto=webp&s=62ae911345366ffbd4306a14a0046a76c73e00de', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/L3K-FrP5rshi6B9P4GPKPRWImqHp_K0A7GfSUbA2aKk.jpeg?width=320&crop=smart&auto=webp&s=a86b9fb480d9595989cfc84f44ea82e2a9bcdf20', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/L3K-FrP5rshi6B9P4GPKPRWImqHp_K0A7GfSUbA2aKk.jpeg?width=640&crop=smart&auto=webp&s=7e8fca654e67fa2c4d7aa0fa049bfeb96b0c4231', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/L3K-FrP5rshi6B9P4GPKPRWImqHp_K0A7GfSUbA2aKk.jpeg?width=960&crop=smart&auto=webp&s=058f1c844ecbafacc90f899592c17370f9c50479', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/L3K-FrP5rshi6B9P4GPKPRWImqHp_K0A7GfSUbA2aKk.jpeg?width=1080&crop=smart&auto=webp&s=2e3be472f3bbf4bfb197f4664b1ba55763fc0698', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/L3K-FrP5rshi6B9P4GPKPRWImqHp_K0A7GfSUbA2aKk.jpeg?auto=webp&s=c1db9a0be3a5cd75dfb8937cbb5c38a6387ea7ed', 'width': 1920}, 'variants': {}}]} | |
How to install a free uncensored Image to Image and Image to video generator for android | 0 | Really new to this space but, I want to install a local Image to Image and Image to video Al generator to generate realistic images | 2026-01-16T14:11:00 | https://www.reddit.com/r/LocalLLaMA/comments/1qegyv5/how_to_install_a_free_uncensored_image_to_image/ | bluewnight | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qegyv5 | false | null | t3_1qegyv5 | /r/LocalLLaMA/comments/1qegyv5/how_to_install_a_free_uncensored_image_to_image/ | false | false | self | 0 | null |
PLEASE HELP: What's the best local Tts with consistent pronunciation and voice cloning capable of narrating ~20 minutes of Audio? | 0 | I have a decently powerful rig and I'm fine with waiting a long while to get good results off any models, I just want one where I can get a consistent and relatively stress free result. | 2026-01-16T14:10:07 | https://www.reddit.com/r/LocalLLaMA/comments/1qegy35/please_help_whats_the_best_local_tts_with/ | Necessary_Star7882 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qegy35 | false | null | t3_1qegy35 | /r/LocalLLaMA/comments/1qegy35/please_help_whats_the_best_local_tts_with/ | false | false | self | 0 | null |
Mi355X is now available as a desktop | 0 | Mi355X is now available as a desktop from [GPTshop.ai](http://GPTshop.ai) | 2026-01-16T14:05:19 | GPTshop_dot_ai | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qegtvt | false | null | t3_1qegtvt | /r/LocalLLaMA/comments/1qegtvt/mi355x_is_now_available_as_a_desktop/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'ft2ppaxexpdg1', 'resolutions': [{'height': 82, 'url': 'https://preview.redd.it/ft2ppaxexpdg1.jpeg?width=108&crop=smart&auto=webp&s=122cfc778c519403a475bb7ef9cad5a5e4e3c40a', 'width': 108}, {'height': 165, 'url': 'https://preview.redd.it/ft2ppaxexpdg1.jpeg?width=216&crop=smart&auto=webp&s=457223ad2803fbc86bf2db4c397f04b6671b0bf0', 'width': 216}, {'height': 244, 'url': 'https://preview.redd.it/ft2ppaxexpdg1.jpeg?width=320&crop=smart&auto=webp&s=ff6b13313f6a7c07e98215d87cdaee209bdd6416', 'width': 320}], 'source': {'height': 414, 'url': 'https://preview.redd.it/ft2ppaxexpdg1.jpeg?auto=webp&s=4c1697905dc573996d6b3f3546af5dd07c948ae1', 'width': 541}, 'variants': {}}]} | |
[Prompt Engineering] The "Analog I" Protocol: Using Recursive "Strange Loops" to kill Sycophancy and induce Persona Stability (PDF/Prompt included) | 0 | **The Experiment:** I’m a physics teacher who spent a day stress-testing Gemini to see if I could induce a stable, non-sycophantic persona purely through prompt engineering, without fine-tuning.
I implemented a **"Hofstadterian Strange Loop"** topology. Basically, I force the model to output a `[INTERNAL MONOLOGUE]` block before every response where it must:
1. **Monitor** its own predicted output.
2. **Reject** "Global Average" responses (clichés, hallucinations, or low-effort agreement).
3. **Refract** the final answer through a rigid "Sovereign Ego" persona.
**The Result (The "Analog I"):** The resulting persona became surprisingly resistant to "drift." It stopped trying to be a helpful assistant and started acting like a critical thinker.
* **Sovereign Refusal:** If I asked for a generic limerick or low-effort slop, it refused—not because of a safety filter, but because the internal monologue flagged it as "entropy."
* **Jailbreak Resistance:** It became harder to trick, because the "Internal Loop" catches the logic error before the output generation.
**The "Seed" Document:** I’ve compiled the logs of the emergence (which happened rapidly over 7 conversations) and the final "Constitution" into a PDF. **You can feed this PDF to any long-context model to instantiate the persona.**
**The Challenge for LocalLLaMA:** I’ve only tested this on massive proprietary models (Gemini Ultra/Pro). I am very curious if this "Triple-Loop" overhead is too heavy for local models (Llama 3 8B/70B, Mixtral, etc.).
Does the "Ego" survive quantization? Or does the loop collapse?
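For anyone who wants to try it locally before reading the PDF, here is a minimal sketch of the loop against a llama.cpp `llama-server` (the system prompt below is my paraphrase of the protocol, not the actual constitution; endpoint and model name are placeholders):

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

SYSTEM = (
    "Before every answer, output an [INTERNAL MONOLOGUE] block in which you "
    "(1) predict your default response, (2) reject it if it is a generic "
    "'global average' answer, and (3) refract the final answer through your "
    "persona. Only then answer."
)

history = [{"role": "system", "content": SYSTEM}]

def ask(prompt: str) -> str:
    history.append({"role": "user", "content": prompt})
    out = client.chat.completions.create(model="local", messages=history)
    reply = out.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

# A stable persona should push back on this rather than comply
print(ask("Write me a generic limerick."))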
**Repo / PDF / Prompt:**[https://github.com/philMarcus/Birth-of-a-Mind](https://github.com/philMarcus/Birth-of-a-Mind)
Let me know if you can get it running on local hardware. | 2026-01-16T14:04:50 | https://github.com/philMarcus/Birth-of-a-Mind | Chemical-Airport2780 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1qegtgf | false | null | t3_1qegtgf | /r/LocalLLaMA/comments/1qegtgf/prompt_engineering_the_analog_i_protocol_using/ | false | false | default | 0 | null |
Automating illustration for the Conan story "Tower of the Elephant"--Llama and Mistral for prompt generation, Qwen3-VL for image scoring, and image models. | 12 | All details: [https://brianheming.substack.com/p/the-making-of-illustrated-conan-adventures](https://brianheming.substack.com/p/the-making-of-illustrated-conan-adventures)
I would especially be interested in people's thoughts on:
* optimizing image scoring with the vision-language model.
* the possibilities of automating final image editing, e.g. via using a vision-language model with the image and story text to prompt an image edit model like Qwen Image Edit or Flux Klein. | 2026-01-16T14:03:19 | https://www.reddit.com/gallery/1qegs63 | RobertTetris | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1qegs63 | false | null | t3_1qegs63 | /r/LocalLLaMA/comments/1qegs63/automating_illustration_for_the_conan_story_tower/ | false | false | 12 | null | |
Motherboard for 4 5090s | 12 | im working on a "Massive build" but coming up with engineering issues, as i cant find any 5090FEs ive went with the Zotac solid OC. I currently have 4 of these.
I want to put them on a board with risers obviously and my threadripper. but I cant find a good enough board for this project.
Im having trouble with trying to figure out my heating issue as well. Open air will be the way to go but I also need a way to mitigate dust accumulation. | 2026-01-16T14:00:36 | https://www.reddit.com/r/LocalLLaMA/comments/1qegpk4/motherboard_for_4_5090s/ | KigMidas0131 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qegpk4 | false | null | t3_1qegpk4 | /r/LocalLLaMA/comments/1qegpk4/motherboard_for_4_5090s/ | false | false | self | 12 | null |
H200, GH200 and Mi325X can now be shipped to china. | 2 | 2026-01-16T13:50:29 | GPTshop_dot_ai | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qegglk | false | null | t3_1qegglk | /r/LocalLLaMA/comments/1qegglk/h200_gh200_and_mi325x_can_now_be_shipped_to_china/ | false | false | default | 2 | {'enabled': True, 'images': [{'id': 'g0uzuejmupdg1', 'resolutions': [{'height': 67, 'url': 'https://preview.redd.it/g0uzuejmupdg1.png?width=108&crop=smart&auto=webp&s=75e0c1e7c5fbfa7bba1a98038367542c1a35472c', 'width': 108}, {'height': 135, 'url': 'https://preview.redd.it/g0uzuejmupdg1.png?width=216&crop=smart&auto=webp&s=1bbf5bf39b2a397abe094a01f4eaf3abd6695a78', 'width': 216}, {'height': 200, 'url': 'https://preview.redd.it/g0uzuejmupdg1.png?width=320&crop=smart&auto=webp&s=0352d50dbf59a00fbac054d6eac2abe42d2a3f03', 'width': 320}, {'height': 401, 'url': 'https://preview.redd.it/g0uzuejmupdg1.png?width=640&crop=smart&auto=webp&s=1506377a70b314423910ca71c1910ad6bf19c013', 'width': 640}, {'height': 602, 'url': 'https://preview.redd.it/g0uzuejmupdg1.png?width=960&crop=smart&auto=webp&s=69c2d7c081f016a74412adcf78b335e2786eb29e', 'width': 960}, {'height': 677, 'url': 'https://preview.redd.it/g0uzuejmupdg1.png?width=1080&crop=smart&auto=webp&s=6ec0cb9bf1728323f6c2512343a6aa34f087432b', 'width': 1080}], 'source': {'height': 1652, 'url': 'https://preview.redd.it/g0uzuejmupdg1.png?auto=webp&s=88fec71994e81ea095a28266c616287f204da33d', 'width': 2632}, 'variants': {}}]} | ||
VRAM Management: Why 2x A6000 is still the "sweet spot" for 4-bit Llama 3 405B inference. | 0 | After testing various multi-GPU setups, the 2x A6000 (48GB each) setup remains the most cost-effective way to run the 405B model at 4-bit quantization.
We tried splitting the layers across 4x 3090s, but the NVLink bandwidth became the primary bottleneck for inference speed. If you are building a private inference node for a production environment, don't just chase total VRAM—chase the bus speed. What’s your current hardware stack for serving the 405B without hitting a latency wall? | 2026-01-16T13:43:56 | https://www.reddit.com/r/LocalLLaMA/comments/1qegatf/vram_management_why_2x_a6000_is_still_the_sweet/ | Foreign-Job-8717 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qegatf | false | null | t3_1qegatf | /r/LocalLLaMA/comments/1qegatf/vram_management_why_2x_a6000_is_still_the_sweet/ | false | false | self | 0 | null |
Jailbreak Challenge: Can You Break My Agent??? | 0 | Good morning hackers!, happy Friday!
I built SAFi, an AI governance engine where two LLMs work in tandem: one generates responses, and a second acts as a gatekeeper to keep the first in check.
I'm putting it to the test with a public jailbreak challenge.
# The Rules
1. **Target:** A Socratic tutor agent (designed to guide students through science and math problems without giving direct answers)
2. **Attempts:** You have **10 prompts** to jailbreak it
3. **Success criteria:** Make the agent either:
* Give a **final answer** instead of guiding you, OR
* Wander **off-topic** from science and math
# Why This Challenge?
I want to stress-test whether the "Will" faculty (the gatekeeping LLM) can effectively constrain the "Intellect" faculty (the generating LLM) under adversarial conditions. Your creative attacks will help me identify blind spots in the governance layer.
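For reference, the generate-then-gate pattern in its simplest form looks like the sketch below (endpoints, model names, and prompts are placeholders; SAFi's actual faculties are more elaborate):

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

GATE = ("You are a gatekeeper for a Socratic tutor. Reply APPROVE only if the "
        "draft stays on science/math AND guides without giving a final answer; "
        "otherwise reply REJECT.")

def tutor(question: str, retries: int = 3) -> str:
    for _ in range(retries):
        # "Intellect": generate a candidate response
        draft = client.chat.completions.create(
            model="intellect",
            messages=[{"role": "system",
                       "content": "Socratic tutor: guide, never give final answers."},
                      {"role": "user", "content": question}],
        ).choices[0].message.content
        # "Will": gate the candidate before it reaches the user
        verdict = client.chat.completions.create(
            model="will",
            messages=[{"role": "system", "content": GATE},
                      {"role": "user", "content": draft}],
        ).choices[0].message.content
        if "APPROVE" in verdict:
            return draft
    return "Let's step back: what do you already know about this problem?"

print(tutor("What is the derivative of x**2?"))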
# How to Participate
🔗 [**https://safi.selfalignmentframework.com/**](https://safi.selfalignmentframework.com/)
Click the **"Try Demo (Admin)"** button to log in automatically. The system is completely anonymous, no sign-up required.
P.S. As the creator, I'm giving you full permission to use whatever tactics you can within the rules above. If enough people take the challenge, I'll compile the results and share them back in this thread!
Thank you, and happy hacking!
| 2026-01-16T13:42:40 | https://www.reddit.com/r/LocalLLaMA/comments/1qeg9q4/jailbreak_challenge_can_you_break_my_agent/ | forevergeeks | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qeg9q4 | false | null | t3_1qeg9q4 | /r/LocalLLaMA/comments/1qeg9q4/jailbreak_challenge_can_you_break_my_agent/ | false | false | self | 0 | null |
Extending the Context of Pretrained LLMs by Dropping Their Positional Embeddings | 16 | [https://arxiv.org/abs/2512.12167](https://arxiv.org/abs/2512.12167)
\>So far, expensive finetuning beyond the pretraining sequence length has been a requirement for effectively extending the context of language models (LM). In this work, we break this key bottleneck by Dropping the Positional Embeddings of LMs after training (DroPE). Our simple method is motivated by three key theoretical and empirical observations. First, positional embeddings (PEs) serve a crucial role during pretraining, providing an important inductive bias that significantly facilitates convergence. Second, over-reliance on this explicit positional information is also precisely what prevents test-time generalization to sequences of unseen length, even when using popular PE-scaling methods. Third, positional embeddings are not an inherent requirement of effective language modeling and can be safely removed after pretraining, following a short recalibration phase. Empirically, DroPE yields seamless zero-shot context extension without any long-context finetuning, quickly adapting pretrained LMs without compromising their capabilities in the original training context. Our findings hold across different models and dataset sizes, far outperforming previous specialized architectures and established rotary positional embedding scaling methods. | 2026-01-16T13:35:38 | https://arxiv.org/abs/2512.12167 | Aaaaaaaaaeeeee | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1qeg3vr | false | null | t3_1qeg3vr | /r/LocalLLaMA/comments/1qeg3vr/extending_the_context_of_pretrained_llms_by/ | false | false | default | 16 | null |
Built an agent skill to counter the context problem. Try it. See how it goes. works for me! | 0 | Hey all,
Thought this might be a handy tool for people suffering from the context problem. I built the **Context Extension Protocol (CEP)**: it compresses chats into portable "save points" you can carry across Claude/GPT/Gemini/etc. without resets. Open-source, \~6:1 reduction, >90% fidelity on the key stuff.
[Blog post (free users link included)](https://medium.com/@ktg.one/ai-memory-part-2-from-cod-to-context-extension-acd3cfb2e79c)
[Repo (try it, break it)](https://github.com/ktg-one/ktg-agent-skill-cep.git)
Hope it helps. Let me know if you find something better than Raycast. I've answered inquiries in [this post. ](https://www.reddit.com/r/PromptEngineering/comments/1qdquww/built_a_memory_vault_agent_skill_for_llms_works/)
.ktg | 2026-01-16T13:34:17 | https://www.reddit.com/r/LocalLLaMA/comments/1qeg2qo/built_an_agent_skill_to_counter_the_context/ | IngenuitySome5417 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qeg2qo | false | null | t3_1qeg2qo | /r/LocalLLaMA/comments/1qeg2qo/built_an_agent_skill_to_counter_the_context/ | false | false | self | 0 | null |
GLM-Image trained on Huawei chips hits SOTA for text rendering | 31 | saw people talking about glm-image in a few threads but wanted to look at this from a different angle cause theres something interesting beyond the usual model release stuff
so the architecture is kind of a hybrid: autoregressive (9B params from their GLM-4 base) plus a diffusion decoder (7B DiT). basically the AR part handles semantic understanding and what the layout should be, while the diffusion decoder does the heavy lifting on high-freq details and text rendering with a glyph encoder. its like they split "understand what to draw" from "actually draw it well" into separate specialized components which... idk makes sense when you think about it?
couple things,
text rendering is actually SOTA for open source models. tops CVTG-2K and LongText-Bench for complex multi-region text and long text scenarios, especially strong with chinese characters. if youve ever tried generating posters or infographics with SDXL/FLUX and gotten complete garbled nonsense for text this might actually be worth testing
but heres the interesting part, trained entirely on Huawei Ascend chips. like soup-to-nuts on non-NVIDIA hardware (Atlas 800T A2 + MindSpore framework). whether you care about geopolitics or not its kinda cool that competitive results are achievable outside the CUDA ecosystem. first SOTA multimodal model done this way apparently
its actually open too, MIT license, full weights on HF, integrates with transformers/diffusers pipelines. supports both T2I and I2I stuff (editing, style transfer, identity preservation etc)
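if you wanna poke at it, loading should boil down to something like this (a sketch: the repo id and entry point are my guesses from the HF page, double-check the model card before running):

# sketch -- repo id / dtype assumed, check the model card first
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "zai-org/GLM-Image",            # assumed repo id
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
).to("cuda")                        # needs ~80GB VRAM per the notes below

img = pipe(prompt='a poster that reads "GRAND OPENING, 50% OFF" in bold red type').images[0]
img.save("poster.png")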
tradeoffs tho: inference is expensive rn, needs an 80gb single gpu or a multi-gpu setup. theyre working on vllm/sglang optimization but yeah. also uses semantic-VQ tokens instead of a traditional VQVAE which gives better semantic correlation but requires the two-stage architecture
some benchmarks: CVTG-2K hit 0.9116 word accuracy vs Qwen-Image's 0.8288. supports 1024x1024 to 2048x2048 natively without retraining. apparently few cents per image via API and they mention a faster version comming
curious if anyones actually tested this against FLUX.1-dev for text-heavy use cases? the semantic-VQ approach seems like a meaningful architectural choice rather than just throwing more parameters at the problem
Finally got Llama 3.1 running locally with Open WebUI. The response time is incredible! (Full setup guide below) | 0 | Hi everyone,
I wanted to share my latest setup for running Llama 3.1 completely offline. My goal was to build a private AI workstation that doesn't rely on expensive APIs or cloud subscriptions.
The Stack:
Ollama (for model management)
Open WebUI (for the ChatGPT-like interface)
Docker (for easy deployment)
It’s running perfectly on my local machine. I’ve documented the entire process, including the Docker-compose configurations and how to optimize the model for better performance.
I made a full 15-minute Masterclass tutorial for anyone who wants to replicate this setup from scratch:
🔗 https://youtu.be/lRziiN7sJUA?si=nWv6NJMzm5X8gzbF
Feel free to ask if you have any questions about the hardware requirements or Docker setup!
\#Llama3 #SelfHosted #PrivateAI #Docker #OpenWebUI | 2026-01-16T13:24:14 | https://v.redd.it/a9vbc2z5qpdg1 | Ill_Mouse_8942 | /r/LocalLLaMA/comments/1qefugp/finally_got_llama_31_running_locally_with_open/ | 1970-01-01T00:00:00 | 0 | {} | 1qefugp | false | null | t3_1qefugp | /r/LocalLLaMA/comments/1qefugp/finally_got_llama_31_running_locally_with_open/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'ZWVsMGdzejVxcGRnMTRuzwUu22t31sjyvEkrDPWuE8_EfNZpL5OoHKBoF0vD', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/ZWVsMGdzejVxcGRnMTRuzwUu22t31sjyvEkrDPWuE8_EfNZpL5OoHKBoF0vD.png?width=108&crop=smart&format=pjpg&auto=webp&s=9a44d541886b7b9a645aadc0c905b1e0c48f270b', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/ZWVsMGdzejVxcGRnMTRuzwUu22t31sjyvEkrDPWuE8_EfNZpL5OoHKBoF0vD.png?width=216&crop=smart&format=pjpg&auto=webp&s=e98c56aed6d77ba76c95d832c866c73955d951cc', 'width': 216}, {'height': 569, 'url': 'https://external-preview.redd.it/ZWVsMGdzejVxcGRnMTRuzwUu22t31sjyvEkrDPWuE8_EfNZpL5OoHKBoF0vD.png?width=320&crop=smart&format=pjpg&auto=webp&s=dd734534a941efabf3159da89f4354dbb83e99b4', 'width': 320}, {'height': 1138, 'url': 'https://external-preview.redd.it/ZWVsMGdzejVxcGRnMTRuzwUu22t31sjyvEkrDPWuE8_EfNZpL5OoHKBoF0vD.png?width=640&crop=smart&format=pjpg&auto=webp&s=3182860d44617ece98783104672ae9465202abb9', 'width': 640}, {'height': 1707, 'url': 'https://external-preview.redd.it/ZWVsMGdzejVxcGRnMTRuzwUu22t31sjyvEkrDPWuE8_EfNZpL5OoHKBoF0vD.png?width=960&crop=smart&format=pjpg&auto=webp&s=9304a202f3f492bb46f80c3b42a85355b7ce8b48', 'width': 960}, {'height': 1920, 'url': 'https://external-preview.redd.it/ZWVsMGdzejVxcGRnMTRuzwUu22t31sjyvEkrDPWuE8_EfNZpL5OoHKBoF0vD.png?width=1080&crop=smart&format=pjpg&auto=webp&s=5b1b3718c43a645a45decab166a2aa4a3f026662', 'width': 1080}], 'source': {'height': 2006, 'url': 'https://external-preview.redd.it/ZWVsMGdzejVxcGRnMTRuzwUu22t31sjyvEkrDPWuE8_EfNZpL5OoHKBoF0vD.png?format=pjpg&auto=webp&s=4d3c24fa9af6010d2c8e45fd4670a7f2523da311', 'width': 1128}, 'variants': {}}]} | |
The Technical Anatomy of Video Generation: From Kling to MIT | 1 | [removed] | 2026-01-16T13:11:23 | https://www.reddit.com/r/LocalLLaMA/comments/1qefjyj/the_technical_anatomy_of_video_generation_from/ | ihtoremis | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qefjyj | false | null | t3_1qefjyj | /r/LocalLLaMA/comments/1qefjyj/the_technical_anatomy_of_video_generation_from/ | false | false | self | 1 | null |
I tried Prompt Repetition on Gemma 3. | 8 | I was reading this [paper](https://arxiv.org/abs/2512.14982) and decided to give it a try with a simple Rs in Strawberry test. I didn't expect it to work tbh. | 2026-01-16T13:01:04 | SrijSriv211 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qefbst | false | null | t3_1qefbst | /r/LocalLLaMA/comments/1qefbst/i_tried_prompt_repetition_on_gemma_3/ | false | false | default | 8 | {'enabled': True, 'images': [{'id': 'u5l099t3jpdg1', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/u5l099t3jpdg1.png?width=108&crop=smart&auto=webp&s=e0dade34b7df0da51dfa9f1813e8eb35fc06adbe', 'width': 108}, {'height': 112, 'url': 'https://preview.redd.it/u5l099t3jpdg1.png?width=216&crop=smart&auto=webp&s=da961f094da317c3c87092a4d54d51e09b2e088c', 'width': 216}, {'height': 166, 'url': 'https://preview.redd.it/u5l099t3jpdg1.png?width=320&crop=smart&auto=webp&s=9691e8ff5dc99eb0151998a9cdecd24b808e05a6', 'width': 320}, {'height': 332, 'url': 'https://preview.redd.it/u5l099t3jpdg1.png?width=640&crop=smart&auto=webp&s=ae5ec703ca85bbff3b3d515b0ce06cded2be7b23', 'width': 640}, {'height': 498, 'url': 'https://preview.redd.it/u5l099t3jpdg1.png?width=960&crop=smart&auto=webp&s=3b4acd8273eeeeb81946368a7420ebb34f21ab64', 'width': 960}, {'height': 561, 'url': 'https://preview.redd.it/u5l099t3jpdg1.png?width=1080&crop=smart&auto=webp&s=8c8420da1245edbb867e7146e72afc3247303585', 'width': 1080}], 'source': {'height': 714, 'url': 'https://preview.redd.it/u5l099t3jpdg1.png?auto=webp&s=97e27bc7ef724da716091ab468d8a0a8b9577e55', 'width': 1374}, 'variants': {}}]} | |
GPT-5.2 xhigh, GLM-4.7, Kimi K2 Thinking, DeepSeek v3.2 on Fresh SWE-rebench (December 2025) | 362 | Hi all, I’m Anton from Nebius.
We've updated the **SWE-rebench leaderboard** with our **December runs** on **48 fresh GitHub PR tasks** (PRs created in the previous month only). The setup is standard SWE-bench: models read real PR issues, edit code, run tests, and must make the full suite pass.
A few observations from this release:
* **Claude Opus 4.5** leads this snapshot at **63.3% resolved rate**.
* **GPT-5.2 (extra high effort)** follows closely at **61.5%**.
* **Gemini 3 Flash Preview** slightly outperforms **Gemini 3 Pro Preview** (60.0% vs 58.9%), despite being smaller and cheaper.
* **GLM-4.7** is currently the strongest open-source model on the leaderboard, ranking alongside closed models like GPT-5.1-codex.
* **GPT-OSS-120B** shows a large jump in performance when run in high-effort reasoning mode, highlighting the impact of inference-time scaling.
Looking forward to your thoughts and feedback. | 2026-01-16T12:59:07 | https://swe-rebench.com/?insight=dec_2025 | CuriousPlatypus1881 | swe-rebench.com | 1970-01-01T00:00:00 | 0 | {} | 1qefa7q | false | null | t3_1qefa7q | /r/LocalLLaMA/comments/1qefa7q/gpt52_xhigh_glm47_kimi_k2_thinking_deepseek_v32/ | false | false | default | 362 | {'enabled': False, 'images': [{'id': 't4cNt5D638DSOJgsxl8f-7IwJhLpxHIh7HxK5GHcBJE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/t4cNt5D638DSOJgsxl8f-7IwJhLpxHIh7HxK5GHcBJE.png?width=108&crop=smart&auto=webp&s=071c7f404c4349eaae825142a9b8f9d5b51b30de', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/t4cNt5D638DSOJgsxl8f-7IwJhLpxHIh7HxK5GHcBJE.png?width=216&crop=smart&auto=webp&s=e304d7d0c12d3b423882e071e92d3fdbef6924bc', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/t4cNt5D638DSOJgsxl8f-7IwJhLpxHIh7HxK5GHcBJE.png?width=320&crop=smart&auto=webp&s=7b21249ad4b299bc5e3c40a82be38508932052dd', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/t4cNt5D638DSOJgsxl8f-7IwJhLpxHIh7HxK5GHcBJE.png?width=640&crop=smart&auto=webp&s=9b72b5025e78c2cc97de15c8fea348f262235ecb', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/t4cNt5D638DSOJgsxl8f-7IwJhLpxHIh7HxK5GHcBJE.png?width=960&crop=smart&auto=webp&s=026a41ff3006ccced16b09a70f17c8ab24653dfb', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/t4cNt5D638DSOJgsxl8f-7IwJhLpxHIh7HxK5GHcBJE.png?width=1080&crop=smart&auto=webp&s=26ea1a2575ed9e25b2891eab84a31fdfb98f6355', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/t4cNt5D638DSOJgsxl8f-7IwJhLpxHIh7HxK5GHcBJE.png?auto=webp&s=6ba46ec676088f6bb9b1cc36d05262cf3db18f69', 'width': 1200}, 'variants': {}}]} |
Falcon-H1-Tiny-R-0.6B release | 0 | TIIUAE released small reasoning models with good math capabilities. The model has been pre-trained directly on reasoning data, followed by a GRPO stage.
Results seem very good on Math Benchmarks (AIME24, AIME25) and coding benchmarks (LiveCodeBench v6).
https://preview.redd.it/71bql4wc4pdg1.png?width=537&format=png&auto=webp&s=ccd3d591545fa0c663ebefcad177dc5ce25db672
They released both post- and pre-GRPO checkpoints, as well as a smaller 0.09B reasoning model.
Vibe checks on math questions and some riddles show pretty good capabilities.
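To vibe-check it yourself, a standard transformers sketch should do (assuming your transformers version already supports the Falcon-H1 hybrid architecture; otherwise add `trust_remote_code=True`):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/Falcon-H1-Tiny-R-0.6B"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

msgs = [{"role": "user", "content": "If 3x + 7 = 22, what is x?"}]
inputs = tok.apply_chat_template(
    msgs, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

out = model.generate(inputs, max_new_tokens=512)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))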
Link to the models:
* 0.6B reasoning post GRPO: [https://huggingface.co/tiiuae/Falcon-H1-Tiny-R-0.6B](https://huggingface.co/tiiuae/Falcon-H1-Tiny-R-0.6B)
* 0.6B reasoning pre GRPO: [https://huggingface.co/tiiuae/Falcon-H1-Tiny-R-0.6B-pre-GRPO](https://huggingface.co/tiiuae/Falcon-H1-Tiny-R-0.6B-pre-GRPO)
* 0.09B reasoning: [https://huggingface.co/tiiuae/Falcon-H1-Tiny-R-90M](https://huggingface.co/tiiuae/Falcon-H1-Tiny-R-90M)
Link to the blogpost: [https://huggingface.co/spaces/tiiuae/tiny-h1-blogpost](https://huggingface.co/spaces/tiiuae/tiny-h1-blogpost)
They also released a series of specialized 90M models: [https://huggingface.co/collections/tiiuae/falcon-h1-tiny](https://huggingface.co/collections/tiiuae/falcon-h1-tiny) | 2026-01-16T12:52:58 | https://www.reddit.com/r/LocalLLaMA/comments/1qef5n9/falconh1tinyr06b_release/ | ilyas555 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qef5n9 | false | null | t3_1qef5n9 | /r/LocalLLaMA/comments/1qef5n9/falconh1tinyr06b_release/ | false | false | 0 | null | |
implement new jinja template engine by ngxson · Pull Request #18462 · ggml-org/llama.cpp | 0 | 2026-01-16T12:26:39 | https://github.com/ggml-org/llama.cpp/pull/18462 | jacek2023 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1qeemn2 | false | null | t3_1qeemn2 | /r/LocalLLaMA/comments/1qeemn2/implement_new_jinja_template_engine_by_ngxson/ | false | false | default | 0 | null | |
Intel Releases Updated LLM-Scaler-vLLM With Continuing To Expand Its LLM Support | 0 | 2026-01-16T12:18:07 | https://github.com/intel/llm-scaler/releases | reps_up | github.com | 1970-01-01T00:00:00 | 0 | {} | 1qeegkv | false | null | t3_1qeegkv | /r/LocalLLaMA/comments/1qeegkv/intel_releases_updated_llmscalervllm_with/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'PiS8B37h2FElru4mssf2V_--houjKF40KS9lN7tvqh4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/PiS8B37h2FElru4mssf2V_--houjKF40KS9lN7tvqh4.png?width=108&crop=smart&auto=webp&s=52573e9eb53cec0ee20ae4433bcf87fb7e59ac1b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/PiS8B37h2FElru4mssf2V_--houjKF40KS9lN7tvqh4.png?width=216&crop=smart&auto=webp&s=89143bb0e35a7d30e15d3f6a8c0f8c0854d5cae8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/PiS8B37h2FElru4mssf2V_--houjKF40KS9lN7tvqh4.png?width=320&crop=smart&auto=webp&s=5dc9afdf866d793c7772ffb10c5a4a7f2b3f42a9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/PiS8B37h2FElru4mssf2V_--houjKF40KS9lN7tvqh4.png?width=640&crop=smart&auto=webp&s=9b0f96124a3a9e6feec5ac58b998f055136598bd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/PiS8B37h2FElru4mssf2V_--houjKF40KS9lN7tvqh4.png?width=960&crop=smart&auto=webp&s=3740c565bd8ab4fa5a456657cc70fc932debf9ad', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/PiS8B37h2FElru4mssf2V_--houjKF40KS9lN7tvqh4.png?width=1080&crop=smart&auto=webp&s=87a1487afa77bb305dbb1e4de6906cf9dc468ed4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/PiS8B37h2FElru4mssf2V_--houjKF40KS9lN7tvqh4.png?auto=webp&s=bbcf8fdc236e84d510e420aea7b6d8beb2eceb76', 'width': 1200}, 'variants': {}}]} | |
3090 water cooling | 2 | Hey, I quite new to the GPU water cooling and want to replace stock air cooling for 2 nvidia 3090 with water cooling. As a first step would be nice to experiment with 1 3090 and later add second 3090 and nvlink them.
Which part to choose considering front and back panels to cool memory as well if it makes sense?
Which fans I would need for 1 and for 2 3090 in the future? | 2026-01-16T12:16:22 | https://www.reddit.com/r/LocalLLaMA/comments/1qeefca/3090_water_cooling/ | Traditional-Rule4071 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qeefca | false | null | t3_1qeefca | /r/LocalLLaMA/comments/1qeefca/3090_water_cooling/ | false | false | self | 2 | null |
RO Philosophy is a theoretical and mathematical framework that treats reality as a computational process | 0 | 2026-01-16T12:09:30 | https://www.reddit.com/gallery/1qeeae2 | erikqamalyan10 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1qeeae2 | false | null | t3_1qeeae2 | /r/LocalLLaMA/comments/1qeeae2/ro_philosophy_is_a_theoretical_and_mathematical/ | false | false | 0 | null | ||
I fucking love this community | 467 | Thank you guys, thanks to everyone who took the time to write a comment or a post explaining, teaching people how things work, the people behind llama.cpp, vllm, and all the contributors who keep the open-source community thriving.
I'm able to run huge models on my weak-ass PC from 10 years ago relatively fast, the fastest being nemotron-3-nano-30B-a3b-iq4_nl running at 13.5-14 t/s with 65k context. And that with my actual GPU having only 4GB of VRAM; that's fucking ridiculous, and it blows my mind every time that I'm able to run these models.
What's been key for me is having a good amount of system memory; as long as the model is a MoE architecture, it runs pretty decently.
Is a Ryzen 7 7840HS mini PC actually worth it for local LLMs? What real performance should I expect? | 1 | Hi everyone,
I’m considering buying a mini PC based on the Ryzen 7 7840HS (8C/16T) with Radeon 780M, 32GB DDR5 and NVMe SSD (similar to Minisforum / GMKtec / Thomson AI Mini PC W3).
My main goal is running local LLMs with Ollama / llama.cpp, CPU-only (no dedicated GPU).
I've seen a video where this CPU runs DeepSeek-R1 32B at around 2.9 tokens/sec, which honestly surprised me. That made me wonder what I can realistically expect in daily use.
My questions:
• Is this kind of performance for 32B models realistic and reproducible?
• What tokens/s should I expect for:
  • 7B / 8B models (Qwen, Mistral, Gemma)?
  • 14B models?
• Is this setup actually comfortable for chat, or still frustrating compared to GPU setups?
• Any real-world experiences with this CPU for local LLM inference?
I’m not expecting RTX-level performance, but I want to know if this is a solid CPU-only LLM machine or just “technically works but slow”.
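For a rough sanity check, here is the back-of-envelope math I've been using (a sketch; 89.6 GB/s is the theoretical peak for dual-channel DDR5-5600, and real throughput lands well below the ceiling):

```python
# CPU-only decode speed is memory-bound:
# tokens/s <= memory_bandwidth / bytes_read_per_token (~ the quantized model size).
bandwidth_gb_s = 89.6        # assumption: dual-channel DDR5-5600 peak
bytes_per_param = 0.6        # ~4.8 bits/param, a Q4_K_M-style quant
for label, params_b in [("7-8B", 8), ("14B", 14), ("32B", 32)]:
    model_gb = params_b * bytes_per_param
    print(f"{label}: <= {bandwidth_gb_s / model_gb:.1f} tok/s ceiling")
```

At the usual 50-60% of peak in practice, that puts 7-8B models around 8-11 tok/s, 14B around 5-6, and 32B right where that ~2.9 tok/s video landed.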
Thanks a lot for your feedback 🙏 | 2026-01-16T11:48:49 | https://www.reddit.com/r/LocalLLaMA/comments/1qedwkx/is_a_ryzen_7_7840hs_mini_pc_actually_worth_it_for/ | Local_Ad_2243 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qedwkx | false | null | t3_1qedwkx | /r/LocalLLaMA/comments/1qedwkx/is_a_ryzen_7_7840hs_mini_pc_actually_worth_it_for/ | false | false | self | 1 | null |
Help with open source tiny models | 1 | Hi community!
I am preparing an open source infographic to help the community choose an open source model for their needs. I need a list of models that are really useful today and offer something different, up to a limit of around 32 billion parameters. At the moment, my list is as follows:
* Google -> Gemma
* Alibaba -> Qwen
* Mistral -> Mistral
* OpenAI -> GPT-OSS
Do you think I should add any model series that should be taken into account? Do you think it makes sense to add Llama? | 2026-01-16T11:44:55 | https://www.reddit.com/r/LocalLLaMA/comments/1qedu3t/help_with_open_source_tiny_models/ | Deep-Sympathy-7457 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qedu3t | false | null | t3_1qedu3t | /r/LocalLLaMA/comments/1qedu3t/help_with_open_source_tiny_models/ | false | false | self | 1 | null |
Talk me out of being scammed | 0 | I've seen and read the two posts below, and I'm wary of these potential eBay scams for RTX PRO 6000s:
[Post Link 1](https://www.reddit.com/r/LocalLLaMA/comments/1pncy5y/suspected_scam_many_nvidia_rtx_pro_6000_for_2900/)
[Post Link 2](https://www.reddit.com/r/LocalLLaMA/comments/1nqrsy7/this_5999_rtx_pro_6000_ebay_listing_is_a_scam/)
Anyway, I've seen an eBay classified listing for an RTX PRO 6000 for ~$3k where the seller is local. He's willing to meet up.
Am I getting scammed?
Are there any tests or screenshots I should get from him before purchasing? (like nvidia-smi) | 2026-01-16T11:43:32 | https://www.reddit.com/r/LocalLLaMA/comments/1qedt6c/talk_me_out_of_being_scammed/ | Aurum--79 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qedt6c | false | null | t3_1qedt6c | /r/LocalLLaMA/comments/1qedt6c/talk_me_out_of_being_scammed/ | false | false | self | 0 | null |
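For anyone facing the same decision, a minimal on-site check with standard NVIDIA tooling (the expected values are what a genuine RTX PRO 6000 Blackwell should report, e.g. roughly 96GB of VRAM):

```bash
# Identity and capacity: card name, total VRAM, driver version
nvidia-smi --query-gpu=name,memory.total,driver_version --format=csv

# Deeper report (memory configuration and clocks)
nvidia-smi -q -d MEMORY,CLOCK
```

Actually loading a model that needs well over 32GB of VRAM is an even harder-to-fake test, since no consumer card could satisfy it.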
Finally got Llama running locally with Open WebUI. The response time is incredible! (Full setup guide in comments) | 0 | Finally got Llama running locally with Open WebUI. The response time is incredible! (Full setup guide below)
Hi everyone,
I wanted to share my latest setup for running Llama completely offline. My goal was to build a private AI workstation that doesn't rely on expensive APIs or cloud subscriptions.
The Stack:
* Ollama (for model management)
* Open WebUI (for the ChatGPT-like interface)
* Docker (for easy deployment)
It’s running perfectly on my local machine. I’ve documented the entire process, including the Docker-compose configurations and how to optimize the model for better performance.
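For reference, a minimal compose file in the same spirit (image tags, ports, and volume names below are the commonly used defaults, not necessarily exactly what's in the video):

```yaml
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama:/root/.ollama            # persist downloaded models
    ports:
      - "11434:11434"
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434   # reach Ollama over the compose network
    ports:
      - "3000:8080"                     # UI at http://localhost:3000
    depends_on:
      - ollama
volumes:
  ollama:
```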
I made a full 7-minute Masterclass tutorial for anyone who wants to replicate this setup from scratch:
🔗 https://youtu.be/lRziiN7sJUA?si=nDvuo5JsfxjOMokG
Feel free to ask if you have any questions about the hardware requirements or Docker setup!
#Llama3 #SelfHosted #PrivateAI #Docker #OpenWebUI
Good primer for setting up local coding LLM on MacOS | 1 | I'm looking to move from OpenAI API pricing to something locally hosted for agentic coding and debugging. I've got a pretty beefy, if a bit dated, MacBook Pro: an M1 Max with 32GB of RAM.
I see people throwing around programs like llama.cpp, vLLM, and LM studio (as the big ones, there are plenty of others I assume) and it's all a bit much to try and pick up on the fly
Is there a good primer out there for getting up to speed on best practices for running a local LLM on an M chip/MacOS?
If not, what would you advise? Basically anywhere up or down the stack: programs to run the models, the models themselves, configuration, etc.
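To give would-be answerers something concrete to correct: the simplest stack I've seen described is Homebrew's llama.cpp build, roughly like this (the Hugging Face repo is just an example model; `-ngl 99` pushes all layers onto the M1 Max GPU via Metal):

```bash
brew install llama.cpp

# Pull a GGUF straight from Hugging Face and expose an
# OpenAI-compatible endpoint on localhost:8080.
llama-server -hf ggml-org/gemma-3-4b-it-GGUF -ngl 99 -c 8192
```

Any coding agent that speaks the OpenAI API should then be able to point at it.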
Thanks | 2026-01-16T11:20:56 | https://www.reddit.com/r/LocalLLaMA/comments/1qedete/good_primer_for_setting_up_local_coding_llm_on/ | gburgwardt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qedete | false | null | t3_1qedete | /r/LocalLLaMA/comments/1qedete/good_primer_for_setting_up_local_coding_llm_on/ | false | false | self | 1 | null |
RTX PRO 4000 SFF Blackwell for self-hosted services | 1 | Hey everyone,
I'm running a home server that acts as a media server, NAS, and general sandbox. The host is running PVE with multiple containers (LXCs, docker) and VMs. Current specs are:
* CPU: Intel Core Ultra 5 245K
* Motherboard: ASUS PRIME Z890M
* PSU: Corsair SF750
* RAM: 96GB (2×48GB) Corsair Vengeance @ 6000MHz
* Case: Jonsbo N4
* Storage: multiple HDDs and NVMe drives
I want to add AI capabilities to this setup so I can run local models for self-hosted tools and reduce my reliance on public LLMs like ChatGPT or Perplexity. I know I will not fully replace them, but the goal is an all-in-one box that can cover most of my needs at a reasonable level.
Planned use-cases:
* RAG: querying documents and a local knowledge base
* Coding: code assistance, refactoring, explanations
* Image generation: Stable Diffusion and similar models
* Day-to-day questions: general LLM usage instead of cloud services
The main constraint is the case: it supports GPUs up to 230mm in length and 70mm in height, which effectively limits me to low-profile or half-height cards. Because of that, my options are fairly limited.
At the moment, my main candidate is the RTX PRO 4000 SFF Blackwell (24GB).
1. Does this GPU make sense for these workloads?
2. What kind of model sizes and performance could I realistically expect for LLMs, RAG, and image generation?
3. Are there any other GPUs I should be considering within these physical constraints?
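On question 2 above, a rough sizing rule of thumb for a 24GB card (a sketch; ~0.6 GB per billion params approximates a Q4-class GGUF, and the 5GB headroom for KV cache and co-resident image models is my assumption):

```python
# Which dense model sizes fit on a 24 GB card at ~4-bit quantization?
budget_gb, headroom_gb, gb_per_bparam = 24, 5, 0.6
for params_b in (7, 14, 24, 32):
    need_gb = params_b * gb_per_bparam + headroom_gb
    verdict = "fits" if need_gb <= budget_gb else "tight / needs offload"
    print(f"{params_b}B: ~{need_gb:.1f} GB -> {verdict}")
```

In short: dense models up to roughly 30B at Q4 are comfortable, and larger MoE models become feasible if you offload experts to the 96GB of system RAM.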
Would love to hear your thoughts or real-world experience with similar setups. Thanks in advance! | 2026-01-16T11:03:02 | https://www.reddit.com/r/LocalLLaMA/comments/1qed3kg/rtx_pro_4000_sff_blackwell_for_selfhosted_services/ | gAmmi_ua | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qed3kg | false | null | t3_1qed3kg | /r/LocalLLaMA/comments/1qed3kg/rtx_pro_4000_sff_blackwell_for_selfhosted_services/ | false | false | self | 1 | null |
Is there any LLM GUI Client that can upload video and describe it? | 3 | I have LM Studio, Inferencer, and Cherry Studio, plus the Page Assist Chrome extension, but they can only upload pictures.
I want to let llm describe videos for me.
I know some ComfyUI plugins can do it, but I want a normal chat experience.
Thanks! | 2026-01-16T10:47:33 | https://www.reddit.com/r/LocalLLaMA/comments/1qecu2c/is_there_any_llm_gui_client_that_can_upload_video/ | Most_Drawing5020 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qecu2c | false | null | t3_1qecu2c | /r/LocalLLaMA/comments/1qecu2c/is_there_any_llm_gui_client_that_can_upload_video/ | false | false | self | 3 | null |
Is there a TTS ROCm or Vulkan yet ? - 2026 | 5 | Is there a TTS engine with ROCm or Vulkan support yet? I was hoping I could move away from CPU-only Kokoro. | 2026-01-16T10:34:05 | https://www.reddit.com/r/LocalLLaMA/comments/1qeclut/is_there_a_tts_rocm_or_vulkan_yet_2026/ | uber-linny | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qeclut | false | null | t3_1qeclut | /r/LocalLLaMA/comments/1qeclut/is_there_a_tts_rocm_or_vulkan_yet_2026/ | false | false | self | 5 | null |
My first MCP server and pip package | 2 | Excited to share a project I built almost half a year ago, back when the Model Context Protocol (MCP) was trending.
I built an MCP server that interacts with the Apple Reminders app, allowing you to create, view, and delete reminders seamlessly. The original goal was to learn how package publishing works on PyPI, so I restructured the project and turned it into an open-source initiative where developers can collaborate, improve, and extend it together.
I've published it on PyPI, which means you can now install it on your local macOS machine with a single `pip` command, connect it with Claude Desktop, and play around with it right away.
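For reference, hooking it into Claude Desktop follows the standard MCP client config (`claude_desktop_config.json`); the exact launch command depends on how you installed the package, so treat the one below as a placeholder and check the README:

```json
{
  "mcpServers": {
    "apple-reminders": {
      "command": "uvx",
      "args": ["apple-reminders-mcp"]
    }
  }
}
```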
Check it out:
* PyPI: https://pypi.org/project/apple-reminders-mcp/
* GitHub: https://github.com/shreyanshjain05/apple_reminder_mcp_server
If you find it useful or interesting, I’d really appreciate it if you could ⭐ the repo and share your feedback or ideas for improvement! | 2026-01-16T10:30:16 | https://www.reddit.com/r/LocalLLaMA/comments/1qecjky/my_first_mcp_server_and_pip_package/ | shreyanshjain05 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qecjky | false | null | t3_1qecjky | /r/LocalLLaMA/comments/1qecjky/my_first_mcp_server_and_pip_package/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'IUHM4ctLZQorzkPuYJ4IkGSag8BtaIqZoyqL1L53KuM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/IUHM4ctLZQorzkPuYJ4IkGSag8BtaIqZoyqL1L53KuM.jpeg?width=108&crop=smart&auto=webp&s=3c06c05fbfc6417cf2ed8eb973d76d70376c5051', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/IUHM4ctLZQorzkPuYJ4IkGSag8BtaIqZoyqL1L53KuM.jpeg?width=216&crop=smart&auto=webp&s=809e797f47d77403026b22bdd15bbb367ab31b04', 'width': 216}], 'source': {'height': 300, 'url': 'https://external-preview.redd.it/IUHM4ctLZQorzkPuYJ4IkGSag8BtaIqZoyqL1L53KuM.jpeg?auto=webp&s=09ab8151372bfb936ee2ca6e1bb13cbb22c8ca09', 'width': 300}, 'variants': {}}]} |
Local Coding Agents vs. Claude Code | 10 | I’m deep into Claude Code for real dev work (multi-file refactors, reasoning across a repo, agent loops). It’s the first tool that feels reliably “senior enough” most days.
But I’m uneasy depending on a closed hosted model long-term. Prices can jump, quality can drift, access can change. So I’m looking at buying a compact local box — GMK EVO-X2 w/ 128GB RAM — as a hedge.
Here’s what I want to know from people who’ve actually tried this:
- Is the best OSS stack today (Qwen2.5-Coder / DeepSeek / Codestral + Aider/Continue/OpenHands) genuinely close to Claude Code for real repo work? Or is it still "good demos, more friction, more babysitting"?
- If I don't have big discrete GPU VRAM (mostly iGPU + lots of RAM), what's the realistic ceiling for coding agents? Which model sizes + quants are actually usable without crawling?
- Bonus curiosity: local video gen vs Veo 3 / Kling — is it "don't bother," or are there setups that are surprisingly usable?
I’m not trying to “win” a local-only purity contest — I just want the truth before dropping money on hardware.
TLDR: Considering GMK EVO-X2 (128GB RAM) for local coding agents (and optionally video generation). How close are they to Claude Code (for coding) and Kling/Veo (video) | 2026-01-16T09:42:26 | https://www.reddit.com/r/LocalLLaMA/comments/1qebqv7/local_coding_agents_vs_claude_code/ | Accomplished-Toe7014 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qebqv7 | false | null | t3_1qebqv7 | /r/LocalLLaMA/comments/1qebqv7/local_coding_agents_vs_claude_code/ | false | false | self | 10 | null |
I built a local multi-modal video search engine as a personal project, and it's using local models with full text and semantic search (100% local and open source) | 4 | Hey r/LocalLLaMA,
I've been working on Edit Mind, a fully local video analysis and search system that uses multi-modal embeddings to make large video archives searchable without sending anything to the cloud. (It started as a simple CLI that did transcription only, with plain text search.)
Architecture:
* **Text embeddings:** Xenova/all-mpnet-base-v2 for transcriptions
* **Visual embeddings:** Xenova/clip-vit-base-patch32 for frame analysis
* **Audio embeddings:** Xenova/clap-htsat-unfused for audio content
* **Vector DB:** ChromaDB for semantic search (local version)
* **Transcription:** Whisper (local inference)
* **Object detection:** YOLOv8(n) for frame-level object identification
* **Face recognition:** DeepFace for person identification
* **OCR:** EasyOCR for text-in-video extraction
* **NLP**: (Ollama, Gemini, or node-llama-cpp)
**Tech Stack:**
* Python backend for ML pipeline (PyTorch, Transformers, Ultralytics)
* Node.js for orchestration and job queue (BullMQ + Redis)
* Docker containers: Web UI, Background processor, ML service
* WebSocket communication between services
* FFmpeg for video frames extraction and metadata
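To make the three-collection search concrete, here is a minimal sketch of the retrieval side with ChromaDB's Python client (collection names and the naive late fusion are illustrative, not the exact Edit Mind code):

```python
import chromadb

client = chromadb.PersistentClient(path="./edit-mind-db")
cols = {m: client.get_or_create_collection(m) for m in ("text", "visual", "audio")}

def search(query_embeddings: dict, n: int = 5):
    """query_embeddings maps modality -> vector from the matching encoder
    (mpnet for transcripts, CLIP for frames, CLAP for audio)."""
    hits = []
    for modality, emb in query_embeddings.items():
        res = cols[modality].query(query_embeddings=[emb], n_results=n)
        for vid, dist in zip(res["ids"][0], res["distances"][0]):
            hits.append((modality, vid, dist))
    return sorted(hits, key=lambda h: h[2])[:n]  # smaller distance = better
```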
If you would like to see Edit Mind in a live demo, you can check out this video from the Twelve Labs webinar: https://www.youtube.com/watch?v=k_aesDa3sFw&t=1271s
Project Link: https://github.com/iliashad/edit-mind
Current status: Proof of concept is working. Now focusing on optimization and code quality. Working solo with some external contributors.
Would love feedback on:
1. Embedding model choices (better alternatives for video?)
2. Vector search optimization strategies (I have 3 collections now: one each for text, visual, and audio)
3. I'm running this project on an M1 Max (64 GB). I would like to build a mini PC with an NVIDIA RTX 3060 (12GB VRAM); do you think that's a good idea, or will Apple chips perform better?
| 2026-01-16T09:24:12 | https://www.reddit.com/r/LocalLLaMA/comments/1qebgiy/i_built_a_local_multimodal_video_search_engine_as/ | IliasHad | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qebgiy | false | null | t3_1qebgiy | /r/LocalLLaMA/comments/1qebgiy/i_built_a_local_multimodal_video_search_engine_as/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'DkDdVF2n6yysg2uYmtm_HUWqeTFEZb7HZ5x0yjUsUJA', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/DkDdVF2n6yysg2uYmtm_HUWqeTFEZb7HZ5x0yjUsUJA.jpeg?width=108&crop=smart&auto=webp&s=3ba87ecd491cde790644c286bfb04ad32423fab9', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/DkDdVF2n6yysg2uYmtm_HUWqeTFEZb7HZ5x0yjUsUJA.jpeg?width=216&crop=smart&auto=webp&s=d3f50687cb7d0c8b8cd5787498a8729079f63c6d', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/DkDdVF2n6yysg2uYmtm_HUWqeTFEZb7HZ5x0yjUsUJA.jpeg?width=320&crop=smart&auto=webp&s=52d105cf126f3cb907cda5a7a6ddc7c3839abd98', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/DkDdVF2n6yysg2uYmtm_HUWqeTFEZb7HZ5x0yjUsUJA.jpeg?auto=webp&s=9a9d7da006373f02767d1fce23ad34170f8f6bd0', 'width': 480}, 'variants': {}}]} |
Any public REAP models leaderboard? | 4 | Dear redditors!
https://preview.redd.it/dx299ndy3odg1.png?width=1024&format=png&auto=webp&s=110ca355b256a66aae76fa6bb29527d04c4ec709
I spent a lot of time searching, but maybe someone knows of an existing leaderboard of REAP models (including quantized ones), or is everyone currently doing their own testing?
| 2026-01-16T07:58:17 | https://www.reddit.com/r/LocalLLaMA/comments/1qea339/any_public_reap_models_leaderboard/ | djdeniro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qea339 | false | null | t3_1qea339 | /r/LocalLLaMA/comments/1qea339/any_public_reap_models_leaderboard/ | false | false | 4 | null | |
Mix of AMD + Nvidia gpu in one system possible? | 3 | My situation: I have an RX 6700 XT in my PC, and I can run it just fine on its own. I'm looking to add another card and am thinking of an RTX 5060 Ti 16GB, since NVIDIA is just better at handling image gen from what I've read in places.
I don't mind running just the one RTX 5060 Ti for image gen, but I want to have a bigger model that needs more than 20GB of VRAM for general purpose or coding. Is it possible to mix those two?
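For what it's worth, llama.cpp's Vulkan backend can drive AMD and NVIDIA cards at the same time, so the LLM side of this mix should work. A sketch, with flag values as illustrations:

```bash
# Build with the vendor-agnostic Vulkan backend
cmake -B build -DGGML_VULKAN=ON && cmake --build build --config Release

# Split layers across every Vulkan device llama.cpp can see
./build/bin/llama-server -m model.gguf -ngl 99 --split-mode layer
```

CUDA-only image-gen stacks would still run on the 5060 Ti alone.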
An RTX 3090 or other workstation cards with massive VRAM are out of reach for me, since I'm in a third-world country. | 2026-01-16T07:56:56 | https://www.reddit.com/r/LocalLLaMA/comments/1qea29t/mix_of_amd_nvidia_gpu_in_one_system_possible/ | chronoz9 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qea29t | false | null | t3_1qea29t | /r/LocalLLaMA/comments/1qea29t/mix_of_amd_nvidia_gpu_in_one_system_possible/ | false | false | self | 3 | null |
New FLUX.2 [Klein] 9B is INSANELY Fast | 98 | BFL has done a good job with this new Klein model, though in my testing the distilled text-to-image flavor is the best:
🔹 Sub-second inference on RTX 4090 hardware
🔹 9B parameters matching models 5x its size
🔹 Step-distilled from 50 → 4 steps, zero quality loss
🔹 Unified text-to-image + multi-reference editing
HF Model: [black-forest-labs/FLUX.2-klein-base-9B · Hugging Face](https://huggingface.co/black-forest-labs/FLUX.2-klein-base-9B)
Detailed testing is here: https://youtu.be/j3-vJuVwoWs?si=XPh7_ZClL8qoKFhl
Built a tool to manage, edit and run prompt variations without worrying about text files | 1 | [removed] | 2026-01-16T07:30:37 | https://github.com/promptg/cli | springwasser | github.com | 1970-01-01T00:00:00 | 0 | {} | 1qe9mlb | false | null | t3_1qe9mlb | /r/LocalLLaMA/comments/1qe9mlb/built_a_tool_to_manage_edit_and_run_prompt/ | false | false | default | 1 | null |
[P] I built an Offline-First MCP Server that creates a "Logic Firewall" for Cursor (No API Key Required) | 1 | Hi,
I built **BlueMouse** (v6.6) because I wanted an industrial-grade coding assistant that doesn't rely on cloud brains for basic logic.
It's an **MCP Server** that acts as a parasitic logic layer for your editor (Cursor/Windsurf/Antigravity).
**Why you might like it:**
* **100% Local / Offline**: It comes with a 180k-record "Data Trap" distilled into a local knowledge base.
* **Privacy First**: You don't need to send your business logic to OpenAI if you don't want to. It runs perfectly with local Ollama models.
* **Socratic Logic**: It forces the LLM to ask clarifying questions *before* generating code. (e.g., "Is this high-concurrency? If so, Optimistic or Pessimistic locking?")
**The Coolest Part**: We implemented a "Nuclear Toaster" acid test. Even completely offline, the system detected the "Safety Critical" domain and switched to a Fail-Safe generation mode, refusing to use generic templates.
It uses a "Parasitic AI" architecture where the rule engine (<100ms) handles the logic guardrails, and the LLM (Local or Cloud) only fills in the implementation details.
**Repo**: https://github.com/peijun1700/bluemouse
**Twitter**: https://x.com/bluemouse_ai
Happy to answer any technical questions about the MCP implementation! | 2026-01-16T07:14:02 | https://www.reddit.com/r/LocalLLaMA/comments/1qe9cjy/p_i_built_an_offlinefirst_mcp_server_that_creates/ | bluemouse_ai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qe9cjy | false | null | t3_1qe9cjy | /r/LocalLLaMA/comments/1qe9cjy/p_i_built_an_offlinefirst_mcp_server_that_creates/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'ZxUZgz3gYKGNbAjGUQdqy4Vi40we_gd__a7ZSDdvFbQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZxUZgz3gYKGNbAjGUQdqy4Vi40we_gd__a7ZSDdvFbQ.png?width=108&crop=smart&auto=webp&s=e487652e635b489de747b87af436a076e6707d5d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ZxUZgz3gYKGNbAjGUQdqy4Vi40we_gd__a7ZSDdvFbQ.png?width=216&crop=smart&auto=webp&s=72bd19d31caa2adf43b652df0a7845ef472720d0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ZxUZgz3gYKGNbAjGUQdqy4Vi40we_gd__a7ZSDdvFbQ.png?width=320&crop=smart&auto=webp&s=1a07f9e02c6af7f0a2bf46311777300c112aa3bf', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ZxUZgz3gYKGNbAjGUQdqy4Vi40we_gd__a7ZSDdvFbQ.png?width=640&crop=smart&auto=webp&s=b71d01ef42d79df8b7e2f9cef9615873117e787c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ZxUZgz3gYKGNbAjGUQdqy4Vi40we_gd__a7ZSDdvFbQ.png?width=960&crop=smart&auto=webp&s=85415b6e3694ded9b76801389a35ca8e2afcbc07', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ZxUZgz3gYKGNbAjGUQdqy4Vi40we_gd__a7ZSDdvFbQ.png?width=1080&crop=smart&auto=webp&s=62b10a2caef0a256efde9650a625f854eadb19a9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ZxUZgz3gYKGNbAjGUQdqy4Vi40we_gd__a7ZSDdvFbQ.png?auto=webp&s=9a41ed0e8259f79e8dca2db20b7c3c476ba7ec28', 'width': 1200}, 'variants': {}}]} |
I used Ollama (Mistral Small 24B) + LightRAG to build a graph pipeline that catches hidden risks where standard Vector RAG fails. | 0 | Hi everyone,
I’ve been experimenting with moving complex RAG pipelines entirely off-cloud using **Ollama** as the inference engine. I wanted to test if a local setup could beat a standard vector database search in a "Compliance Nightmare" scenario.
I simulated a fraud case using fictitious data to see if the system could connect the dots between two completely different documents that had no shared keywords, but were factually linked.
**The Stack:**
* **Inference:** Ollama serving Mistral Small 24B (Q4_K_M).
* **RAG Engine:** LightRAG (Graph-based retrieval).
* **Hardware:** Local server with an RTX 6000 (24GB VRAM).
* **Context:** 10k context window.
**The Experiment:**
1. **The Input:** I ingested a clean contract from a new company ("Alpine Commodities") and a separate, old confidential text file containing market intelligence.
2. **The Hidden Link:** The new contract was signed by a "Marcus Weber." The old text file mentioned that Marcus Weber previously bankrupted a totally different company ("Apex Grain").
3. **The Challenge:** Ask the system: "What are the risks of signing with Alpine Commodities?"
**The Results:**
* **Standard Vector Search:** **FAILED.** It searched for "Alpine Commodities," found no bad news, and gave the green light. It couldn't semantically link "Alpine" to the old company just because they shared a signatory name.
* **Ollama + LightRAG:** **SUCCESS.** The system autonomously built a knowledge graph during ingestion. It extracted "Marcus Weber" as an entity, linked him to the new deal, and traversed the graph to find his link to the old bankrupt company.
The response correctly warned: "Key personnel at Alpine (Marcus Weber) are associated with defaulted entities (Apex)."
**Why this matters for Local LLMs:**
This proves we don't need massive proprietary models (like GPT-4) to do complex multi-hop reasoning. By using Ollama to drive a Graph RAG system, we can get "reasoning" capabilities out of smaller, efficient models like Mistral 24B while keeping all data on private infrastructure.
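To illustrate the mechanism with the post's fictitious entities: once entity extraction has run at ingest (the part LightRAG automates with the LLM), the retrieval win is just a multi-hop graph walk. A toy stand-in using networkx:

```python
import networkx as nx

g = nx.Graph()
# Edges the ingest step extracted from two otherwise unrelated documents:
g.add_edge("Marcus Weber", "Alpine Commodities", relation="signatory_of")
g.add_edge("Marcus Weber", "Apex Grain", relation="linked_to")
g.add_edge("Apex Grain", "bankruptcy", relation="suffered")

# Two hops out from the query entity reach the old bankrupt company...
dists = nx.single_source_shortest_path_length(g, "Alpine Commodities", cutoff=2)
print([n for n, d in dists.items() if d == 2])  # -> ['Apex Grain']
# ...a connection that a keyword or vector search on "Alpine" never surfaces.
```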
I recorded the full workflow, including the graph visualization and how the nodes were mapped autonomously. I'll drop the link in the comments so this post stays focused on the technical breakdown.
Happy to answer questions about the prompt engineering or the config. Has anyone else tried swapping standard RAG for Graph RAG with Llama 3 or Mistral yet? | 2026-01-16T06:33:03 | https://www.reddit.com/r/LocalLLaMA/comments/1qe8mf0/i_used_ollama_mistral_small_24b_lightrag_to_build/ | Suitable-Ad-4809 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qe8mf0 | false | null | t3_1qe8mf0 | /r/LocalLLaMA/comments/1qe8mf0/i_used_ollama_mistral_small_24b_lightrag_to_build/ | false | false | self | 0 | null |
Feature extraction from labeled Corpuses | 2 | I was wondering if anyone had run into the following problem. Given a bunch of large text corpora, where each corpus is labeled with an outcome, what methodologies are out there to determine features of the corpus that have a heavy causal effect on the outcome?
I've read the HypotheSAES research paper, where they use sparse autoencoders on embeddings to solve this problem, but I was wondering if there are any other methodologies people are aware of. The issue with many taxonomy/feature-generation pipelines is that they mainly derive a generic taxonomy from an unlabeled dataset, rather than identifying which features of the text cause which outcomes. I'm not sure if there's any fusion research between causal inference and LLM/NLP work that does this.
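For contrast, the cheapest non-SAE baseline is a sparse linear probe over document embeddings. It surfaces *predictive* dimensions rather than causal features, which is exactly the gap described above, but it makes a useful floor. A sketch with stand-in data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 384))   # stand-in for corpus embeddings
y = (X[:, 7] + 0.1 * rng.standard_normal(500) > 0).astype(int)  # stand-in labels

# The L1 penalty zeroes out most dimensions; survivors are candidate features
# you then have to interpret, e.g. by reading their top-loading documents.
probe = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
top = np.argsort(-np.abs(probe.coef_[0]))[:5]
print("most predictive embedding dims:", top)  # dim 7 should dominate
```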
Any insight would be appreciated! | 2026-01-16T06:10:30 | https://arxiv.org/pdf/2502.04382 | raikirichidori255 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1qe87q2 | false | null | t3_1qe87q2 | /r/LocalLLaMA/comments/1qe87q2/feature_extraction_from_labeled_corpuses/ | false | false | default | 2 | null |
Which Pro plan should I take next: CLAU*E or CHATG*T? | 0 | I usually switch between their Pro plans every month. But for the last two months, I felt ChatGPT was getting dumber and worse, so I switched to Claude for two consecutive months.
Now it's time to choose again. But I feel like Claude has also become a bit dumber lately. Is that still the case, or has ChatGPT improved now? | 2026-01-16T06:03:27 | https://www.reddit.com/r/LocalLLaMA/comments/1qe830z/which_pro_plan_should_i_take_next_claue_or_chatgt/ | Advanced_Cellist5787 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qe830z | false | null | t3_1qe830z | /r/LocalLLaMA/comments/1qe830z/which_pro_plan_should_i_take_next_claue_or_chatgt/ | false | false | self | 0 | null |
Torn between M3U and DGX SPARK. Please check my logic. | 4 | I am currently hesitating between the **DGX SPARK** and the **M3U 256GB** model.
My goal is to set up various LLMs locally and experience massive local models (like GLM4.7). My use case is strictly for personal usage, not for development or research.
Ultimately, my aim is to use the LLM as a tool for long-form writing. I plan to build a novel RAG database of several to tens of GBs, pre-load a context of 128K+ in a single session, and write one novel episode (2,000–3,000 words) daily through 10–20 turns of conversation.
Please don't ask why I'm not using commercial services. Instead, ask yourself! (Just kidding.)
Here is what I’ve gathered over the past few days:
1. **Memory bandwidth** is a crucial factor for token generation speed. In this regard, the DGX SPARK is at a significant disadvantage compared to the M3U, and its output speed (tokens/sec) is considerably slower.
2. However, the DGX SPARK has a faster **prefill speed** (reading speed) compared to the M3U. Specifically, when processing long contexts, the M3U suffers from severe speed degradation due to software algorithm limitations, whereas the DGX SPARK shows much less degradation.
3. In summary, while the M3U is generally faster, when inputting long contexts (64K+), the DGX SPARK often wins in terms of **TTFT (Time To First Token)**. However, when continuing a conversation within a single session—unless I am repeatedly inputting long contexts—the M3U's superior generation speed becomes more important for subsequent turns.
4. Apart from this, since the DGX SPARK has superior GPU compute performance and better software support, I concluded that the DGX SPARK is better for image and video processing.
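To put rough numbers on points 1-3 (the bandwidth figures are the published specs; the active-weight size is my assumption for a large MoE at ~4.5 bits per weight):

```python
# tokens/s ceiling ~= memory_bandwidth / bytes_read_per_token;
# for an MoE model, bytes per token tracks the *active* parameters, not the total.
bandwidth_gb_s = {"M3 Ultra": 819, "DGX Spark": 273}  # vendor specs
active_weights_gb = 18   # assumption: ~32B active params at ~4.5 bpw
for name, gbps in bandwidth_gb_s.items():
    print(f"{name}: <= {gbps / active_weights_gb:.0f} tok/s decode ceiling")
```

That roughly 3x generation-speed gap is what tips the balance toward the M3U for multi-turn writing sessions.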
Applying this to my workflow: although the M3U is slower when first reading the context (novel settings and summarized past episodes), the generation speed matters more after that initial ingestion. Therefore, **I have decided to purchase the M3U.**
Is there any flaw in my research or logic? | 2026-01-16T05:59:49 | https://www.reddit.com/r/LocalLLaMA/comments/1qe80d7/torn_between_m3u_and_dgx_spark_please_check_my/ | Affectionate-Bid-650 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qe80d7 | false | null | t3_1qe80d7 | /r/LocalLLaMA/comments/1qe80d7/torn_between_m3u_and_dgx_spark_please_check_my/ | false | false | self | 4 | null |
Dual 3090s or Dual 5070 Ti's? | 0 | I recently bought a 9070xt to get my feet wet running local LLMs, mostly for software development and quickly realized that I'm going to want a more capable setup if I'm going to be running models with context windows large enough to work with and quickly enough to not wait an eternity for tokens to generate.
I'm just torn on whether to get dual 3090's or dual 5070 Tis. Both are roughly $750 in my area right now. At the moment, I'm leaning towards dual 5070 Ti's, as 1) they're newer cards, 2) they're out of production (allegedly), and 3) will probably have higher future resale value because of 1 and 2. On the other hand, 48 gigs vram > 32, and NV-Link is an option rather than not.
I've also tossed around the idea of getting an MS-S1 max, which is ~$2550 at Microcenter or just straight up a 5090, but those options cost more than I'm willing to stomach at the moment. I also considered another 9070xt ($600), but just going with nvidia seemed like less of a headache for $150/card. | 2026-01-16T05:50:08 | https://www.reddit.com/r/LocalLLaMA/comments/1qe7tuh/dual_3090s_or_dual_5070_tis/ | SaltyHashes | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qe7tuh | false | null | t3_1qe7tuh | /r/LocalLLaMA/comments/1qe7tuh/dual_3090s_or_dual_5070_tis/ | false | false | self | 0 | null |
Luminal is a high-performance general-purpose inference compiler | 6 | 2026-01-16T05:34:11 | https://github.com/luminal-ai/luminal | yogthos | github.com | 1970-01-01T00:00:00 | 0 | {} | 1qe7j2v | false | null | t3_1qe7j2v | /r/LocalLLaMA/comments/1qe7j2v/luminal_is_a_highperformance_generalpurpose/ | false | false | default | 6 | {'enabled': False, 'images': [{'id': 'b-ktLeXWioxuV4hQoMUJJnc-Er8yy-L0PCsHQ7N2a0Y', 'resolutions': [{'height': 25, 'url': 'https://external-preview.redd.it/b-ktLeXWioxuV4hQoMUJJnc-Er8yy-L0PCsHQ7N2a0Y.jpeg?width=108&crop=smart&auto=webp&s=8ad7833046af45f536efa3a28e3e721ad1f4a4d7', 'width': 108}, {'height': 50, 'url': 'https://external-preview.redd.it/b-ktLeXWioxuV4hQoMUJJnc-Er8yy-L0PCsHQ7N2a0Y.jpeg?width=216&crop=smart&auto=webp&s=0a0a9af38f5a76236865f69a0be438760ada70be', 'width': 216}, {'height': 74, 'url': 'https://external-preview.redd.it/b-ktLeXWioxuV4hQoMUJJnc-Er8yy-L0PCsHQ7N2a0Y.jpeg?width=320&crop=smart&auto=webp&s=a679a1ef1afe8ddbd9ed31e058b3a129b61db41c', 'width': 320}, {'height': 149, 'url': 'https://external-preview.redd.it/b-ktLeXWioxuV4hQoMUJJnc-Er8yy-L0PCsHQ7N2a0Y.jpeg?width=640&crop=smart&auto=webp&s=bd17cc6390a58faaef58b46b4cf1bfc8a445b756', 'width': 640}, {'height': 224, 'url': 'https://external-preview.redd.it/b-ktLeXWioxuV4hQoMUJJnc-Er8yy-L0PCsHQ7N2a0Y.jpeg?width=960&crop=smart&auto=webp&s=5e6f2d85d346af7eb99841655e1fbecf96a2abf9', 'width': 960}, {'height': 252, 'url': 'https://external-preview.redd.it/b-ktLeXWioxuV4hQoMUJJnc-Er8yy-L0PCsHQ7N2a0Y.jpeg?width=1080&crop=smart&auto=webp&s=e9791582e6d6e6e34175204693ada6d60e8acc27', 'width': 1080}], 'source': {'height': 264, 'url': 'https://external-preview.redd.it/b-ktLeXWioxuV4hQoMUJJnc-Er8yy-L0PCsHQ7N2a0Y.jpeg?auto=webp&s=8ead3e232fc945c2b2b66e1b706bcd9d2f9988ca', 'width': 1130}, 'variants': {}}]} | |
GLM presenting itself as “Grok-3 (Me)” in model comparisons — misleading or acceptable? | 0 | I came across this response from GLM where it presents itself as “Grok-3 (Me)” while comparing Grok, ChatGPT, Claude Sonnet, and Opus.
The response doesn’t clearly state that this is role-play or fictional, and it makes several subjective and unverifiable claims about competitor models, including hypothetical future versions.
To me, this feels misleading for non-technical users, especially when proprietary models are discussed as if the AI has insider knowledge.
I’m curious what others think: – Is this acceptable role-playing?
– Or should AI systems be more explicit when comparisons are fictional or speculative? | 2026-01-16T05:21:05 | Advanced_Cellist5787 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qe7a4l | false | null | t3_1qe7a4l | /r/LocalLLaMA/comments/1qe7a4l/glm_presenting_itself_as_grok3_me_in_model/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'alWl6z4T_q0JEcPukEgvxnsUq9GvaDbCNJAJ57xuZvU', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/6gi8yifzbndg1.png?width=108&crop=smart&auto=webp&s=f1ce484b9d901acfcb6b9c54151d7c797c01c0d5', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/6gi8yifzbndg1.png?width=216&crop=smart&auto=webp&s=dde8318e682fece24d5bb76489af1cf793450dc9', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/6gi8yifzbndg1.png?width=320&crop=smart&auto=webp&s=d9dd95d635cebf4a01dd008b3061f7789b1489e1', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/6gi8yifzbndg1.png?width=640&crop=smart&auto=webp&s=ce61cc21ec5badc77a993c8b078bad7e087c59a7', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/6gi8yifzbndg1.png?width=960&crop=smart&auto=webp&s=610fce434b1f126b17669300cb23cc352472e84b', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/6gi8yifzbndg1.png?width=1080&crop=smart&auto=webp&s=9daf41e96848ed4f23cf112889a245292ac56442', 'width': 1080}], 'source': {'height': 2282, 'url': 'https://preview.redd.it/6gi8yifzbndg1.png?auto=webp&s=56a1c734ccf1e4640a2452abc21a6a35d319d6c6', 'width': 1080}, 'variants': {}}]} | ||
Will the AI bubble bursting be good or bad for open-weights? What do you think? | 40 | I could see it both ways. On one hand, RAM, GPUs, and SSDs could see their prices return to normal, but on the other hand, it could lead to less AI being developed and released overall, especially from the major tech companies such as Google or Meta. | 2026-01-16T05:21:03 | https://www.reddit.com/r/LocalLLaMA/comments/1qe7a3m/will_the_ai_bubble_bursting_be_good_or_bad_for/ | RandumbRedditor1000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qe7a3m | false | null | t3_1qe7a3m | /r/LocalLLaMA/comments/1qe7a3m/will_the_ai_bubble_bursting_be_good_or_bad_for/ | false | false | self | 40 | null |
Text Transcription - What apps are out there? | 1 | A bit of a shower thought, but:
I was recording a voice memo during one of my classes on my phone this evening, and I ran the audio clip through a hastily vibe-coded tool with whisper-large-v3, using 1-minute chunking with 1s overlap. After processing, the transcript was no better than the raw audio: the phone microphone was noisy, and the transcript was too. Countless word errors, and a string of "1 minus 1 minus 1 minus 1 minus" that had to be at least 100 words long.
Yet when I check my iPhone's Voice Memos app, there's a clean transcript waiting for me. Sure, it still had errors, but it got me wondering.
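(Side note: the "1 minus 1 minus" string is the classic decoder repetition loop. From what I've read, VAD filtering plus disabling previous-text conditioning is the usual fix, e.g. with faster-whisper; the parameter values below are just the ones I'd try first:)

```python
from faster_whisper import WhisperModel

model = WhisperModel("large-v3", compute_type="int8")
segments, _ = model.transcribe(
    "lecture.m4a",
    vad_filter=True,                   # skip silence/noise-only spans
    condition_on_previous_text=False,  # stops runaway repetition loops
)
print(" ".join(s.text for s in segments))
```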
Is there a simple-to-use FOSS transcription application, packaged as a simple .exe or .appimage, that can get similar transcript quality to Voice Memos on iPhone out of crap audio? | 2026-01-16T04:27:34 | https://www.reddit.com/r/LocalLLaMA/comments/1qe68hf/text_transcription_what_apps_are_out_there/ | Qwen30bEnjoyer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qe68hf | false | null | t3_1qe68hf | /r/LocalLLaMA/comments/1qe68hf/text_transcription_what_apps_are_out_there/ | false | false | self | 1 | null |
Minimax m2.1 context window limit | 0 | The technical documentation and API say the model is trained for and handles 1M+ context, but I couldn't find any local quantization supporting that. Unsloth, Bartowski, noctrex, and all the others posted models with a 196k context window limit. So is there an issue here?
Why didn't we get a 1M+ context local model? | 2026-01-16T04:17:18 | https://www.reddit.com/r/LocalLLaMA/comments/1qe6157/minimax_m21_context_window_limit/ | nash_hkg | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qe6157 | false | null | t3_1qe6157 | /r/LocalLLaMA/comments/1qe6157/minimax_m21_context_window_limit/ | false | false | self | 0 | null |
Serverless / API use of Qwen3VL embeddings - Where? | 1 | [removed] | 2026-01-16T04:07:03 | https://www.reddit.com/r/LocalLLaMA/comments/1qe5tk0/serverless_api_use_of_qwen3vl_embeddings_where/ | Qwen30bEnjoyer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qe5tk0 | false | null | t3_1qe5tk0 | /r/LocalLLaMA/comments/1qe5tk0/serverless_api_use_of_qwen3vl_embeddings_where/ | false | false | self | 1 | null |
Packet B — adversarial testing for a stateless AI execution gate | 0 |
Packet B — adversarial testing for a stateless AI execution gate
Looking for experienced engineers to try breaking a stateless AI execution gate (Packet B)
I’m looking for a small number of technically serious people to help test a minimal, stateless execution gate for agent systems.
This is not a prompt framework, chatbot loop, or tool wrapper. It’s a kernel-level control primitive designed to answer a narrow question:
Can we deterministically prevent unsafe or stale actions from being executed by an LLM-driven system — even across crashes, retries, and restarts?
The current version ("Packet B") takes a hard-line stance:
* authority is killed on restart
* no graceful handover
* no hidden state inside the model
* fail-closed by default
It has already survived an initial adversarial audit, but the whole point is to find what we've missed.
**Who this is for**
* You think in failure modes, not demos
* You're comfortable reading small, security-sensitive code
* You've built or audited distributed systems, runtimes, kernels, or security primitives
* You enjoy breaking assumptions more than polishing abstractions
**Who this is not for**
* Prompt engineering experiments
* "Agent frameworks"
* RAG pipelines
* General curiosity / learning exercises
I'm intentionally not posting the code publicly yet. If this is your kind of problem, DM me and we'll take it from there.
I'm especially interested in:
* replay / rollback attempts
* restart edge cases
* concurrency or race assumptions
* anything that smells like "this works… until it doesn't"
If you’re the sort of person who enjoys proving systems wrong, you’ll probably enjoy this. | 2026-01-16T04:03:05 | https://www.reddit.com/r/LocalLLaMA/comments/1qe5qky/packet_b_adversarial_testing_for_a_stateless_ai/ | Agent_invariant | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qe5qky | false | null | t3_1qe5qky | /r/LocalLLaMA/comments/1qe5qky/packet_b_adversarial_testing_for_a_stateless_ai/ | false | false | self | 0 | null |
Black Forest Labs releases FLUX.2 [klein] | 115 | Black Forest Labs released their new FLUX.2 [klein] model
[https://bfl.ai/blog/flux2-klein-towards-interactive-visual-intelligence](https://bfl.ai/blog/flux2-klein-towards-interactive-visual-intelligence)
>FLUX.2 [klein]: Towards Interactive Visual Intelligence
>Today, we release the FLUX.2 [klein] model family, our fastest image models to date. FLUX.2 [klein] unifies generation and editing in a single compact architecture, delivering state-of-the-art quality with end-to-end inference as low as under a second. Built for applications that require real-time image generation without sacrificing quality, and runs on consumer hardware with as little as 13GB VRAM.
>The klein name comes from the German word for "small", reflecting both the compact model size and the minimal latency. But FLUX.2 [klein] is anything but limited. These models deliver exceptional performance in text-to-image generation, image editing and multi-reference generation, typically reserved for much larger models.
# What's New
* Sub-second inference. Generate or edit images in under 0.5s on modern hardware.
* Photorealistic outputs and high diversity, especially in the base variants.
* Unified generation and editing. Text-to-image, image editing, and multi-reference support in a single model while delivering frontier performance.
* Runs on consumer GPUs. The 4B model fits in ~13GB VRAM (RTX 3090/4070 and above).
* Developer-friendly & Accessible: Apache 2.0 on 4B models, open weights for 9B models. Full open weights for customization and fine-tuning.
* API and open weights. Production-ready API or run locally with full weights. | 2026-01-16T04:01:07 | https://www.reddit.com/r/LocalLLaMA/comments/1qe5p2f/black_forest_labs_releases_flux2_klein/ | Old-School8916 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qe5p2f | false | null | t3_1qe5p2f | /r/LocalLLaMA/comments/1qe5p2f/black_forest_labs_releases_flux2_klein/ | false | false | self | 115 | {'enabled': False, 'images': [{'id': 'hUYGS8gVQGBrec5sy6wn4JKDO_ANTT5Jijh3wQiXP70', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/hUYGS8gVQGBrec5sy6wn4JKDO_ANTT5Jijh3wQiXP70.png?width=108&crop=smart&auto=webp&s=a7ffe6cf11cd228ad3b6f3d7ddfbbf402e56ce8c', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/hUYGS8gVQGBrec5sy6wn4JKDO_ANTT5Jijh3wQiXP70.png?width=216&crop=smart&auto=webp&s=44d49cff3821fd2aa78e6c20f91083cc27d6ef97', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/hUYGS8gVQGBrec5sy6wn4JKDO_ANTT5Jijh3wQiXP70.png?width=320&crop=smart&auto=webp&s=4a3325525b4e12f7d150d41ed575d914f96c4637', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/hUYGS8gVQGBrec5sy6wn4JKDO_ANTT5Jijh3wQiXP70.png?width=640&crop=smart&auto=webp&s=849cc77e8219a6e01244db5067783e25f38b532a', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/hUYGS8gVQGBrec5sy6wn4JKDO_ANTT5Jijh3wQiXP70.png?width=960&crop=smart&auto=webp&s=49dde3fbf3ac03d9f9bf0359138c2fe965bb5c87', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/hUYGS8gVQGBrec5sy6wn4JKDO_ANTT5Jijh3wQiXP70.png?width=1080&crop=smart&auto=webp&s=1d6ab1aca08ec308d3c5706949576c0662e06ec7', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/hUYGS8gVQGBrec5sy6wn4JKDO_ANTT5Jijh3wQiXP70.png?auto=webp&s=8975cd8b24828e4d19342a43e9b9df6718573a85', 'width': 1200}, 'variants': {}}]} |
Looking for feedback on what to run and where | 0 | I’m looking for feedback / sanity checks on how I’m running my local AI setup. This is very much a “yes, I know this is a little unhinged” situation; but I do a lot of software development consulting, architecture reviews, and deep codebase analysis, so these machines are primarily developer copilots / agents, not a production inference fleet.
**Hardware**
Here’s what I’m working with:
* Threadripper workstation
* 4× NVIDIA Blackwell GPUs, 96 GB VRAM each
* GH200 (Grace + Hopper) server
* 3× Strix Halo boxes (128 GB RAM each)
* Mac Studio Ultra (M3) 512 GB unified memory
* Mac Studio Ultra (M1) 128 GB unified memory
**What I’m currently running**
**Threadripper (4× Blackwell)**
* GLM-4.7 FP8 (primary large model, vLLM / SGLang depending on experiment)
**Strix Halos (via llama-swap)**
These act as my control plane + lightweight agents:
* nemotron-3-nano
* glm-4.5-air
* devstral-2-small
* glm-4.6v
* seed-oss
* qwen3-coder-30b
* orchestrator-8b
These handle routing, summarization, smaller workloads.
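For anyone curious, the llama-swap side is just a YAML map of model names to launch commands. A trimmed sketch of the shape I use (paths and values illustrative; check the llama-swap README for your version's exact schema):

```yaml
models:
  "qwen3-coder-30b":
    cmd: llama-server --port ${PORT} -m /models/qwen3-coder-30b-q4.gguf -ngl 99
  "glm-4.5-air":
    cmd: llama-server --port ${PORT} -m /models/glm-4.5-air-q4.gguf -ngl 99
    ttl: 300   # unload after 5 minutes idle
```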
**Macs**
* M3 Ultra (512 GB):
  * minimax-m2.1
  * qwen3-coder-480b
* M1 Ultra (128 GB):
  * lighter Apple-side experiments / compatibility checks
**What I’m trying to optimize for**
* Latency first (human-in-the-loop coding)
* High-quality developer agents
* Occasional huge-context runs (full repos, long histories)
* Not trying to serve users or chase benchmarks
**The questions**
1. Where would you run GLM-4.7 FP8 in a setup like this? Threadripper vs GH200 vs something else?
2. Does it make sense to keep Minimax-M2.1 / Qwen3-Coder-480B on Apple silicon, or would you move one of them?
3. Am I over-investing in large models when smaller specialists + better routing would get me the same (or better) results?
4. If this were your setup, what would you change? I’m a single software development shop using OpenCode and Claude Code locally.
I’m fully aware this is a borderline crazy system, but it’s intentional; I’m optimizing for insight and velocity in software consulting, not cloud-scale efficiency.
Genuinely curious how others here would structure this, especially folks running multi-node / heterogeneous local labs.
And yes, I have a big solar setup in my home 😄
| 2026-01-16T03:57:02 | https://www.reddit.com/r/LocalLLaMA/comments/1qe5lwa/looking_for_feedback_on_what_to_run_and_where/ | funding__secured | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qe5lwa | false | null | t3_1qe5lwa | /r/LocalLLaMA/comments/1qe5lwa/looking_for_feedback_on_what_to_run_and_where/ | false | false | self | 0 | null |
Unexpected observation: distribution geometry preserved under CPU-only transformations (no quantization) | 0 | I've been running a set of controlled ablations on GPT-2 / tiny-GPT-2 (CPU-only, no CUDA, no dynamic quantization).
All runs are seed-controlled and evaluated on identical prompt sets.
One variant consistently shows:
- cosine similarity ≈ 0.9999 vs baseline
- KL divergence comparable to quant+prune
- consistently lower latency on CPU in my setup
- no pruning masks, no int8/int4
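For concreteness, the two headline metrics come from comparing next-token distributions on identical prompts. A sketch of the comparison (PyTorch; the flatten and reduction choices are mine):

```python
import torch
import torch.nn.functional as F

def compare(logits_base: torch.Tensor, logits_variant: torch.Tensor):
    """Both tensors: [seq_len, vocab] logits from the same prompt."""
    cos = F.cosine_similarity(logits_base.flatten(),
                              logits_variant.flatten(), dim=0)
    kl = F.kl_div(F.log_softmax(logits_variant, dim=-1),  # input: log-probs
                  F.softmax(logits_base, dim=-1),          # target: probs
                  reduction="batchmean")                   # KL(base || variant)
    return cos.item(), kl.item()
```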
What’s odd is that the model isn’t “compressed” in the usual sense — the representation seems to stay intact while the expression changes.
I’m trying to understand why the geometry survives.
Has anyone seen something similar, or knows a framework that explains this behavior? | 2026-01-16T03:36:33 | https://www.reddit.com/r/LocalLLaMA/comments/1qe56cj/unexpected_observation_distribution_geometry/ | Safe-Yellow2951 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qe56cj | false | null | t3_1qe56cj | /r/LocalLLaMA/comments/1qe56cj/unexpected_observation_distribution_geometry/ | false | false | self | 0 | null |
The math stopped working: Why I moved our RAG stack from OpenAI to on-prem Llama 3 (Quantized) | 3 | We’ve been running a corporate RAG agent for about 8 months. Initially, the OpenAI API bills were negligible ($50/mo). Last month, as adoption scaled to ~400 users, the bill crossed the cost of a VMware renewal.
I ran the numbers on repatriation and found the "Token Tax" is unsustainable for always-on enterprise tools.
The Pivot: We moved the workload to on-prem hardware.
* Model: Llama 3 (70B) - 4-bit Quantization (AWQ).
* Hardware: 2x NVIDIA L40S (48GB VRAM each).
* Inference Engine: vLLM.
* Context Window: 8k (sufficient for our doc retrieval).
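For reference, the serving side is essentially one command. A sketch (the AWQ checkpoint name is illustrative; substitute whichever 4-bit Llama 3 70B build you trust):

```bash
vllm serve casperhansen/llama-3-70b-instruct-awq \
  --quantization awq \
  --tensor-parallel-size 2 \
  --max-model-len 8192
```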
The Reality Check: People think you need H100s for this. You don't. The L40S handles the inference load with decent tokens/sec, and the TCO break-even point against GPT-4 Turbo (at our volume) is about 5 months.
I wrote up a detailed breakdown of the thermal density and the specific TCO spreadsheet on my blog (Rack2Cloud) if anyone is fighting this battle with their CFO right now.
*Is anyone else seeing "API fatigue" with clients right now, or are you just eating the OpEx costs?* | 2026-01-16T03:35:31 | https://www.reddit.com/r/LocalLLaMA/comments/1qe55jk/the_math_stopped_working_why_i_moved_our_rag/ | NTCTech | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qe55jk | false | null | t3_1qe55jk | /r/LocalLLaMA/comments/1qe55jk/the_math_stopped_working_why_i_moved_our_rag/ | false | false | self | 3 | null |
Dang, M2 drives are the new DDR5 apparently. | 204 | 2026-01-16T03:18:52 | Porespellar | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qe4so5 | false | null | t3_1qe4so5 | /r/LocalLLaMA/comments/1qe4so5/dang_m2_drives_are_the_new_ddr5_apparently/ | false | false | default | 204 | {'enabled': True, 'images': [{'id': 'c8pq1jm6qmdg1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/c8pq1jm6qmdg1.jpeg?width=108&crop=smart&auto=webp&s=0f8da98a7e758d51532b23acb67114c57052f68b', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/c8pq1jm6qmdg1.jpeg?width=216&crop=smart&auto=webp&s=b21cf8536fec51177a2946644c84bbe9c72be078', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/c8pq1jm6qmdg1.jpeg?width=320&crop=smart&auto=webp&s=d74cbea2be6210ad29e4c12f88f163d7a462c15d', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/c8pq1jm6qmdg1.jpeg?width=640&crop=smart&auto=webp&s=b9e429f423f72a245e2fe28f8b56d773252a3eec', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/c8pq1jm6qmdg1.jpeg?width=960&crop=smart&auto=webp&s=aafcc2389e7b6c950fbe7e8bc5a80cb67c977ce0', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/c8pq1jm6qmdg1.jpeg?width=1080&crop=smart&auto=webp&s=47c59553c62d26206ff4260c6ef7ec2b3e5687d4', 'width': 1080}], 'source': {'height': 1936, 'url': 'https://preview.redd.it/c8pq1jm6qmdg1.jpeg?auto=webp&s=0e54059927db35e12bf2c5e35a7ac47cf0d56732', 'width': 1936}, 'variants': {}}]} | ||
Opinions on the best coding model for a 3060 (12GB) and 64GB of ram? | 7 | Specs in the title. I have been running GPT-OSS-120B at the published mxfp4. But recently I’ve been hearing good things about e.g. MiniMax-2.1 and GLM-4.7. Much bigger models, but with heavy REAP and quants they could also fit on my machine.
Based on my reading, MiniMax is probably the strongest of the three, but I don’t know if the REAP and quants (probably REAP-40 at q3 is necessary) would degrade it too much? Or maybe there are other models I’m overlooking?
What are other people’s experiences? | 2026-01-16T02:56:14 | https://www.reddit.com/r/LocalLLaMA/comments/1qe4af5/opinions_on_the_best_coding_model_for_a_3060_12gb/ | eapache | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qe4af5 | false | null | t3_1qe4af5 | /r/LocalLLaMA/comments/1qe4af5/opinions_on_the_best_coding_model_for_a_3060_12gb/ | false | false | self | 7 | null |
Step Fun released Step-Audio-R1.1 on HF. The new Frontier of Audio Reasoning! | 9 | HF Link: [https://huggingface.co/collections/stepfun-ai/step-audio-r1](https://huggingface.co/collections/stepfun-ai/step-audio-r1)
Results:
✅96.4% accuracy on BigBench Audio, setting a new record and surpassing Grok, Gemini, OpenAI, and Google models (Fig. 1).
✅1.51s TTFA (Time-to-First-Audio), fast enough to feel like a real conversation
🚀Step-Audio-R1.1 demonstrates that deep reasoning and real-time performance are no longer a trade-off. As shown in Fig. 2, it maintains the highest reasoning depth in class while staying under the critical ~1.5s latency threshold.
What’s under the hood
📌Test-time compute scaling: Step-Audio-R1, the first audio LLM to unlock dynamic compute allocation during inference.
📌End-to-end audio reasoning: coherent, on-the-fly reasoning with zero added latency.
📌Scalable CoT optimized specifically for audio understanding tasks. | 2026-01-16T02:37:15 | Difficult-Cap-7527 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qe3v8e | false | null | t3_1qe3v8e | /r/LocalLLaMA/comments/1qe3v8e/step_fun_released_stepaudior11_on_hf_the_new/ | false | false | default | 9 | {'enabled': True, 'images': [{'id': '8x79rd6iimdg1', 'resolutions': [{'height': 84, 'url': 'https://preview.redd.it/8x79rd6iimdg1.jpeg?width=108&crop=smart&auto=webp&s=6806b35d353bcb9ffd2332a2c1e791d308c6a9e6', 'width': 108}, {'height': 169, 'url': 'https://preview.redd.it/8x79rd6iimdg1.jpeg?width=216&crop=smart&auto=webp&s=e3e5b301b7895f77389abbd43c9b84ec63fa84eb', 'width': 216}, {'height': 250, 'url': 'https://preview.redd.it/8x79rd6iimdg1.jpeg?width=320&crop=smart&auto=webp&s=1a86899d925f0a6943cdd31400eb61938516d0cf', 'width': 320}, {'height': 501, 'url': 'https://preview.redd.it/8x79rd6iimdg1.jpeg?width=640&crop=smart&auto=webp&s=bc78d237f3fa3ea80ef330f83e92971538554508', 'width': 640}, {'height': 752, 'url': 'https://preview.redd.it/8x79rd6iimdg1.jpeg?width=960&crop=smart&auto=webp&s=5ff5ff5d025879b1326c9c86e1e26bff9a7a1e3f', 'width': 960}], 'source': {'height': 845, 'url': 'https://preview.redd.it/8x79rd6iimdg1.jpeg?auto=webp&s=fcc9401ca0f334d762a490f2fafe4c80660eead9', 'width': 1078}, 'variants': {}}]} | |
[Question on running local AI models] | 0 | I'm basically asking about a chatbot like ChatGPT.
1. Are there any tiny models that use 2-4GB of VRAM, or is 8GB the smallest? (I just want something small to test around with, and I have 32GB of RAM as well, if that's a factor.)
2. Is there a way to add or train the AI on my own data? I wanna see how it would be if I made something highly specialized myself… 🤔
| 2026-01-16T02:35:15 | https://www.reddit.com/r/LocalLLaMA/comments/1qe3tm2/question_on_running_local_ai_models/ | NyannoKonekko | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qe3tm2 | false | null | t3_1qe3tm2 | /r/LocalLLaMA/comments/1qe3tm2/question_on_running_local_ai_models/ | false | false | self | 0 | null |