Dataset columns (types and value ranges):

| column | dtype | range |
|---|---|---|
| title | string | length 1–300 |
| score | int64 | 0–8.54k |
| selftext | string | length 0–41.5k |
| created | timestamp[ns] | 2023-04-01 04:30:41 – 2026-03-04 02:14:14 |
| url | string | length 0–878 |
| author | string | length 3–20 |
| domain | string | length 0–82 |
| edited | timestamp[ns] | 1970-01-01 00:00:00 – 2026-02-19 14:51:53 |
| gilded | int64 | 0–2 |
| gildings | string | 7 classes |
| id | string | length 7 |
| locked | bool | 2 classes |
| media | string | length 646–1.8k |
| name | string | length 10 |
| permalink | string | length 33–82 |
| spoiler | bool | 2 classes |
| stickied | bool | 2 classes |
| thumbnail | string | length 4–213 |
| ups | int64 | 0–8.54k |
| preview | string | length 301–5.01k |
bench maxxing??
21
https://huggingface.co/inclusionAI/Ring-1T-preview
2025-10-07T22:42:16
https://i.redd.it/mxvgitxnprtf1.png
BoringAd6806
i.redd.it
1970-01-01T00:00:00
0
{}
1o0tpm8
false
null
t3_1o0tpm8
/r/LocalLLaMA/comments/1o0tpm8/bench_maxxing/
false
false
default
21
{'enabled': True, 'images': [{'id': 'mxvgitxnprtf1', 'resolutions': [{'height': 54, 'url': 'https://preview.redd.it/mxvgitxnprtf1.png?width=108&crop=smart&auto=webp&s=f7a1fb299d22157e2e6fb4eaac4b830c10303bf3', 'width': 108}, {'height': 109, 'url': 'https://preview.redd.it/mxvgitxnprtf1.png?width=216&crop=smart&auto=webp&s=8c5159be6853f96a119ed05dd16921917cbe4005', 'width': 216}, {'height': 162, 'url': 'https://preview.redd.it/mxvgitxnprtf1.png?width=320&crop=smart&auto=webp&s=84ff171f4994d1845fac80574d9612f6d2bbb53c', 'width': 320}, {'height': 324, 'url': 'https://preview.redd.it/mxvgitxnprtf1.png?width=640&crop=smart&auto=webp&s=8370c577a018e9a54f7056a5b047373457fa59c7', 'width': 640}, {'height': 487, 'url': 'https://preview.redd.it/mxvgitxnprtf1.png?width=960&crop=smart&auto=webp&s=82ad83ff376fb9a6a1ca5799d5235cc60fcec105', 'width': 960}, {'height': 548, 'url': 'https://preview.redd.it/mxvgitxnprtf1.png?width=1080&crop=smart&auto=webp&s=3b0bc7f2f674560c4e8c6aa987e0e1d33827014e', 'width': 1080}], 'source': {'height': 1906, 'url': 'https://preview.redd.it/mxvgitxnprtf1.png?auto=webp&s=49c38ab3ee4ce051deecd14544fe69441751b100', 'width': 3754}, 'variants': {}}]}
How to set up a Linux environment?
5
I'm setting up a fresh WSL Ubuntu install for local LLMs (because my Debian install is a mess). I asked ChatGPT for some essential software/tools to install, and this is what it suggested: Conda/Miniconda (I think I want to use UV though), CUDA Toolkit, NVIDIA GPU monitoring (gpustat), PyTorch (torch, torchvision, torchaudio), TensorFlow-GPU, vLLM, and llama.cpp. What do you think of this list? What other software tools do you think I should install? And for those of you who use UV, does it really help avoid dependency hell? In the short time I tried running llama.cpp using venv/conda on my Debian install, I wasted a lot of time trying to fix dependency installation errors. Once I have a list of the best/most useful software, I want to create a script that automates the installation.
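A minimal sanity-check sketch for the kind of automation script mentioned above, assuming only the Python standard library; the binary names (`llama-server` from llama.cpp, `uv`, `gpustat`, `nvcc`) are assumptions taken from the list in the post and may differ depending on how you install each tool:

```python
# Check that the tools from the list are on PATH and that PyTorch sees the GPU.
import shutil

CLI_TOOLS = ["nvidia-smi", "nvcc", "gpustat", "uv", "llama-server"]

def check_cli_tools() -> None:
    for tool in CLI_TOOLS:
        path = shutil.which(tool)
        print(f"{tool:12s} {'found at ' + path if path else 'MISSING'}")

def check_pytorch_cuda() -> None:
    try:
        import torch  # only importable if PyTorch is installed in this environment
    except ImportError:
        print("torch        MISSING")
        return
    print(f"torch        {torch.__version__}, CUDA available: {torch.cuda.is_available()}")

if __name__ == "__main__":
    check_cli_tools()
    check_pytorch_cuda()
```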
2025-10-07T22:35:47
https://www.reddit.com/r/LocalLLaMA/comments/1o0tk6x/how_to_setup_linux_environment/
Techngro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0tk6x
false
null
t3_1o0tk6x
/r/LocalLLaMA/comments/1o0tk6x/how_to_setup_linux_environment/
false
false
self
5
null
BasedBase/Qwen3-Coder-30B-A3B-Instruct-480B-Distill-V2 is possibly just a copy of Qwen's regular Qwen3-Coder-30B-A3B-Instruct
100
This was brought up in [https://huggingface.co/BasedBase/Qwen3-Coder-30B-A3B-Instruct-480B-Distill-V2/discussions/1](https://huggingface.co/BasedBase/Qwen3-Coder-30B-A3B-Instruct-480B-Distill-V2/discussions/1). Not sure if it's true or not, but one user seems convinced by their tests that the models are identical. Maybe someone smarter than me can look into this and verify it.
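One hedged way to check the claim yourself is to hash the weight files in both repos and compare; the sketch below assumes `Qwen/Qwen3-Coder-30B-A3B-Instruct` is the upstream repo in question and that you are willing to download the shards (tens of GB). Identical hashes prove a byte-for-byte copy; differing hashes alone do not rule out near-identical weights.

```python
# Compare SHA-256 checksums of the safetensors shards shared by two HF repos.
import hashlib
from huggingface_hub import HfApi, hf_hub_download

REPO_A = "BasedBase/Qwen3-Coder-30B-A3B-Instruct-480B-Distill-V2"
REPO_B = "Qwen/Qwen3-Coder-30B-A3B-Instruct"  # assumed upstream repo

def sha256_of(repo_id: str, filename: str) -> str:
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

api = HfApi()
shared = set(api.list_repo_files(REPO_A)) & set(api.list_repo_files(REPO_B))
for name in sorted(f for f in shared if f.endswith(".safetensors")):
    a, b = sha256_of(REPO_A, name), sha256_of(REPO_B, name)
    print(f"{name}: {'identical' if a == b else 'differs'}")
```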
2025-10-07T22:05:06
https://www.reddit.com/r/LocalLLaMA/comments/1o0st2o/basedbaseqwen3coder30ba3binstruct480bdistillv2_is/
lemon07r
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0st2o
false
null
t3_1o0st2o
/r/LocalLLaMA/comments/1o0st2o/basedbaseqwen3coder30ba3binstruct480bdistillv2_is/
false
false
self
100
{'enabled': False, 'images': [{'id': '6c6jDrM-GkpgYwO5xiHyIaviSMUci7wXZRzlxcjNQcc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/6c6jDrM-GkpgYwO5xiHyIaviSMUci7wXZRzlxcjNQcc.png?width=108&crop=smart&auto=webp&s=e3c61f66f4cc8eff309951198fd9a652912e697a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/6c6jDrM-GkpgYwO5xiHyIaviSMUci7wXZRzlxcjNQcc.png?width=216&crop=smart&auto=webp&s=129076629daee45103df67bc190507f26e052879', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/6c6jDrM-GkpgYwO5xiHyIaviSMUci7wXZRzlxcjNQcc.png?width=320&crop=smart&auto=webp&s=e509b7004d0740178d0ff0d104da59543b24bca2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/6c6jDrM-GkpgYwO5xiHyIaviSMUci7wXZRzlxcjNQcc.png?width=640&crop=smart&auto=webp&s=8d28fddcd37418caffb64d963042d4c5ae17f110', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/6c6jDrM-GkpgYwO5xiHyIaviSMUci7wXZRzlxcjNQcc.png?width=960&crop=smart&auto=webp&s=0f0b714dd4a916fefd5b2aafd681fb2f779bdf69', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/6c6jDrM-GkpgYwO5xiHyIaviSMUci7wXZRzlxcjNQcc.png?width=1080&crop=smart&auto=webp&s=a8663253026067d21b040ced4a39d85eac48ebc4', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/6c6jDrM-GkpgYwO5xiHyIaviSMUci7wXZRzlxcjNQcc.png?auto=webp&s=0cd5a5c9bdaa29bd85f77a685907688c3e491e40', 'width': 1200}, 'variants': {}}]}
A 5-minute, no-BS way to pick a local model for your real task
4
**Hey fam**, I've been searching through posts on **how to pick a local model**, and I found lots of good posts emphasizing the fact that **it's highly unreliable for a universal benchmark**, and the **best way is to test local AI's with your own real usecases**. * [Where do I go to see benchmark comparisons of local models?](https://www.reddit.com/r/LocalLLaMA/comments/1mzejbm/where_do_i_go_to_see_benchmark_comparisons_of/) * [TTS Model Comparisons: My Personal Rankings (So far) of TTS Models](https://www.reddit.com/r/LocalLLaMA/comments/1mfjn88/tts_model_comparisons_my_personal_rankings_so_far/) * [🎧 Listen and Compare 12 Open-Source Text-to-Speech Models (Hugging Face Space)](https://www.reddit.com/r/LocalLLaMA/comments/1ltbrlf/listen_and_compare_12_opensource_texttospeech/) I want to share **my current way of picking a model in 5-10 mins**. Feel free to comment on your own usecases to testout, and would be awesome to have some feedbacks and model recommendations! **TLDR:** >**Goal:** help anyone quickly find a “good enough” local model for *their* workflow—without randomly chasing leaderboards. **My task:** private resume screening (50+ pages PDF) with inline citations. (I'm using a public resume book as an example) **Stack:** MacBook Air M2 (16GB) + Hyperlink as the local RAG runner (swap models for trials). **What to expect:** **- 5-minute model testing strategy** **-** My model recommendation for common doc-QA task (this might vary based on usecases) **Fileset & prompt:** * **Fileset:** Princeton Resume Book (public accessible) * **Prompt:** Who are most qualified candidate for IB at top-tier banks and why? https://reddit.com/link/1o0ssd2/video/r64ocg3wirtf1/player **5-minute protocol (once per model)** 1. **Connect files** into Hyperlink local file agent. 2. **Pick model** (remember to check the box for compatibility with your PC specs). 3. **Hit run** and observe. 4. **Verify citations**: do quotes match the page/line? **Ranked models with take aways** (fit 16GB & commonly used) 1. **cogito-preview-llama-3B-4bit** \- clear logic (eval criteria -> suggestions -> conclusion) 2. **granite-3.3-2B-Instruct-4bit** \- quick clean results, more criteria elaboration would be better 3. **Llama-3.2-3B-Instruct-4bit** \- straight to the point + but less citations (bad) **What mattered** (my priorities for the resume task) 1. **Citations > vibes.** If I can’t click file pages and see the proof, it’s a miss and I'll drop the model. 2. **Small models are good enough for my workflow.** 2–3B models were surprisingly competitive. 3. **Latency is real.** Sub-20s feels “usable”; slower than 40s makes me switch. **Caveats** * I can actually stress test with like 10,000 files indexed as my project scope which is pretty dope * Result favors **doc QA** with long PDFs; chatty coding or reasoning tasks will rank differently * Privacy note: public files here; for real resumes I keep everything local. **What's next?** I'll be sharing some more of my workflow testouts soon, especially with cloud-local AI collaboration in future posts. Happy to learn how other folks are using local AIs and suggestions for their **model & use-case** \+ takeways/recommendations (and a public fileset if possible).)
2025-10-07T22:04:21
https://www.reddit.com/r/LocalLLaMA/comments/1o0ssd2/a_5minute_nobs_way_to_pick_a_local_model_for_your/
Zealousideal-Fox-76
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0ssd2
false
null
t3_1o0ssd2
/r/LocalLLaMA/comments/1o0ssd2/a_5minute_nobs_way_to_pick_a_local_model_for_your/
false
false
self
4
null
Thinking of text-to-image models
10
So, while I wait for MaxSun to release their B60 Turbo card (I plan to buy two), I am learning about kv-cache, quantization and the like, and crawling the vLLM docs to learn the best parameters to set when using it as a backend for LocalAI, which I plan to use as my primary inference server. One of the most-used features for me in ChatGPT that I want to have at home is image generation. It does not need to be great, it just needs to be "good". The reason is that I often feed reference images and text to ChatGPT to draw certain details of characters that I have difficulty imagining - I am visually impaired, and whilst my imagination is solid, having a bit of visual material to go along with it is really helpful. The primary model I will run is Qwen3 32B Q8 with a similarly quantized kv-cache, where the latter is largely offloaded to host memory (thinking of 512GB - Epyc 9334, so DDR5). Qwen3 should run "fast" (high-ish t/s - I am targeting around 15). But on the side, loaded on demand, I want to be able to generate images. Parallelism for that configuration will be set to one - I only need one instance and one inference of a text-to-image model at a time. I looked at FLUX, HiDream, a demo of HunyuanImage-3.0 and NanoBanana, and I like the latter two's output quite a lot. So something like this would be nice to host locally, even if not as good as those. What are the "state of the art" locally runnable text-to-image models? I am targeting a Supermicro H13SSL-N motherboard; if I plug the B60s into the lower two x16 slots, I technically have another left for a 2-slot x16 card, where I might plop in a cheaper, lower-power card just for "other models" in the future, where speed does not matter too much (perhaps the AMD AI Pro R9700 - seems it'd fit). If the model happened to also be text+image-to-image, that'd be really useful. Unfortunately, ComfyUI kinda breaks me (too many lines, completely defeats my vision...), so I would have to use a template here if needed. Thank you and kind regards!
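For reference, a hedged sketch of what running FLUX locally looks like through the diffusers library, without touching ComfyUI; the model id, step count, and guidance value are the commonly documented ones for FLUX.1-dev (a gated, fairly large checkpoint), so treat this as a starting point rather than a tuned recipe:

```python
# Text-to-image with FLUX.1-dev via diffusers; cpu offload helps on smaller GPUs.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # spill layers to system RAM to fit smaller GPUs

image = pipe(
    "a watercolor portrait of a silver-haired elven archer, soft morning light",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("character_reference.png")
```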
2025-10-07T21:16:40
https://www.reddit.com/r/LocalLLaMA/comments/1o0rjxl/thinking_of_texttoimage_models/
IngwiePhoenix
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0rjxl
false
null
t3_1o0rjxl
/r/LocalLLaMA/comments/1o0rjxl/thinking_of_texttoimage_models/
false
false
self
10
null
128GB VRAM Model for 8xA4000?
2
I have repurposed 8x Quadro A4000 cards in one server at work, so 8x16=128GB of VRAM. What would be useful to run on it? It looks like there are models sized for a 24GB 4090, and then nothing until you need 160GB+ of VRAM. Any suggestions? I haven't played with Cursor or other coding tools yet, so testing those would also be useful.
2025-10-07T21:02:27
https://www.reddit.com/r/LocalLLaMA/comments/1o0r6dc/128gb_vram_model_for_8xa4000/
t3chguy1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0r6dc
false
null
t3_1o0r6dc
/r/LocalLLaMA/comments/1o0r6dc/128gb_vram_model_for_8xa4000/
false
false
self
2
null
Samsung Paper Reveals a Recursive Technique that Beats Gemini 2.5 Pro on ARC-AGI with 0.01% of the Parameters!
129
2025-10-07T20:13:48
https://arxiv.org/abs/2510.04871
abdouhlili
arxiv.org
1970-01-01T00:00:00
0
{}
1o0puzj
false
null
t3_1o0puzj
/r/LocalLLaMA/comments/1o0puzj/samsung_paper_reveals_a_recursive_technique_that/
false
false
default
129
null
Looking to self host translation service
1
Looking for options to translate WordPress content into as many languages as possible. Quality is much more important than speed. It looks like No Language Left Behind (NLLB) by Meta would be a good choice, but I was wondering if there are better, newer models. I see many options, but I wouldn't know how to even check whether they are accurate.
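A minimal sketch of running NLLB locally with the transformers translation pipeline, which also makes it easy to spot-check accuracy by round-tripping text through a second pipeline; the distilled 600M checkpoint and the FLORES-style language codes are assumptions to verify against the model card for whichever size you deploy:

```python
# Translate a snippet with NLLB; language codes follow the FLORES-200 scheme.
from transformers import pipeline

translator = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-600M",
    src_lang="eng_Latn",
    tgt_lang="deu_Latn",  # German as an example target
)

post = "WordPress makes it easy to publish content in many languages."
print(translator(post, max_length=512)[0]["translation_text"])
```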
2025-10-07T20:12:06
https://www.reddit.com/r/LocalLLaMA/comments/1o0ptds/looking_to_self_host_translation_service/
superdav42
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0ptds
false
null
t3_1o0ptds
/r/LocalLLaMA/comments/1o0ptds/looking_to_self_host_translation_service/
false
false
self
1
null
AMD radeon pro v710
1
Why isn’t this GPU a popular choice for inference? https://www.techpowerup.com/gpu-specs/radeon-pro-v710.c4234
2025-10-07T20:05:27
https://www.reddit.com/r/LocalLLaMA/comments/1o0pmza/amd_radeon_pro_v710/
NoFudge4700
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0pmza
false
null
t3_1o0pmza
/r/LocalLLaMA/comments/1o0pmza/amd_radeon_pro_v710/
false
false
self
1
{'enabled': False, 'images': [{'id': 'Ctk3Ghh2AqrkJ3cruF_BUtL-snQPFwk-XZuFu-X2AuM', 'resolutions': [{'height': 48, 'url': 'https://external-preview.redd.it/Ctk3Ghh2AqrkJ3cruF_BUtL-snQPFwk-XZuFu-X2AuM.jpeg?width=108&crop=smart&auto=webp&s=17640ac31e26322369600f6e938e3a7a35fc4870', 'width': 108}, {'height': 96, 'url': 'https://external-preview.redd.it/Ctk3Ghh2AqrkJ3cruF_BUtL-snQPFwk-XZuFu-X2AuM.jpeg?width=216&crop=smart&auto=webp&s=be26586dd5fb34e95594d1e0b0fe7ce8ee6fb8d5', 'width': 216}, {'height': 142, 'url': 'https://external-preview.redd.it/Ctk3Ghh2AqrkJ3cruF_BUtL-snQPFwk-XZuFu-X2AuM.jpeg?width=320&crop=smart&auto=webp&s=2795a56c2b4c423289ecf426b7f6445b10d3dc2a', 'width': 320}], 'source': {'height': 270, 'url': 'https://external-preview.redd.it/Ctk3Ghh2AqrkJ3cruF_BUtL-snQPFwk-XZuFu-X2AuM.jpeg?auto=webp&s=ccb907593d60be01710f592eb2a2000ee0a49928', 'width': 607}, 'variants': {}}]}
Granite Docling WebGPU: State-of-the-art document parsing 100% locally in your browser.
579
IBM recently released Granite Docling, a 258M parameter VLM engineered for efficient document conversion. So, I decided to build a demo which showcases the model running entirely in your browser with WebGPU acceleration. Since the model runs locally, no data is sent to a server (perfect for private and sensitive documents). As always, the demo is available and open source on Hugging Face: [https://huggingface.co/spaces/ibm-granite/granite-docling-258M-WebGPU](https://huggingface.co/spaces/ibm-granite/granite-docling-258M-WebGPU) Hope you like it!
2025-10-07T20:00:14
https://v.redd.it/33mh4fzdwqtf1
xenovatech
/r/LocalLLaMA/comments/1o0php3/granite_docling_webgpu_stateoftheart_document/
1970-01-01T00:00:00
0
{}
1o0php3
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/33mh4fzdwqtf1/DASHPlaylist.mpd?a=1762588819%2CYTQ0NTkyNmQzYWJiZWZmMWI1ZTQyNmY0OGU5MzczMzNmYjVkNWM3ZTZlNjIyZGM3ZTBlZmU4MmE1ZmZmYzNhMw%3D%3D&v=1&f=sd', 'duration': 67, 'fallback_url': 'https://v.redd.it/33mh4fzdwqtf1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/33mh4fzdwqtf1/HLSPlaylist.m3u8?a=1762588819%2CZDVmYjk3Mzg3NzhmZDE2MDU3YzFiOWY5ODQ4ZmFlYjg5YzA3MjBiNTBhNTBlYjc5Njg5ZmU2M2E0N2UxNjg2MA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/33mh4fzdwqtf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1574}}
t3_1o0php3
/r/LocalLLaMA/comments/1o0php3/granite_docling_webgpu_stateoftheart_document/
false
false
https://external-preview…aee1566d0ca624b5
579
{'enabled': False, 'images': [{'id': 'bXZwenNmemR3cXRmMQIkfIP27ngHfIf2o9FEvt2htapLOK3sF-ey3U1M3aWC', 'resolutions': [{'height': 74, 'url': 'https://external-preview.redd.it/bXZwenNmemR3cXRmMQIkfIP27ngHfIf2o9FEvt2htapLOK3sF-ey3U1M3aWC.png?width=108&crop=smart&format=pjpg&auto=webp&s=0cb990af25d26e15400f7097c56f987b94ab5b62', 'width': 108}, {'height': 148, 'url': 'https://external-preview.redd.it/bXZwenNmemR3cXRmMQIkfIP27ngHfIf2o9FEvt2htapLOK3sF-ey3U1M3aWC.png?width=216&crop=smart&format=pjpg&auto=webp&s=cbd3058205024ca52386017a90b2cc164cbcd8b3', 'width': 216}, {'height': 219, 'url': 'https://external-preview.redd.it/bXZwenNmemR3cXRmMQIkfIP27ngHfIf2o9FEvt2htapLOK3sF-ey3U1M3aWC.png?width=320&crop=smart&format=pjpg&auto=webp&s=9f29efa9302992fa49966c494a86df4c642f1411', 'width': 320}, {'height': 439, 'url': 'https://external-preview.redd.it/bXZwenNmemR3cXRmMQIkfIP27ngHfIf2o9FEvt2htapLOK3sF-ey3U1M3aWC.png?width=640&crop=smart&format=pjpg&auto=webp&s=ab69578cb7d3fe72845a2047fd96862947ce6949', 'width': 640}, {'height': 658, 'url': 'https://external-preview.redd.it/bXZwenNmemR3cXRmMQIkfIP27ngHfIf2o9FEvt2htapLOK3sF-ey3U1M3aWC.png?width=960&crop=smart&format=pjpg&auto=webp&s=d0ba8e6e3917110788765938f2a64a2d0e7cacbb', 'width': 960}, {'height': 741, 'url': 'https://external-preview.redd.it/bXZwenNmemR3cXRmMQIkfIP27ngHfIf2o9FEvt2htapLOK3sF-ey3U1M3aWC.png?width=1080&crop=smart&format=pjpg&auto=webp&s=0b7250ea113fe2f6ede4a75e067d282f0c674fa2', 'width': 1080}], 'source': {'height': 1714, 'url': 'https://external-preview.redd.it/bXZwenNmemR3cXRmMQIkfIP27ngHfIf2o9FEvt2htapLOK3sF-ey3U1M3aWC.png?format=pjpg&auto=webp&s=2570b06e97af616e1c2927d959dd2c1e705c7ae4', 'width': 2498}, 'variants': {}}]}
Qwen3 switches to only numbers when generating responses.
1
I'm using Qwen3 32B from Unsloth: [https://huggingface.co/unsloth/Qwen3-32B-128K-GGUF](https://huggingface.co/unsloth/Qwen3-32B-128K-GGUF) I downloaded this model via LM Studio. What might be the reason for this?
2025-10-07T19:53:08
https://i.redd.it/rb4usp57vqtf1.png
Wonsz170
i.redd.it
1970-01-01T00:00:00
0
{}
1o0paqo
false
null
t3_1o0paqo
/r/LocalLLaMA/comments/1o0paqo/qwen3_switches_to_only_numbers_when_generating/
false
false
default
1
{'enabled': True, 'images': [{'id': 'rb4usp57vqtf1', 'resolutions': [{'height': 13, 'url': 'https://preview.redd.it/rb4usp57vqtf1.png?width=108&crop=smart&auto=webp&s=39b7f7e08ae6ee938a88ea8623a02dde39db0701', 'width': 108}, {'height': 26, 'url': 'https://preview.redd.it/rb4usp57vqtf1.png?width=216&crop=smart&auto=webp&s=900ee78764ce3834f09362d8f16c15d8807d8a67', 'width': 216}, {'height': 38, 'url': 'https://preview.redd.it/rb4usp57vqtf1.png?width=320&crop=smart&auto=webp&s=4b124edaa8afba07f140cc7d166f5d24460bfa5b', 'width': 320}, {'height': 77, 'url': 'https://preview.redd.it/rb4usp57vqtf1.png?width=640&crop=smart&auto=webp&s=6d04e0efdc4a1bac363f42dbb4da63c69ef54dfe', 'width': 640}], 'source': {'height': 113, 'url': 'https://preview.redd.it/rb4usp57vqtf1.png?auto=webp&s=d5cd16f0cb879779d46b5df4f67fee50129015fb', 'width': 935}, 'variants': {}}]}
We open-sourced Echo Mode — a middleware that keeps your LLMs’ tone stable across long conversations
1
[removed]
2025-10-07T19:49:08
https://www.reddit.com/r/LocalLLaMA/comments/1o0p6v4/we_opensourced_echo_mode_a_middleware_that_keeps/
Medium_Charity6146
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0p6v4
false
null
t3_1o0p6v4
/r/LocalLLaMA/comments/1o0p6v4/we_opensourced_echo_mode_a_middleware_that_keeps/
false
false
self
1
null
We open-sourced Echo Mode — a middleware that keeps your LLMs’ tone stable across long conversations
1
[removed]
2025-10-07T19:47:34
https://www.reddit.com/r/LocalLLaMA/comments/1o0p5fg/we_opensourced_echo_mode_a_middleware_that_keeps/
Medium_Charity6146
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0p5fg
false
null
t3_1o0p5fg
/r/LocalLLaMA/comments/1o0p5fg/we_opensourced_echo_mode_a_middleware_that_keeps/
false
false
self
1
null
We open-sourced Echo Mode — a middleware that keeps your LLMs’ tone stable across long conversations
1
[removed]
2025-10-07T19:46:55
https://www.reddit.com/r/LocalLLaMA/comments/1o0p4t4/we_opensourced_echo_mode_a_middleware_that_keeps/
Medium_Charity6146
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0p4t4
false
null
t3_1o0p4t4
/r/LocalLLaMA/comments/1o0p4t4/we_opensourced_echo_mode_a_middleware_that_keeps/
true
false
self
1
null
We open-sourced Echo Mode — a middleware that keeps your LLMs’ tone stable across long conversations
1
[removed]
2025-10-07T19:44:33
https://www.reddit.com/r/LocalLLaMA/comments/1o0p2ho/we_opensourced_echo_mode_a_middleware_that_keeps/
Medium_Charity6146
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0p2ho
false
null
t3_1o0p2ho
/r/LocalLLaMA/comments/1o0p2ho/we_opensourced_echo_mode_a_middleware_that_keeps/
false
false
self
1
null
We open-sourced Echo Mode — a middleware that keeps your LLMs’ tone stable across long conversations
1
[removed]
2025-10-07T19:43:59
https://www.reddit.com/r/LocalLLaMA/comments/1o0p1xv/we_opensourced_echo_mode_a_middleware_that_keeps/
Medium_Charity6146
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0p1xv
false
null
t3_1o0p1xv
/r/LocalLLaMA/comments/1o0p1xv/we_opensourced_echo_mode_a_middleware_that_keeps/
false
false
self
1
null
We open-sourced Echo Mode — a middleware that keeps your LLMs’ tone stable across long conversations
1
[removed]
2025-10-07T19:43:27
https://www.reddit.com/r/LocalLLaMA/comments/1o0p1f7/we_opensourced_echo_mode_a_middleware_that_keeps/
Medium_Charity6146
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0p1f7
false
null
t3_1o0p1f7
/r/LocalLLaMA/comments/1o0p1f7/we_opensourced_echo_mode_a_middleware_that_keeps/
false
false
self
1
null
How much do 1T tokens cost? How much did all these amazing people spend on OpenAI tokens?
56
I did some math as a follow-up to OpenAI’s Dev Day yesterday and decided to share it here. Assuming GPT-5 with a 4:1 input:output token ratio, 1T tokens means 800 billion input tokens at $1.25 per million, which is $1,000,000, plus 200 billion output tokens at $10 per million, adding $2,000,000, for a total of $3,000,000 for 1T tokens. In the photo, 30 people consumed 1T tokens, 70 people 100B tokens, and 54 people 10B tokens, totaling $112,620,000, which is roughly 3% of OpenAI’s total $3.7 billion revenue in 2024. Curious - is it even possible to process this many tokens using local models? What would be the cost in GPUs and residential electricity? 🧐⚡️
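The arithmetic above, reproduced as a short script (the prices and the 4:1 input:output split are the assumptions stated in the post):

```python
# GPT-5 list prices per token and a cost helper for a 4:1 input:output split.
INPUT_PRICE = 1.25 / 1e6   # $ per input token
OUTPUT_PRICE = 10.0 / 1e6  # $ per output token

def cost(total_tokens: float, input_ratio: float = 0.8) -> float:
    """Cost of a token budget split 4:1 between input and output."""
    return (total_tokens * input_ratio * INPUT_PRICE
            + total_tokens * (1 - input_ratio) * OUTPUT_PRICE)

print(f"1T tokens: ${cost(1e12):,.0f}")  # $3,000,000
total = 30 * cost(1e12) + 70 * cost(1e11) + 54 * cost(1e10)
print(f"Everyone in the photo: ${total:,.0f}")  # $112,620,000
```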
2025-10-07T19:32:48
https://x.com/abylayo/status/1975546166113669170?s=46
aospan
x.com
1970-01-01T00:00:00
0
{}
1o0or4w
false
null
t3_1o0or4w
/r/LocalLLaMA/comments/1o0or4w/how_much_does_1t_tokens_cost_how_much_did_all/
false
false
default
56
null
SFF 70W GPUs: Intel Arc Pro B50 vs NVIDIA RTX Pro 4000 SFF
2
Considering purchasing a GPU for my SFF PC to use for local LLMs with Home Assistant Voice Assistant and Ollama on Linux. My goal is low latency for a voice assistant for general knowledge and tool calling. Right now I use Gemma3n:e4b (CPU only) without tool calling, but, in general, I would like to use bigger models. To upgrade my current PC, I would need a GPU that can be powered by PCIe at approximately 75W. Would you recommend an Intel Arc Pro B50 at $350, waiting for an NVIDIA RTX Pro 4000 SFF at $1500, or starting over with a new standard-size PC? I've looked for a used RTX 4000 Ada SFF and a used RTX 2000 Ada SFF, but selection was limited. Is the NVIDIA solution overkill? Is there any worry that the Intel Arc GPU would lose support with Ollama in the future? Right now, I don't think Arc is centrally supported. Intel Arc Pro B50 * 16GB GDDR6 * 70W TDP * 224 GB/s * 170 TOPs at INT8 * $349 NVIDIA RTX Pro 4000 Blackwell SFF * 24GB GDDR7 (ECC) * 70W TDP * 432 GB/s * 770 TOPs at FP4 * Est $1500
2025-10-07T19:30:01
https://www.reddit.com/r/LocalLLaMA/comments/1o0oogc/sff_70w_gpus_intel_arc_pro_b50_vs_nvidia_rtx_pro/
InternationalNebula7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0oogc
false
null
t3_1o0oogc
/r/LocalLLaMA/comments/1o0oogc/sff_70w_gpus_intel_arc_pro_b50_vs_nvidia_rtx_pro/
false
false
self
2
null
How much do 1T tokens cost? How much did all these amazing people spend on OpenAI tokens? (spoiler: $112,620,000).
3
I did some math as a follow-up to OpenAI’s Dev Day yesterday and decided to share it here. Assuming GPT-5 with a 4:1 input:output token ratio, 1T tokens means 800 billion input tokens at $1.25 per million, which is $1,000,000, plus 200 billion output tokens at $10 per million, adding $2,000,000, for a total of $3,000,000 for 1T tokens. In the photo, 30 people consumed 1T tokens, 70 people 100B tokens, and 54 people 10B tokens, totaling $112,620,000, which is roughly 3% of OpenAI’s total $3.7 billion revenue in 2024. Curious - is it even possible to process this many tokens using local models? What would be the cost in GPUs and residential electricity? 🧐⚡️
2025-10-07T19:26:21
https://x.com/abylayo/status/1975546166113669170?s=46
aospan
x.com
1970-01-01T00:00:00
0
{}
1o0okur
false
null
t3_1o0okur
/r/LocalLLaMA/comments/1o0okur/how_much_does_1t_tokens_cost_how_much_did_all/
true
false
spoiler
3
null
$15k to throw away for a self-hosted LLM. What would you guys recommend hardware-wise for running a model like Perplexica?
7
I’m not really a hardware expert and would like to optimize, so I was hoping for input.
2025-10-07T19:17:43
https://www.reddit.com/r/LocalLLaMA/comments/1o0ocex/15k_to_throwaway_for_a_selfhosted_ilm_what_would/
gacimba
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0ocex
false
null
t3_1o0ocex
/r/LocalLLaMA/comments/1o0ocex/15k_to_throwaway_for_a_selfhosted_ilm_what_would/
false
false
self
7
null
This is how much the Apple models are behind
0
2025-10-07T18:54:38
https://i.redd.it/wrrltw92lqtf1.jpeg
narcomo
i.redd.it
1970-01-01T00:00:00
0
{}
1o0nphz
false
null
t3_1o0nphz
/r/LocalLLaMA/comments/1o0nphz/this_is_how_much_the_apple_models_are_behind/
false
false
https://b.thumbs.redditm…zJY7wDetXJxA.jpg
0
{'enabled': True, 'images': [{'id': 'YFf53h1kM5nh6g2jvZCRG1OZmLeWoF-tISYFK6OPwLQ', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/wrrltw92lqtf1.jpeg?width=108&crop=smart&auto=webp&s=004aa91dac2a7d1e25e80279d52307e7bcd7d25c', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/wrrltw92lqtf1.jpeg?width=216&crop=smart&auto=webp&s=df99f45a7c1d3b141e394d83a4d4524ddc6de459', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/wrrltw92lqtf1.jpeg?width=320&crop=smart&auto=webp&s=320e060c47a22b377de78e8ae6e40cc3bfa8ab64', 'width': 320}, {'height': 361, 'url': 'https://preview.redd.it/wrrltw92lqtf1.jpeg?width=640&crop=smart&auto=webp&s=e1ca524a020a817ba5e4a67ca26154a35556219f', 'width': 640}, {'height': 541, 'url': 'https://preview.redd.it/wrrltw92lqtf1.jpeg?width=960&crop=smart&auto=webp&s=e449d8c09f86432bef302b655d73da8eb19b0e35', 'width': 960}, {'height': 609, 'url': 'https://preview.redd.it/wrrltw92lqtf1.jpeg?width=1080&crop=smart&auto=webp&s=0b84df8c57243c859d9bef5ee22fd0b802e185b9', 'width': 1080}], 'source': {'height': 728, 'url': 'https://preview.redd.it/wrrltw92lqtf1.jpeg?auto=webp&s=4a8258f22c8b7dd474e87227969121a190602335', 'width': 1290}, 'variants': {}}]}
Introducing SIM-CoT-GPT2-CODI: A LoRA-Fine-Tuned 346M Parameter Implicit Reasoning Model Leveraging Supervised Latent Space Stabilization via Auxiliary Decoder Alignment for 2.3x Token Efficiency Gains Over Explicit Chain-of-Thought on GSM8K and MultiArith Benchmarks
19
https://huggingface.co/internlm/SIM_COT-GPT2-CODI
2025-10-07T18:45:10
https://www.reddit.com/r/LocalLLaMA/comments/1o0nfmi/introducing_simcotgpt2codi_a_lorafinetuned_346m/
ArchdukeofHyperbole
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0nfmi
false
null
t3_1o0nfmi
/r/LocalLLaMA/comments/1o0nfmi/introducing_simcotgpt2codi_a_lorafinetuned_346m/
false
false
self
19
{'enabled': False, 'images': [{'id': 'VXIqRtK9cE3c8hDDzqWKWV7ZB-wF4z6lkm81v2Yu9uE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/VXIqRtK9cE3c8hDDzqWKWV7ZB-wF4z6lkm81v2Yu9uE.png?width=108&crop=smart&auto=webp&s=56954b25b9d73cfba6c8398070dc9f7a97c1ac79', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/VXIqRtK9cE3c8hDDzqWKWV7ZB-wF4z6lkm81v2Yu9uE.png?width=216&crop=smart&auto=webp&s=a786ebccc615a3a81640196bc6366a7192305236', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/VXIqRtK9cE3c8hDDzqWKWV7ZB-wF4z6lkm81v2Yu9uE.png?width=320&crop=smart&auto=webp&s=ee79b27e9f2dba872d5fe68e1664d8c5ee716580', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/VXIqRtK9cE3c8hDDzqWKWV7ZB-wF4z6lkm81v2Yu9uE.png?width=640&crop=smart&auto=webp&s=2cc5ec3ae9b1339dc9ed1b111659ec6bae30f622', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/VXIqRtK9cE3c8hDDzqWKWV7ZB-wF4z6lkm81v2Yu9uE.png?width=960&crop=smart&auto=webp&s=bc25c15e59bcdb821179cc69875fa662a8f36fea', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/VXIqRtK9cE3c8hDDzqWKWV7ZB-wF4z6lkm81v2Yu9uE.png?width=1080&crop=smart&auto=webp&s=3468c73dc90fb37343911dee7107618c2fee8770', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/VXIqRtK9cE3c8hDDzqWKWV7ZB-wF4z6lkm81v2Yu9uE.png?auto=webp&s=744fd0272fe5cc8d3e1e68370bca8f5a3facaa91', 'width': 1200}, 'variants': {}}]}
best video editing models?
0
I'm trying to aggregate APIs for the best video editing models I can find (cost isn't an issue) -- would appreciate any recs if people have them!
2025-10-07T18:40:37
https://www.reddit.com/r/LocalLLaMA/comments/1o0nb62/best_video_editing_models/
Helpful_Jacket8953
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0nb62
false
null
t3_1o0nb62
/r/LocalLLaMA/comments/1o0nb62/best_video_editing_models/
false
false
self
0
null
2 month MiniPC mini-review: Minisforum AI X1 Pro (AMD HX 370)
22
tl;dr: it's the AI Max 395+'s little brother. Half the price, but not a serious AI workstation.
2025-10-07T18:30:41
https://ivoras.substack.com/p/2-month-minipc-mini-review-minisforum
ivoras
ivoras.substack.com
1970-01-01T00:00:00
0
{}
1o0n17o
false
null
t3_1o0n17o
/r/LocalLLaMA/comments/1o0n17o/2_month_minipc_minireview_minisforum_ai_x1_pro/
false
false
default
22
{'enabled': False, 'images': [{'id': '9caqHyt_1L7Xsp4TpYFjM4oPMVmVIW9sJF7IMxoOn8I', 'resolutions': [{'height': 85, 'url': 'https://external-preview.redd.it/9caqHyt_1L7Xsp4TpYFjM4oPMVmVIW9sJF7IMxoOn8I.jpeg?width=108&crop=smart&auto=webp&s=35742ac79a2579d08a24bd9d27c0cadb309b0847', 'width': 108}, {'height': 171, 'url': 'https://external-preview.redd.it/9caqHyt_1L7Xsp4TpYFjM4oPMVmVIW9sJF7IMxoOn8I.jpeg?width=216&crop=smart&auto=webp&s=8fd0c59260b5574be0169d557e40e6bfb3d914e8', 'width': 216}, {'height': 253, 'url': 'https://external-preview.redd.it/9caqHyt_1L7Xsp4TpYFjM4oPMVmVIW9sJF7IMxoOn8I.jpeg?width=320&crop=smart&auto=webp&s=a78dfd017145cb54d5d1bf3ff16da49935fcf92a', 'width': 320}], 'source': {'height': 380, 'url': 'https://external-preview.redd.it/9caqHyt_1L7Xsp4TpYFjM4oPMVmVIW9sJF7IMxoOn8I.jpeg?auto=webp&s=67dfa724ad5cadd226779aa3a370ede9eb961df1', 'width': 480}, 'variants': {}}]}
For MAC LLM Prompt processing speeds Gemma 3 seems like an ideal LLM
4
I've been looking for solutions on this issue for a while now with MAC, MLX and unified memory. The prompt processing speed. It is like everyone one else says; simply put, not practical for turn based conversations. What you see instantly with checkpoints like QWEN3 30B INS in 8bit or 4bit MLX quants is instant speed token generation, but as the conversation grows the prompt processing times are significant. For example on a 100K context window the Qwen 3 MOE A3B 30B takes about 3-5 minutes of processing time depending on your context type. And that is a LOT and not practical. So enter GEMMA 3 12B GGUF (llama.cpp) Q8. I've tested this model (Not MLX) and noticed that although its tokens per second might not be a match with the MLX variant, it makes up a whole lot more with prompt processing times. My test using this model with "flash attention (experimental)" on on LM studio on a 100K context window has been stellar. Initial prompt processing 1-3 minutes and subsequent prompts take about 15-30 seconds roughly the same amount of time the GEMINI 2.5 flash takes to process. This tells me that enterprise grade prompt processing times on MAC is not just possible, but its already here and proven in a model as dense as 12B which is vision capable and surprisingly the solution seems to be the llama.cpp framework and not MLX. I've tried other gguf quants with other models with flash attention, none gave me the same results as this one. If someone with actual technical understanding can understand what makes this particular 12B architecture almost instant, then I truly see MACs competing with Nvidia in daily use cases.
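A rough way to see why prefill speed dominates here: time-to-first-token is approximately prompt tokens divided by prompt-processing tokens per second. The speeds below are illustrative assumptions consistent with the timings reported in the post, not measurements:

```python
# Back-of-the-envelope time-to-first-token for a 100K-token prompt.
def ttft_seconds(prompt_tokens: int, prefill_tps: float) -> float:
    """Rough time-to-first-token: prompt length divided by prefill speed."""
    return prompt_tokens / prefill_tps

scenarios = [
    ("Qwen3 30B MoE, MLX (assumed ~400 t/s prefill)", 400),
    ("Gemma 3 12B GGUF + flash attention (assumed ~1200 t/s prefill)", 1200),
]
for label, tps in scenarios:
    print(f"{label}: {ttft_seconds(100_000, tps) / 60:.1f} min for a 100K-token prompt")
```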
2025-10-07T18:19:14
https://www.reddit.com/r/LocalLLaMA/comments/1o0mph2/for_mac_llm_prompt_processing_speeds_gemma_3/
supermazdoor
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0mph2
false
null
t3_1o0mph2
/r/LocalLLaMA/comments/1o0mph2/for_mac_llm_prompt_processing_speeds_gemma_3/
false
false
self
4
null
What needs to change to make LLMs more efficient?
0
LLMs are great in a lot of ways, and they are showing signs of improvement. I also think they're incredibly inefficient when it comes to resource consumption because they use up *far* too much of everything: - Too much heat generated. - Too much power consumed. - Too much storage space used up. - Too much RAM to fall back on. - Too much VRAM to load and run them. - Too many calculations when processing input. - Too much money to train them (mostly). Most of these problems require solutions in the form of expensive hardware upgrades. It's a miracle we can even run them at all locally, and my hat's off to those who can run decent-quality models on mobile. It almost feels like those room-sized computers many decades ago that used up that much space to run simple commands at a painstakingly slow pace. There's just something about frontier models that, although they are a huge leap from what we had a few years ago, still feels like they use up a lot more resources than they should. Do you think we might reach a watershed moment, like computers did with transistors, integrated circuits and microprocessors back then, that would make it exponentially cheaper to run the models locally? Or are we reaching a wall with modern LLMs/LMMs that requires a fundamentally different solution?
2025-10-07T18:08:18
https://www.reddit.com/r/LocalLLaMA/comments/1o0me3x/what_needs_to_change_to_make_llms_more_efficient/
swagonflyyyy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0me3x
false
null
t3_1o0me3x
/r/LocalLLaMA/comments/1o0me3x/what_needs_to_change_to_make_llms_more_efficient/
false
false
self
0
null
Minimum specs to fine-tune 27b parameter model
3
Hi, I'm new to running local LLMs. I have a 5070 Ti and have successfully fine-tuned a 3B parameter model. I want to know the minimum GPU specs required to fine-tune a 27B parameter model, to see whether I can afford it (with and without quantization).
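For a rough sense of scale, a back-of-the-envelope sketch (weights + gradients + Adam states only, ignoring activations and framework overhead); the ~1% trainable-parameter figure for LoRA/QLoRA and the byte counts are illustrative assumptions, not exact numbers for any specific trainer:

```python
# Rough VRAM estimates for fine-tuning a 27B model.
PARAMS = 27e9
GIB = 1024**3

def full_finetune_bf16_gib() -> float:
    # bf16 weights (2 B/param) + bf16 grads (2) + fp32 Adam moments and master copy (12)
    return PARAMS * (2 + 2 + 12) / GIB

def qlora_4bit_gib(trainable_fraction: float = 0.01) -> float:
    base = PARAMS * 0.5 / GIB  # 4-bit frozen base weights
    adapters = PARAMS * trainable_fraction * (2 + 2 + 12) / GIB
    return base + adapters

print(f"Full fine-tune (bf16 + Adam): ~{full_finetune_bf16_gib():.0f} GiB")
print(f"QLoRA, ~1% trainable params:  ~{qlora_4bit_gib():.0f} GiB plus activations")
```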
2025-10-07T17:59:16
https://www.reddit.com/r/LocalLLaMA/comments/1o0m4vh/minimum_specs_to_finetune_27b_parameter_model/
kalyankd03
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0m4vh
false
null
t3_1o0m4vh
/r/LocalLLaMA/comments/1o0m4vh/minimum_specs_to_finetune_27b_parameter_model/
false
false
self
3
null
Is it possible to add new characters in Kokoro TTS?
4
Hi everyone, I want to know if there is a way to add new voices/characters in Kokoro, or whether any future updates are expected for this model. I have been using Kokoro for quite a while now. Its voices are good, but not suitable for all types of narration. I have tried searching for other TTS models, but they are resource-demanding and I don't have the resources; I am running Kokoro on CPU only at the moment. If you know of something very similar in the same range, please share - I would appreciate that.
2025-10-07T17:52:49
https://www.reddit.com/r/LocalLLaMA/comments/1o0lygb/is_it_possible_to_add_new_characters_in_kokoro_tts/
Muzamilkhan7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0lygb
false
null
t3_1o0lygb
/r/LocalLLaMA/comments/1o0lygb/is_it_possible_to_add_new_characters_in_kokoro_tts/
false
false
self
4
null
Fan shroud for AMD MI50
54
Hi, since the AMD MI50 is the cheapest graphics card with 32GB of VRAM you can get at the moment, I bought 3 of them. In order to make them fit better in my case, I designed a new shroud for the card which integrates a blower fan. You can find it here: https://www.printables.com/model/1421067-amd-instinct-mi50-shroud
2025-10-07T17:52:35
https://www.reddit.com/r/LocalLLaMA/comments/1o0ly7m/fan_shroud_for_amd_mi50/
Bit_Matter
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0ly7m
false
null
t3_1o0ly7m
/r/LocalLLaMA/comments/1o0ly7m/fan_shroud_for_amd_mi50/
false
false
self
54
{'enabled': False, 'images': [{'id': '4D_pAwN3gJeiBHNxYWmWEeKj66n_dCn5WnWrlEoC55g', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/4D_pAwN3gJeiBHNxYWmWEeKj66n_dCn5WnWrlEoC55g.jpeg?width=108&crop=smart&auto=webp&s=a9b3d3a6649b7e13cf5569ca6a9635b44602d766', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/4D_pAwN3gJeiBHNxYWmWEeKj66n_dCn5WnWrlEoC55g.jpeg?width=216&crop=smart&auto=webp&s=af77e0ce508a3ccf1d5e5f20e4d4001de1b5d7e4', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/4D_pAwN3gJeiBHNxYWmWEeKj66n_dCn5WnWrlEoC55g.jpeg?width=320&crop=smart&auto=webp&s=f4743ffb43675f87036c09fe727931319832e309', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/4D_pAwN3gJeiBHNxYWmWEeKj66n_dCn5WnWrlEoC55g.jpeg?width=640&crop=smart&auto=webp&s=c3d39a9ecf0ecf176436a3fa90c575df3632e66a', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/4D_pAwN3gJeiBHNxYWmWEeKj66n_dCn5WnWrlEoC55g.jpeg?width=960&crop=smart&auto=webp&s=003942c58097a401fd35412224744c669d041073', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/4D_pAwN3gJeiBHNxYWmWEeKj66n_dCn5WnWrlEoC55g.jpeg?width=1080&crop=smart&auto=webp&s=a29d553b329a89abb8d597b9edf10af74cababd2', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/4D_pAwN3gJeiBHNxYWmWEeKj66n_dCn5WnWrlEoC55g.jpeg?auto=webp&s=53047e5d5d73664e757fc35181eb3557851d62ad', 'width': 1200}, 'variants': {}}]}
Upload images dataset on HuggingFace
1
Can anyone tell me how to structure an image dataset and push it to Hugging Face in Parquet format? I have been struggling for 2 days 😭😭😭 just to upload my image dataset to Hugging Face properly, so that the dataset card shows the image and label columns.
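One hedged way to do this with the `datasets` library: lay the files out in an imagefolder structure, load it, and push it to the Hub, which writes Parquet shards and gives the viewer an image column plus a label column. The repo name below is a placeholder:

```python
# Load a folder of labeled images and push it to the Hub as Parquet.
from datasets import load_dataset

# Expected layout on disk:
# data/
#   train/
#     cat/0001.jpg
#     dog/0001.jpg
ds = load_dataset("imagefolder", data_dir="data")
print(ds["train"].features)  # {'image': Image(...), 'label': ClassLabel(...)}

# push_to_hub converts to Parquet shards, so the viewer shows image + label columns
ds.push_to_hub("your-username/my-image-dataset")  # placeholder repo id
```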
2025-10-07T17:47:08
https://www.reddit.com/r/LocalLLaMA/comments/1o0lsml/upload_images_dataset_on_huggingface/
Old-Raspberry-3266
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0lsml
false
null
t3_1o0lsml
/r/LocalLLaMA/comments/1o0lsml/upload_images_dataset_on_huggingface/
false
false
self
1
null
Why Doesn’t MCP Have a Capability Discovery Standard Yet?
1
[removed]
2025-10-07T17:37:55
https://www.reddit.com/r/LocalLLaMA/comments/1o0ljan/why_doesnt_mcp_have_a_capability_discovery/
Ok-Blacksmith-1328
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0ljan
false
null
t3_1o0ljan
/r/LocalLLaMA/comments/1o0ljan/why_doesnt_mcp_have_a_capability_discovery/
false
false
self
1
null
Would it make sense to train a model on Roo Code/Cline?
1
I remember back in the day there was a finetune of the first DeepSeek Coder models on Roo Code/Cline datasets. I was wondering if it makes sense these days to collect a dataset of Roo Code/Cline interactions with a SOTA model like GPT-5 or Sonnet 4.5 and train something like GLM 4.6 Air (when it comes out) to bring it to that level, or close to it?
2025-10-07T17:33:03
https://www.reddit.com/r/LocalLLaMA/comments/1o0leb4/would_it_make_sense_to_train_a_model_on_roo/
Mr_Moonsilver
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0leb4
false
null
t3_1o0leb4
/r/LocalLLaMA/comments/1o0leb4/would_it_make_sense_to_train_a_model_on_roo/
false
false
self
1
null
Is there a note-taking app that uses AI and voice commands?
2
Sorry to ask for this directly, but I didn't see any note-taking app that advertises this kind of feature set: * managing (CRUD) notes via voice commands * checking tasks via voice commands, assigning people to said tasks, sending emails * having both mobile + desktop clients * being self-hostable. Given the current open-source LLMs, this shouldn't be an impossible task. What do you think?
2025-10-07T17:19:54
https://www.reddit.com/r/LocalLLaMA/comments/1o0l11s/is_there_a_notetaking_app_that_uses_ai_and_voice/
UniqueAttourney
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0l11s
false
null
t3_1o0l11s
/r/LocalLLaMA/comments/1o0l11s/is_there_a_notetaking_app_that_uses_ai_and_voice/
false
false
self
2
null
Best practices for building production-level chatbots/AI agents (memory, model switching, stack choice)?
1
Hey folks, I’d like to get advice from senior devs who’ve actually shipped production chatbots / AI agents — especially ones doing things like web search, sales bots, or custom conversational assistants. I’ve been exploring LangChain, LangGraph, and other orchestration frameworks, but I want to make the right long-term choices. Specifically: Memory & chat history → What’s the best way to handle this (like GPTs with chat history like on side panel)? Do you prefer DB-backed memory, vector stores, custom session management, or built-in framework memory? Model switching → How do you reliably swap between different LLMs (OpenAI, Anthropic, open-source)? Do you rely on LangChain abstractions, or write your own router functions? Stack choice → Are you sticking with LangChain/LangGraph, or rolling your own orchestration layer for more control? Why? Reliability → For production systems (where reliability matters more than quick prototypes), what practices are you following that actually work long-term? I’m trying to understand what has worked well in the wild versus what looks good in demos. Any real-world war stories, architectural tips, or “don’t make this mistake” lessons would be hugely appreciated. Thanks
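On the model-switching point, a hedged sketch of a hand-rolled router: if every backend (local or hosted) is exposed as an OpenAI-compatible endpoint, swapping models becomes a dictionary lookup rather than a framework abstraction. The endpoints and model names below are illustrative assumptions:

```python
# Minimal model router over OpenAI-compatible endpoints (vLLM, llama.cpp server, etc.).
from openai import OpenAI

BACKENDS = {
    "local-fast":  {"base_url": "http://localhost:8000/v1", "api_key": "none",
                    "model": "qwen3-30b-a3b-instruct"},
    "local-smart": {"base_url": "http://localhost:8001/v1", "api_key": "none",
                    "model": "glm-4.6"},
}

def chat(backend: str, messages: list[dict]) -> str:
    cfg = BACKENDS[backend]
    client = OpenAI(base_url=cfg["base_url"], api_key=cfg["api_key"])
    resp = client.chat.completions.create(model=cfg["model"], messages=messages)
    return resp.choices[0].message.content

print(chat("local-fast", [{"role": "user", "content": "Summarize our refund policy."}]))
```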
2025-10-07T17:19:49
https://www.reddit.com/r/LocalLLaMA/comments/1o0l0yz/best_practices_for_building_productionlevel/
Funny_Working_7490
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0l0yz
false
null
t3_1o0l0yz
/r/LocalLLaMA/comments/1o0l0yz/best_practices_for_building_productionlevel/
false
false
self
1
null
Granite 4.0 on iGPU AMD Ryzen 6800H llama.cpp benchmark
27
New MoE model for testing: Granite-4.0-H-Small is a 32B parameter, 9B active and long-context instruct model [unsloth](https://huggingface.co/unsloth/granite-4.0-h-small-GGUF) System: Kubuntu 25.10 OS, Kernel 6.17.0-5-generic with 64GB DDR5 ram. AMD Radeon Graphics (RADV REMBRANDT) Ryzen 6800H and 680M iGPU Llama.cpp Vulkan build: ca71fb9b ([6692](https://github.com/ggml-org/llama.cpp/releases/tag/b6692)) granite-4.0-h-small-UD-Q8\_K\_XL.gguf |model|size|params|backend|ngl|test|t/s| |:-|:-|:-|:-|:-|:-|:-| |granitehybrid ?B Q8\_0|35.47 GiB|32.21 B|Vulkan|99|pp512|72.56 ± 0.79| |granitehybrid ?B Q8\_0|35.47 GiB|32.21 B|Vulkan|99|tg128|4.26 ± 0.49| granite-4.0-h-small-UD-Q6\_K\_XL.gguf |model|size|params|backend|ngl|test|t/s| |:-|:-|:-|:-|:-|:-|:-| |granitehybrid ?B Q6\_K|25.95 GiB|32.21 B|Vulkan|99|pp512|54.77 ± 1.87| |granitehybrid ?B Q6\_K|25.95 GiB|32.21 B|Vulkan|99|tg128|5.51 ± 0.49| granite-4.0-h-small-UD-Q5\_K\_XL.gguf |model|size|params|backend|ngl|test|t/s| |:-|:-|:-|:-|:-|:-|:-| |granitehybrid ?B Q5\_K - Medium|21.53 GiB|32.21 B|Vulkan|99|pp512|57.90 ± 4.46| |granitehybrid ?B Q5\_K - Medium|21.53 GiB|32.21 B|Vulkan|99|tg128|6.36 ± 0.02| granite-4.0-h-small-UD-Q4\_K\_XL.gguf |model|size|params|backend|ngl|test|t/s| |:-|:-|:-|:-|:-|:-|:-| |granitehybrid ?B Q4\_K - Medium|17.49 GiB|32.21 B|Vulkan|99|pp512|57.26 ± 2.02| |granitehybrid ?B Q4\_K - Medium|17.49 GiB|32.21 B|Vulkan|99|tg128|7.21 ± 0.01| granite-4.0-h-small-IQ4\_XS.gguf |model|size|params|backend|ngl|test|t/s| |:-|:-|:-|:-|:-|:-|:-| |granitehybrid ?B IQ4\_XS - 4.25 bpw|16.23 GiB|32.21 B|Vulkan|99|pp512|57.31 ± 2.65| |granitehybrid ?B IQ4\_XS - 4.25 bpw|16.23 GiB|32.21 B|Vulkan|99|tg128|7.17 ± 0.01| Add this for comparison: |model|size|params|t/s (pp512)|t/s (tg128)| |:-|:-|:-|:-|:-| |qwen3moe 30B.A3B Q4\_K|17.28|30.53 B|134.46 ± 0.45|28.26 ± 0.46| Simplified view: |model|size|params|t/s (pp512)|t/s (tg128)| |:-|:-|:-|:-|:-| |granitehybrid\_Q8\_0|35.47 GiB|32.21 B|72.56 ± 0.79|4.26 ± 0.49| |granitehybrid\_Q6\_K|25.95 GiB|32.21 B|54.77 ± 1.87|5.51 ± 0.49| |granitehybrid\_Q5\_K - Medium|21.53 GiB|32.21 B|57.90 ± 4.46|6.36 ± 0.02| |granitehybrid\_Q4\_K - Medium|17.49 GiB|32.21 B|57.26 ± 2.02|7.21 ± 0.01| iGPU has flexibility of using system RAM as VRAM and can load larger models 32B and take advantage of using active parameters 9B to get decent speed from bigger parameter models. Looks like using Q8\_K\_XL has prompt processing benefit and Q5\_K\_XL for balance of speed on both sides of inference. Post here if you have an iGPU results to compare.
2025-10-07T17:15:53
https://www.reddit.com/r/LocalLLaMA/comments/1o0kwx3/granite_40_on_igpu_amd_ryzen_6800h_llamacpp/
tabletuser_blogspot
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0kwx3
false
null
t3_1o0kwx3
/r/LocalLLaMA/comments/1o0kwx3/granite_40_on_igpu_amd_ryzen_6800h_llamacpp/
false
false
self
27
{'enabled': False, 'images': [{'id': '3yd0Uq_pqWU-_yySKhanlCsTWZJSZGtIB11WYgR7EfM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/3yd0Uq_pqWU-_yySKhanlCsTWZJSZGtIB11WYgR7EfM.png?width=108&crop=smart&auto=webp&s=0bb7c0df7a0bc9f0b6bc93bfa7e08ee00f63fab5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/3yd0Uq_pqWU-_yySKhanlCsTWZJSZGtIB11WYgR7EfM.png?width=216&crop=smart&auto=webp&s=5647c388f1f7d5424085086e8ad45cb9c632c33d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/3yd0Uq_pqWU-_yySKhanlCsTWZJSZGtIB11WYgR7EfM.png?width=320&crop=smart&auto=webp&s=247d7ddeb7e72a5761f3cce8325fb12a0349c6d4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/3yd0Uq_pqWU-_yySKhanlCsTWZJSZGtIB11WYgR7EfM.png?width=640&crop=smart&auto=webp&s=cd3accc5824c5752340fa58963fe5f7eaf781259', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/3yd0Uq_pqWU-_yySKhanlCsTWZJSZGtIB11WYgR7EfM.png?width=960&crop=smart&auto=webp&s=aca01030eea7c9eabce72d77eeff61b3282c9d9f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/3yd0Uq_pqWU-_yySKhanlCsTWZJSZGtIB11WYgR7EfM.png?width=1080&crop=smart&auto=webp&s=e3d2f25bc44c29021bf9e8dc2a68e00f01987ab0', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/3yd0Uq_pqWU-_yySKhanlCsTWZJSZGtIB11WYgR7EfM.png?auto=webp&s=76a6c558f951ac07839c50bbe4c3d0da30714a45', 'width': 1200}, 'variants': {}}]}
Best ways to run Qwen3 on CPU with 16 GB RAM
8
Are there any further techniques beyond quantization?
2025-10-07T16:58:23
https://www.reddit.com/r/LocalLLaMA/comments/1o0kf1o/best_ways_to_run_qwen3_on_cpu_with_16_gb_ram/
Remarkable_Story_310
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0kf1o
false
null
t3_1o0kf1o
/r/LocalLLaMA/comments/1o0kf1o/best_ways_to_run_qwen3_on_cpu_with_16_gb_ram/
false
false
self
8
null
Ryzen 395+ with 96GB on sale for $1728
54
Been watching mini PCs and this is $600 off
2025-10-07T16:52:21
https://www.amazon.com/GMKtec_ryzen_ai_max_395_mini_pc/dp/B0FLDJBS79?crid=1Q219TSIMC6E5&dib=eyJ2IjoiMSJ9.I2mCpbFC-I1kX_zpKzZjAVsC3UFEmWuAGsNmMRg4JjW-m65FEqL2voOm1dEASZH9A7BoEcVRQFBh4B8XK42Pd3cmsD6a0J3Puup9S6jg7SKf9mcVXlN4AxOZU88HfLTVjyD2uDnYWQg1dXLvo8EC33ImbdTdRO6_DV1m7kDC5Xo.DOidGG2jBnTmZefIX55ouM3iX6383KiTOjHDcWIesxo&dib_tag=se&keywords=395%2B%2Bmax&qid=1759855684&sprefix=395%2B%2Caps%2C130&sr=8-5&th=1
thebadslime
amazon.com
1970-01-01T00:00:00
0
{}
1o0k8vf
false
null
t3_1o0k8vf
/r/LocalLLaMA/comments/1o0k8vf/ryzen_395_with_96gb_on_sale_sale_for_1728/
false
false
default
54
{'enabled': False, 'images': [{'id': 'wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI.png?width=108&crop=smart&auto=webp&s=c7ef9713fb4fbf51d0d7da30fb558f95324a395b', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI.png?width=216&crop=smart&auto=webp&s=70f4ef0366eafa569960666b4537977954dc4da4', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI.png?width=320&crop=smart&auto=webp&s=e88e6f574ea2b6abf3644be5140a1ed8ad6d613c', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI.png?width=640&crop=smart&auto=webp&s=290ace7209dd3df0a237ec970a6a8b1662d523e1', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI.png?width=960&crop=smart&auto=webp&s=421952297faebb04d1038184216c053ab1f0bb56', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI.png?width=1080&crop=smart&auto=webp&s=2e3704dd3e397c6dbebe004c6cce33e8cd82d316', 'width': 1080}], 'source': {'height': 1000, 'url': 'https://external-preview.redd.it/wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI.png?auto=webp&s=8cdb17f0919f23f3fc3c0bd9dac21cd40118adda', 'width': 1910}, 'variants': {}}]}
Thinking about switching from ChatGPT Premium to Ollama. Is a Tesla P40 worth it?
0
Hey folks, I’ve been a ChatGPT Premium user for quite a while now. I use it mostly for IT-related questions, occasional image generation, and *a lot* of programming help, debugging, code completion, and even solving full programming assignments. At work, I’m using Claude integrated into Copilot, which honestly works really, *really* well. But for personal reasons (mainly cost and privacy), I’m planning to move away from cloud-based AI tools and switch to **Ollama** for local use. I’ve already played around with it a bit on my PC (RTX 3070, 8GB VRAM). The experience has been "*okay"* so far, some tasks work surprisingly well, but it definitely hits its limits quickly, especially with more complex or abstract problems that don’t have a clear solution path. That’s why I’m now thinking about upgrading my GPU and adding it to my homelab setup. I’ve been looking at the **NVIDIA Tesla P40**. From what I’ve read, it seems like a decent option for running larger models, and the price/performance ratio looks great, especially if I can find a good deal on eBay. I can’t afford a dual or triple GPU setup, so I’d be running just one card. I’ve also read that with a bit of tuning and scripting, you can get idle power consumption down to around 10–15W, which sounds pretty solid. So here’s my main question: **Do you think a Tesla P40 is capable of replacing something like ChatGPT Premium for coding and general-purpose AI use?** Can I get anywhere close to ChatGPT or Claude-level performance with that kind of hardware? Is it worth the investment if my goal is to switch to a fully local setup? I’m aware it won’t be as fast or as polished as cloud models, but I’m curious how far I can realistically push it. Thanks in advance for your insights!
2025-10-07T16:22:06
https://www.reddit.com/r/LocalLLaMA/comments/1o0jeze/thinking_about_switching_from_chatgpt_premium_to/
xXjojoJoshXx1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0jeze
false
null
t3_1o0jeze
/r/LocalLLaMA/comments/1o0jeze/thinking_about_switching_from_chatgpt_premium_to/
false
false
self
0
null
Qwen3-Omni
2
I was watching a Fireship video on Qwen and the models all look great, especially Qwen3-Omni. I was wondering whether it could be uncensored and unrestricted like Eric Hartford's Cognitive Computations Dolphin models, which use Mistral & DeepSeek models (Mistral Small 24B). That would truly be incredible, as it would be able to see, hear, talk and write whatever you want.
2025-10-07T16:20:57
https://www.reddit.com/r/LocalLLaMA/comments/1o0jdvl/qwen3omni/
Time-Teaching1926
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0jdvl
false
null
t3_1o0jdvl
/r/LocalLLaMA/comments/1o0jdvl/qwen3omni/
false
false
self
2
null
Grok heavy
3
Does anyone know of an open source project that emulates the grok heavy process with other models using openai compatible endpoints? Something similar to this: [https://github.com/Leezekun/MassGen](https://github.com/Leezekun/MassGen)
2025-10-07T15:58:47
https://www.reddit.com/r/LocalLLaMA/comments/1o0is58/grok_heavy/
Objective-Good310
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0is58
false
null
t3_1o0is58
/r/LocalLLaMA/comments/1o0is58/grok_heavy/
false
false
self
3
{'enabled': False, 'images': [{'id': 'X0tGkHPSmjF4MS-pdUbmlpkO3V4RCtMErYdMR-GCe9I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/X0tGkHPSmjF4MS-pdUbmlpkO3V4RCtMErYdMR-GCe9I.png?width=108&crop=smart&auto=webp&s=8c9bac233938b5cd0b9811c6fe735f86681a5df3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/X0tGkHPSmjF4MS-pdUbmlpkO3V4RCtMErYdMR-GCe9I.png?width=216&crop=smart&auto=webp&s=0579d67ef3df1bccf59105956175272a561e7ccf', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/X0tGkHPSmjF4MS-pdUbmlpkO3V4RCtMErYdMR-GCe9I.png?width=320&crop=smart&auto=webp&s=d5e5bc6b19f604a1fc08f260608f66aacbd74f11', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/X0tGkHPSmjF4MS-pdUbmlpkO3V4RCtMErYdMR-GCe9I.png?width=640&crop=smart&auto=webp&s=043caf0d03becc0c64907759a9cc8d59e9e66849', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/X0tGkHPSmjF4MS-pdUbmlpkO3V4RCtMErYdMR-GCe9I.png?width=960&crop=smart&auto=webp&s=931436965742dbf563762485ae6ce419d248d634', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/X0tGkHPSmjF4MS-pdUbmlpkO3V4RCtMErYdMR-GCe9I.png?width=1080&crop=smart&auto=webp&s=125159a3a5a2d95bb7c612ffc85f973d236d339d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/X0tGkHPSmjF4MS-pdUbmlpkO3V4RCtMErYdMR-GCe9I.png?auto=webp&s=aa9d69dbd162a456870143cff11b1520c2b861f7', 'width': 1200}, 'variants': {}}]}
Glm 4.6 air is coming
834
2025-10-07T15:46:04
https://i.redd.it/nmwtp72fnptf1.png
Namra_7
i.redd.it
1970-01-01T00:00:00
0
{}
1o0ifyr
false
null
t3_1o0ifyr
/r/LocalLLaMA/comments/1o0ifyr/glm_46_air_is_coming/
false
false
default
834
{'enabled': True, 'images': [{'id': 'nmwtp72fnptf1', 'resolutions': [{'height': 142, 'url': 'https://preview.redd.it/nmwtp72fnptf1.png?width=108&crop=smart&auto=webp&s=1cf3d657cbae7c77a8849d1030044e279dd6a068', 'width': 108}, {'height': 284, 'url': 'https://preview.redd.it/nmwtp72fnptf1.png?width=216&crop=smart&auto=webp&s=84d33a937d5b84176dc3e4e776c28b0f298d9d4b', 'width': 216}, {'height': 420, 'url': 'https://preview.redd.it/nmwtp72fnptf1.png?width=320&crop=smart&auto=webp&s=15090e96fcc95f9494dbf7a9b3a2001f3b34f3e0', 'width': 320}, {'height': 841, 'url': 'https://preview.redd.it/nmwtp72fnptf1.png?width=640&crop=smart&auto=webp&s=78e29ea88c42c50216e45dc228bec7e885394f0c', 'width': 640}, {'height': 1262, 'url': 'https://preview.redd.it/nmwtp72fnptf1.png?width=960&crop=smart&auto=webp&s=5a95ee57defe0a19a099b3d8c2c8de548bd506f9', 'width': 960}, {'height': 1420, 'url': 'https://preview.redd.it/nmwtp72fnptf1.png?width=1080&crop=smart&auto=webp&s=dc2a6e8ce4d83185420a963815b4fc9333383f33', 'width': 1080}], 'source': {'height': 1420, 'url': 'https://preview.redd.it/nmwtp72fnptf1.png?auto=webp&s=d59e534c4882aab841cee3fabaccffb6493d27df', 'width': 1080}, 'variants': {}}]}
OOTL: What is the current state of GGUF/llama.cpp vs MLX on Mac?
1
Subject is self-explanatory, but I've been out of the loop for about 6 months. My latest rig build is paltry compared to the general chad here: a 32GB 5090 with 96GB of RAM, but I only have models sized for my M3 Max MacBook Pro with 36GB of RAM. How can I get this little rig-pig PC onto the llama.cpp train for better-performing inference?
2025-10-07T15:36:08
https://www.reddit.com/r/LocalLLaMA/comments/1o0i66q/ootl_how_is_the_current_state_of_ggufcpp_vs_mlx/
Front-Plankton-9115
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0i66q
false
null
t3_1o0i66q
/r/LocalLLaMA/comments/1o0i66q/ootl_how_is_the_current_state_of_ggufcpp_vs_mlx/
false
false
self
1
null
Will DDR6 be the answer to LLM?
146
Bandwidth roughly doubles every generation of system memory, and we need that for LLMs. DDR6 should easily reach 10000+ MT/s, and dual-channel and quad-channel setups would boost that even more. Maybe we casual AI users will be able to run large models around 2028, like DeepSeek-sized full models at a chat-able speed. And workstation GPUs will only be worth buying for commercial use, because they serve more than one user at a time.
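Rough numbers behind that hope: token generation is bounded by roughly memory bandwidth divided by the bytes read per token (the active weights). The figures below are illustrative assumptions, not spec-sheet values:

```python
# Peak bandwidth from transfer rate and channels, and the decode ceiling it implies.
def bandwidth_gbps(mt_per_s: float, channels: int, bus_bits: int = 64) -> float:
    """Theoretical peak bandwidth in GB/s for a given transfer rate and channel count."""
    return mt_per_s * 1e6 * (bus_bits / 8) * channels / 1e9

def max_tokens_per_s(bw_gbps: float, active_params_b: float, bytes_per_param: float) -> float:
    """Upper bound on decode speed: every token reads all active weights once."""
    return bw_gbps / (active_params_b * bytes_per_param)

print(f"DDR5 dual channel (6000 MT/s):  {bandwidth_gbps(6000, 2):.0f} GB/s")
print(f"DDR6 dual channel (12000 MT/s): {bandwidth_gbps(12000, 2):.0f} GB/s")
# DeepSeek-style MoE: ~37B active parameters at 4-bit (~0.5 bytes/param)
print(f"Rough decode ceiling: ~{max_tokens_per_s(bandwidth_gbps(12000, 2), 37, 0.5):.0f} tok/s")
```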
2025-10-07T15:34:17
https://www.reddit.com/r/LocalLLaMA/comments/1o0i4fz/will_ddr6_be_the_answer_to_llm/
fungnoth
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0i4fz
false
null
t3_1o0i4fz
/r/LocalLLaMA/comments/1o0i4fz/will_ddr6_be_the_answer_to_llm/
false
false
self
146
null
Building Auditable AI Systems for Healthcare Compliance: Why YAML Orchestration Matters
0
# Building Auditable AI Systems for Healthcare Compliance: Why YAML Orchestration Matters I've been working on AI systems that need full audit trails, and I wanted to share an approach that's been working well for regulated environments. ## The Problem In healthcare (and finance/legal), you can't just throw LangChain at a problem and hope for the best. When a system makes a decision that affects patient care, you need to answer: 1. **What data was used?** (memory retrieval trace) 2. **What reasoning process occurred?** (agent execution steps) 3. **Why this conclusion?** (decision logic) 4. **When did this happen?** (temporal audit trail) Most orchestration frameworks treat this as an afterthought. You end up writing custom logging, building observability layers, and still struggling to explain what happened three weeks ago. ## A Different Approach I've been using [OrKa-Reasoning](https://github.com/marcosomma/orka-reasoning), which takes a YAML-first approach. Here's why this matters for regulated use cases: **Declarative workflows = auditable by design** - Every agent, every decision point, every memory operation is declared upfront - No hidden logic buried in Python code - Compliance teams can review workflows without being developers **Built-in memory with decay semantics** - Automatic separation of short-term and long-term memory - Configurable retention policies per namespace - Vector + hybrid search with similarity thresholds **Structured tracing without instrumentation** - Every agent execution is logged with metadata - Loop iterations tracked with scores and thresholds - GraphScout provides decision transparency for routing ## Real Example: Clinical Decision Support Here's a workflow for analyzing patient symptoms with full audit requirements: ```yaml orchestrator: id: clinical-decision-support strategy: sequential memory_preset: "episodic" agents: - patient_history_retrieval - symptom_analysis_loop - graphscout_specialist_router agents: # Retrieve relevant patient history with audit trail - id: patient_history_retrieval type: memory memory_preset: "episodic" namespace: patient_records metadata: retrieval_timestamp: "{{ timestamp }}" query_type: "clinical_history" prompt: | Patient context for: {{ input }} Retrieve relevant medical history, prior diagnoses, and treatment responses. # Iterative analysis with quality gates - id: symptom_analysis_loop type: loop max_loops: 3 score_threshold: 0.85 # High bar for clinical confidence score_extraction_config: strategies: - type: pattern patterns: - "CONFIDENCE_SCORE:\\s*([0-9.]+)" - "ANALYSIS_COMPLETENESS:\\s*([0-9.]+)" past_loops_metadata: analysis_round: "{{ get_loop_number() }}" confidence: "{{ score }}" timestamp: "{{ timestamp }}" internal_workflow: orchestrator: id: symptom-analysis-internal strategy: sequential agents: - differential_diagnosis - risk_assessment - evidence_checker - confidence_moderator - audit_logger agents: - id: differential_diagnosis type: local_llm model: llama3.2 provider: ollama temperature: 0.1 # Conservative for medical prompt: | Patient History: {{ get_agent_response('patient_history_retrieval') }} Symptoms: {{ get_input() }} Provide differential diagnosis with evidence from patient history. 
Format: - Condition: [name] - Probability: [high/medium/low] - Supporting Evidence: [specific patient data] - Contradicting Evidence: [specific patient data] - id: risk_assessment type: local_llm model: llama3.2 provider: ollama temperature: 0.1 prompt: | Differential: {{ get_agent_response('differential_diagnosis') }} Assess: 1. Urgency level (emergency/urgent/routine) 2. Risk factors from patient history 3. Required immediate actions 4. Red flags requiring escalation - id: evidence_checker type: search prompt: | Clinical guidelines for: {{ get_agent_response('differential_diagnosis') | truncate(100) }} Verify against current medical literature and guidelines. - id: confidence_moderator type: local_llm model: llama3.2 provider: ollama temperature: 0.05 prompt: | Assessment: {{ get_agent_response('differential_diagnosis') }} Risk: {{ get_agent_response('risk_assessment') }} Guidelines: {{ get_agent_response('evidence_checker') }} Rate analysis completeness (0.0-1.0): CONFIDENCE_SCORE: [score] ANALYSIS_COMPLETENESS: [score] GAPS: [what needs more analysis if below {{ get_score_threshold() }}] RECOMMENDATION: [proceed or iterate] - id: audit_logger type: memory memory_preset: "clinical" config: operation: write vector: true namespace: audit_trail decay: enabled: true short_term_hours: 720 # 30 days minimum long_term_hours: 26280 # 3 years for compliance prompt: | Clinical Analysis - Round {{ get_loop_number() }} Timestamp: {{ timestamp }} Patient Query: {{ get_input() }} Diagnosis: {{ get_agent_response('differential_diagnosis') | truncate(200) }} Risk: {{ get_agent_response('risk_assessment') | truncate(200) }} Confidence: {{ get_agent_response('confidence_moderator') }} # Intelligent routing to specialist recommendation - id: graphscout_specialist_router type: graph-scout params: k_beam: 3 max_depth: 2 - id: emergency_protocol type: local_llm model: llama3.2 provider: ollama temperature: 0.1 prompt: | EMERGENCY PROTOCOL ACTIVATION Analysis: {{ get_agent_response('symptom_analysis_loop') }} Provide immediate action steps, escalation contacts, and documentation requirements. - id: specialist_referral type: local_llm model: llama3.2 provider: ollama prompt: | SPECIALIST REFERRAL Analysis: {{ get_agent_response('symptom_analysis_loop') }} Recommend appropriate specialist(s), referral priority, and required documentation. - id: primary_care_management type: local_llm model: llama3.2 provider: ollama temperature: 0.1 prompt: | PRIMARY CARE MANAGEMENT PLAN Analysis: {{ get_agent_response('symptom_analysis_loop') }} Provide treatment plan, monitoring schedule, and patient education points. - id: monitoring_protocol type: local_llm model: llama3.2 provider: ollama temperature: 0.1 prompt: | MONITORING PROTOCOL Analysis: {{ get_agent_response('symptom_analysis_loop') }} Define monitoring parameters, follow-up schedule, and escalation triggers. ``` ## What This Enables **For Compliance Teams:** - Review workflows in YAML without reading code - Audit trails automatically generated - Memory retention policies explicit and configurable - Every decision point documented **For Developers:** - No custom logging infrastructure needed - Memory operations standardized - Loop logic with quality gates built-in - GraphScout makes routing decisions transparent **For Clinical Users:** - Understand why system made recommendations - See what patient history was used - Track confidence scores across iterations - Clear escalation pathways ## Why Not LangChain/CrewAI? 
**LangChain**: Great for prototyping, but audit trails require significant custom work. Chains are code-based, making compliance review harder. Memory is external and manual. **CrewAI**: Agent-based model is powerful but less transparent for compliance. Role-based agents don't map cleanly to audit requirements. Execution flow harder to predict and document. **OrKa**: Declarative workflows are inherently auditable. Built-in memory with retention policies. Loop execution with quality gates. GraphScout provides decision transparency. ## Trade-offs **OrKa isn't better for everything:** - Smaller ecosystem (fewer integrations) - YAML can get verbose for complex workflows - Newer project (less battle-tested) - Requires Redis for memory **But for regulated industries:** - Audit requirements are first-class, not bolted on - Explainability by design - Compliance review without deep technical knowledge - Memory retention policies explicit ## Installation ```bash pip install orka-reasoning orka-start # Starts Redis orka run clinical-decision-support.yml "patient presents with..." ``` ## Repository Full examples and docs: https://github.com/marcosomma/orka-reasoning If you're building AI for healthcare, finance, or legal—where "trust me, it works" isn't good enough—this approach might be worth exploring. Happy to answer questions about implementation or specific use cases.
2025-10-07T15:33:09
https://www.reddit.com/r/LocalLLaMA/comments/1o0i3bm/building_auditable_ai_systems_for_healthcare/
marcosomma-OrKA
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0i3bm
false
null
t3_1o0i3bm
/r/LocalLLaMA/comments/1o0i3bm/building_auditable_ai_systems_for_healthcare/
false
false
self
0
{'enabled': False, 'images': [{'id': 'OCx8tcjIAvWSBfv0btoedL_sN5Z_6qwF1gsamtGXjtw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OCx8tcjIAvWSBfv0btoedL_sN5Z_6qwF1gsamtGXjtw.png?width=108&crop=smart&auto=webp&s=06e2123d96b5f15756945a36dc9856b3a1bfd30b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OCx8tcjIAvWSBfv0btoedL_sN5Z_6qwF1gsamtGXjtw.png?width=216&crop=smart&auto=webp&s=6246e6bdb4b92a243875eef453c8ac430e4d9b6b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OCx8tcjIAvWSBfv0btoedL_sN5Z_6qwF1gsamtGXjtw.png?width=320&crop=smart&auto=webp&s=f496dfa1acd260797b58babf3d9f57de1368bfa1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OCx8tcjIAvWSBfv0btoedL_sN5Z_6qwF1gsamtGXjtw.png?width=640&crop=smart&auto=webp&s=b7e9fa80d518c42b06cfdfb556a9ddbf020aa8e2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OCx8tcjIAvWSBfv0btoedL_sN5Z_6qwF1gsamtGXjtw.png?width=960&crop=smart&auto=webp&s=1e7e26f54cbc6d1fa5ee3f2fea328e6be4b7508b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OCx8tcjIAvWSBfv0btoedL_sN5Z_6qwF1gsamtGXjtw.png?width=1080&crop=smart&auto=webp&s=a5a9ce59c4c1a3b531bb933dbe3734c39dec8f8e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/OCx8tcjIAvWSBfv0btoedL_sN5Z_6qwF1gsamtGXjtw.png?auto=webp&s=806e429fe03506cda074738c9d20d008a57bb456', 'width': 1200}, 'variants': {}}]}
PipesHub Explainable AI now supports image citations along with text
1
We added explainability to our Agentic RAG pipeline a few months back. Our new release can cite not only text but also images and charts. The AI now shows **pinpointed citations** down to the **exact paragraph, table row or cell, or image** it used to generate its answer. It doesn’t just name the source file but also **highlights the exact text** and lets you **jump directly to that part of the document**. This works across formats: PDFs, Excel, CSV, Word, PowerPoint, Markdown, and more. It makes AI answers easy to **trust and verify**, especially in messy or lengthy enterprise files. You also get insight into the **reasoning** behind the answer. It’s fully open-source: [https://github.com/pipeshub-ai/pipeshub-ai](https://github.com/pipeshub-ai/pipeshub-ai) Would love to hear your thoughts or feedback! I am also planning to write a detailed technical blog next week explaining how exactly we built this system and why everyone needs to stop converting full documents directly to markdown. https://preview.redd.it/lput8pemkptf1.png?width=3406&format=png&auto=webp&s=6f037cfa8fa9f0da04eafeec9af5b131a5fa6896
2025-10-07T15:30:38
https://www.reddit.com/r/LocalLLaMA/comments/1o0i0u9/pipeshub_explainable_ai_now_supports_image/
Effective-Ad2060
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0i0u9
false
null
t3_1o0i0u9
/r/LocalLLaMA/comments/1o0i0u9/pipeshub_explainable_ai_now_supports_image/
false
false
https://b.thumbs.redditm…R3S_V1RTHZis.jpg
1
{'enabled': False, 'images': [{'id': 'hO1BK6bS_4mNYaGVC084UtT7OL1PkuHl2mbg6ueHrQM', 'resolutions': [{'height': 96, 'url': 'https://external-preview.redd.it/hO1BK6bS_4mNYaGVC084UtT7OL1PkuHl2mbg6ueHrQM.jpeg?width=108&crop=smart&auto=webp&s=63a546b8ac654187ee9b0d14224e852ef0c3d692', 'width': 108}], 'source': {'height': 99, 'url': 'https://external-preview.redd.it/hO1BK6bS_4mNYaGVC084UtT7OL1PkuHl2mbg6ueHrQM.jpeg?auto=webp&s=47e8987d3d53065768b4c796fa5af51c7a36d470', 'width': 111}, 'variants': {}}]}
3090 + 128GB DDR4 worth it?
3
I have an RTX 3090 with 16GB of DDR4. I was wondering if I should upgrade to 128GB of DDR4? Or is it not worthwhile and I need to get a DDR5 motherboard + RAM? Will I see a massive difference between them? What models will 128GB RAM open up for me if I do the upgrade? Thanks!
2025-10-07T15:19:17
https://www.reddit.com/r/LocalLLaMA/comments/1o0hpfw/3090_128gb_ddr4_worth_it/
randomsolutions1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0hpfw
false
null
t3_1o0hpfw
/r/LocalLLaMA/comments/1o0hpfw/3090_128gb_ddr4_worth_it/
false
false
self
3
null
Least politically biased LLM?
0
Currently, what is the least politically biased, most performant LLM available? I want to have an honest conversation about the Middle East without guardrails or it imposing its opinions. I presume this would be an open source model? (Maybe Chinese?)
2025-10-07T15:02:57
https://www.reddit.com/r/LocalLLaMA/comments/1o0h98i/least_politically_biased_llm/
DelPrive235
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0h98i
false
null
t3_1o0h98i
/r/LocalLLaMA/comments/1o0h98i/least_politically_biased_llm/
false
false
self
0
null
Which is the best AI API for coding, and which is the best open-source LLM for coding?
0
Hey everyone, I’ve been exploring different AI tools for coding — mainly for code generation, debugging, and explaining code. There are so many APIs and open-source LLMs out there now (like Claude, GPT, Mistral, Gemma, CodeLlama, etc.), and I’m trying to figure out which ones actually perform best for real-world coding tasks. So I’d love to hear from you: Which AI API do you think is the most powerful or reliable for coding? (accuracy, speed, and developer support) Which open-source LLM works best for local or self-hosted setups — especially for writing and understanding code? Looking forward to your suggestions! 🙌
2025-10-07T14:57:03
https://www.reddit.com/r/LocalLLaMA/comments/1o0h37j/which_is_the_best_ai_api_for_coding_and_which_is/
ThickVirus2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0h37j
false
null
t3_1o0h37j
/r/LocalLLaMA/comments/1o0h37j/which_is_the_best_ai_api_for_coding_and_which_is/
false
false
self
0
null
Can we run qwen-coder-30b in gcollab and use it as an api?
2
Hey everyone, I want to run the Qwen Code CLI on my PC. I know they also have a generous limit of 2000 requests per day, but still, I have always had this thought: what if I could run it 24x7 without limits? As I don't have a decent graphics card I can't run LLM models; even 4B models run very slowly. So I thought I could use Google Colab and use it as an API in any vibe-coding agent. Is it possible?
2025-10-07T14:49:12
https://www.reddit.com/r/LocalLLaMA/comments/1o0gvra/can_we_run_qwencoder30b_in_gcollab_and_use_it_as/
WrongdoerAway7602
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0gvra
false
null
t3_1o0gvra
/r/LocalLLaMA/comments/1o0gvra/can_we_run_qwencoder30b_in_gcollab_and_use_it_as/
false
false
self
2
null
Introducing llms.py – OSS Local OpenAI Chat UI, Client and Server
1
[removed]
2025-10-07T14:39:00
https://servicestack.net/posts/llms-py-ui
mythz
servicestack.net
1970-01-01T00:00:00
0
{}
1o0glzp
false
null
t3_1o0glzp
/r/LocalLLaMA/comments/1o0glzp/introducing_llmspy_oss_local_openai_chat_ui/
false
false
default
1
null
What are some good frontends to use on an android phone? (native app only and preferably FOSS)
9
I'm tired of PWAs; they're buggy, and you can just feel when something was designed to be used with a mouse and keyboard. I want something I can use with both local models and the OpenRouter API.
2025-10-07T14:25:53
https://www.reddit.com/r/LocalLLaMA/comments/1o0g9i1/what_are_some_good_frontends_to_use_on_an_android/
Striking_Wedding_461
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0g9i1
false
null
t3_1o0g9i1
/r/LocalLLaMA/comments/1o0g9i1/what_are_some_good_frontends_to_use_on_an_android/
false
false
self
9
null
Uncensored Cloud LLM
0
I’ve searched a lot but couldn’t find one. Could someone share if they actually know a good one?
2025-10-07T14:12:09
https://www.reddit.com/r/LocalLLaMA/comments/1o0fwck/uncensored_cloud_llm/
kyre_ee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0fwck
false
null
t3_1o0fwck
/r/LocalLLaMA/comments/1o0fwck/uncensored_cloud_llm/
false
false
self
0
null
MCP server to manage a GMAIL account
0
Hi everyone, I'm looking for a simple way to automate a Gmail account with LM Studio. I receive a ton of messages asking for quotations, and I need a simple way to automatically reply with information on my products and send me a report of the replied mails. I used [Make.com](http://Make.com) but easily ran out of credits for the amount of mail I receive. Is there a simple tool I can use with LM Studio to do this? I'm not particularly expert, so I would need something very easy to configure and install on a decent machine (9800X3D, 5090). Any suggestions?
2025-10-07T14:06:42
https://www.reddit.com/r/LocalLLaMA/comments/1o0fr0w/mcp_server_to_manage_a_gmail_account/
FinnedSgang
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0fr0w
false
null
t3_1o0fr0w
/r/LocalLLaMA/comments/1o0fr0w/mcp_server_to_manage_a_gmail_account/
false
false
self
0
null
Best Models for Summarizing a lot of Content?
1
Most posts about this topic seem quite a bit dated, and since I'm not really on top of the news, I thought this could be useful to others as well. I have an absolute sh\*t load of study material I have to chew through; the problem is the material isn't exactly well structured and is very repetitive. Is there a local model that I can feed a template for this purpose, preferably on the smaller side of say 7B? Maybe slightly bigger is fine too. Or should I stick to one of the bigger online hosted variants for this?
2025-10-07T14:00:53
https://www.reddit.com/r/LocalLLaMA/comments/1o0fl7i/best_models_for_summarizing_a_lot_of_content/
TheCatDaddy69
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0fl7i
false
null
t3_1o0fl7i
/r/LocalLLaMA/comments/1o0fl7i/best_models_for_summarizing_a_lot_of_content/
false
false
self
1
null
Top performing models across 4 professions covered by APEX
9
2025-10-07T13:59:21
https://i.redd.it/tt20ohtd4ptf1.jpeg
RaselMahadi
i.redd.it
1970-01-01T00:00:00
0
{}
1o0fjpa
false
null
t3_1o0fjpa
/r/LocalLLaMA/comments/1o0fjpa/top_performing_models_across_4_professions/
false
false
default
9
{'enabled': True, 'images': [{'id': 'tt20ohtd4ptf1', 'resolutions': [{'height': 62, 'url': 'https://preview.redd.it/tt20ohtd4ptf1.jpeg?width=108&crop=smart&auto=webp&s=6802ede0d7aa0e0adbf9be5186926f8dc8ceabbd', 'width': 108}, {'height': 124, 'url': 'https://preview.redd.it/tt20ohtd4ptf1.jpeg?width=216&crop=smart&auto=webp&s=313873aff44a9885575c792cc4810f9a9edf2d8c', 'width': 216}, {'height': 183, 'url': 'https://preview.redd.it/tt20ohtd4ptf1.jpeg?width=320&crop=smart&auto=webp&s=5930316bea88304a7fd72af42fe107cf6ce1aba9', 'width': 320}, {'height': 367, 'url': 'https://preview.redd.it/tt20ohtd4ptf1.jpeg?width=640&crop=smart&auto=webp&s=44c9eea4438984a3bfb8f2c4c0087eeb1920ef40', 'width': 640}, {'height': 551, 'url': 'https://preview.redd.it/tt20ohtd4ptf1.jpeg?width=960&crop=smart&auto=webp&s=99b4bcb6228efee83ff03e218763db8fbd7de6b2', 'width': 960}, {'height': 620, 'url': 'https://preview.redd.it/tt20ohtd4ptf1.jpeg?width=1080&crop=smart&auto=webp&s=c9b5b65987bb8764c07b32a23256d443e14d29d4', 'width': 1080}], 'source': {'height': 632, 'url': 'https://preview.redd.it/tt20ohtd4ptf1.jpeg?auto=webp&s=df58a05f580e93463ede5d1667b3da7e1b4806a0', 'width': 1100}, 'variants': {}}]}
Is a Macbook M2 Max 96GB really the best linux laptop to run local LLMs on?
0
Hi! I'm a digital nomad software engineer, and I'm considering buying a new laptop for work, which must be 14" or under. All my workflows are Linux-exclusive, so I won't use macOS or Windows. I'm looking for something that:

* Has decent Linux support, any distro will do
* Can handle 32B and hopefully also 70B models, mainly for local software development
* Image generation (Stable Diffusion) on the go
* Decent efficiency

The only options currently seem to be:

* AMD AI Max+ 395 in an HP 14" laptop with 128GB at around 250GB/s, for a whopping $4000
* AMD AI Max+ 395 in a 13" laptop with 128GB at around 250GB/s, for less than $3000, but it looks like it thermal throttles and the battery sucks
* M2 Max 96GB 14": speed of around 400GB/s (worse than the 16" but not by much), unknown performance on Linux (M3 and M4 do not support Linux), refurbished for around $2500, decent battery even with a full AI workload (at least on macOS)

Am I crazy, or is the M2 Max really the best option out there for my use, and can it actually be used properly as an AI coding assistant? Is there any better non-Apple laptop coming in the near future?
2025-10-07T13:52:21
https://www.reddit.com/r/LocalLLaMA/comments/1o0fddw/is_a_macbook_m2_max_96gb_really_the_best_linux/
esamueb32
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0fddw
false
null
t3_1o0fddw
/r/LocalLLaMA/comments/1o0fddw/is_a_macbook_m2_max_96gb_really_the_best_linux/
false
false
self
0
null
Hi folks, sorry for the self‑promo. I’ve built an open‑source project that could be useful to some of you
235
**TL;DR**: Web dashboard for NVIDIA GPUs with 30+ real-time metrics (utilisation, memory, temps, clocks, power, processes). Live charts over WebSockets, multi-GPU support, and one-command Docker deployment. No agents, minimal setup.

Repo: [https://github.com/psalias2006/gpu-hot](https://github.com/psalias2006/gpu-hot)

**Why I built it**

* Wanted simple, real-time visibility without standing up a full metrics stack.
* Needed clear insight into temps, throttling, clocks, and active processes during GPU work.
* A lightweight dashboard that's easy to run at home or on a workstation.

**What it does**

* Polls nvidia-smi and streams 30+ metrics every ~2s via WebSockets.
* Tracks per-GPU utilization, memory (used/free/total), temps, power draw/limits, fan, clocks, PCIe, P-State, encoder/decoder stats, driver/VBIOS, throttle status.
* Shows active GPU processes with PIDs and memory usage.
* Clean, responsive UI with live historical charts and basic stats (min/max/avg).

**Setup (Docker)**

    git clone https://github.com/psalias2006/gpu-hot
    cd gpu-hot
    docker-compose up --build
    # open http://localhost:1312

Looking for feedback
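Not the repo's actual code, just a minimal sketch of the nvidia-smi polling approach described above (the query fields are standard nvidia-smi ones; the loop structure is illustrative):

```python
# Sketch of the kind of nvidia-smi polling a dashboard like this performs.
import subprocess
import time

FIELDS = "index,name,utilization.gpu,memory.used,memory.total,temperature.gpu,power.draw"

def read_gpus() -> list[dict]:
    """Query nvidia-smi once and return one dict of metrics per GPU."""
    out = subprocess.check_output(
        ["nvidia-smi", f"--query-gpu={FIELDS}", "--format=csv,noheader,nounits"],
        text=True,
    )
    keys = FIELDS.split(",")
    return [dict(zip(keys, (v.strip() for v in line.split(",")))) for line in out.strip().splitlines()]

while True:
    for gpu in read_gpus():
        print(f"GPU {gpu['index']} ({gpu['name']}): {gpu['utilization.gpu']}% util, "
              f"{gpu['memory.used']}/{gpu['memory.total']} MiB, {gpu['temperature.gpu']}°C")
    time.sleep(2)  # the dashboard streams roughly every 2 s over WebSockets
```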
2025-10-07T13:40:33
https://i.redd.it/1tzatvfz0ptf1.png
panos_s_
i.redd.it
1970-01-01T00:00:00
0
{}
1o0f2uf
false
null
t3_1o0f2uf
/r/LocalLLaMA/comments/1o0f2uf/hi_folks_sorry_for_the_selfpromo_ive_built_an/
false
false
default
235
{'enabled': True, 'images': [{'id': '1tzatvfz0ptf1', 'resolutions': [{'height': 85, 'url': 'https://preview.redd.it/1tzatvfz0ptf1.png?width=108&crop=smart&auto=webp&s=77922522840bcd81a8d4c2814aba90edeccb7eb9', 'width': 108}, {'height': 170, 'url': 'https://preview.redd.it/1tzatvfz0ptf1.png?width=216&crop=smart&auto=webp&s=259485c8f4c3cc665670894b3372f3b1cdea7c28', 'width': 216}, {'height': 252, 'url': 'https://preview.redd.it/1tzatvfz0ptf1.png?width=320&crop=smart&auto=webp&s=e0dc383961aa9284d5ccf6bd439e4651b38da47e', 'width': 320}, {'height': 504, 'url': 'https://preview.redd.it/1tzatvfz0ptf1.png?width=640&crop=smart&auto=webp&s=149ffc98b84835693e3aa54c4c554277120de6ea', 'width': 640}, {'height': 757, 'url': 'https://preview.redd.it/1tzatvfz0ptf1.png?width=960&crop=smart&auto=webp&s=936c66d9400797d9a5fdf1446314d263126a8c43', 'width': 960}, {'height': 851, 'url': 'https://preview.redd.it/1tzatvfz0ptf1.png?width=1080&crop=smart&auto=webp&s=d5b504d2e19d33e99ca47bfb694b65693714672f', 'width': 1080}], 'source': {'height': 2098, 'url': 'https://preview.redd.it/1tzatvfz0ptf1.png?auto=webp&s=e2d5f4ce260b6b9e5225d6b2aa44d96710a64703', 'width': 2660}, 'variants': {}}]}
AI bubble might collapse economy ( Sam Altman Warns)
0
I guess I'm not surprised anymore that big companies keep speculating about AI in whatever way suits them. Remember how Sam Altman and Anthropic's CEO told us not long ago that by 2025, all code would be written by AI? Well, the reality seems pretty different. I really enjoy my local LLM models. They're great for simple tasks, but for anything complex, it's usually much faster, and often gives better results, to handle it myself. Look, I'm not saying there aren't a ton of bad, poorly made apps coming out thanks to AI, but there's been some confusion about what AI actually *can* do when it comes to apps. AI can definitely help you make a Minimum Viable Product (MVP), the early version that gets an idea off the ground. It can't build a fully finished, polished, and scalable product that will work for anyone. Long story short, here's a link to Sam Altman's speech: https://futurism.com/artificial-intelligence/sam-altman-warns-ai-industry-implosion
2025-10-07T13:28:15
https://v.redd.it/glqqxyxtyotf1
theundertakeer
v.redd.it
1970-01-01T00:00:00
0
{}
1o0es06
false
{'reddit_video': {'bitrate_kbps': 800, 'dash_url': 'https://v.redd.it/glqqxyxtyotf1/DASHPlaylist.mpd?a=1762435708%2CM2Q0MDNlZDg5YTRlNjY3OTc5ODdjZmNlYWMxMDVmYzVjZTdkM2E3ZmYzY2ZmMWYzZDQyNDEwMDJlYjc0NjQ5ZA%3D%3D&v=1&f=sd', 'duration': 19, 'fallback_url': 'https://v.redd.it/glqqxyxtyotf1/DASH_360.mp4?source=fallback', 'has_audio': True, 'height': 640, 'hls_url': 'https://v.redd.it/glqqxyxtyotf1/HLSPlaylist.m3u8?a=1762435708%2CNjFjNTcyNzdlMzBjYTk5ZjA3N2E1NDNjZmU4ZGVmMTExNWZjN2U3ODllODI0NzE0YWNiMThkN2U0NzFmZmMyNw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/glqqxyxtyotf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 360}}
t3_1o0es06
/r/LocalLLaMA/comments/1o0es06/ai_bubble_might_collapse_economy_sam_altman_warns/
false
false
https://external-preview…8dee1579484b0d2b
0
{'enabled': False, 'images': [{'id': 'cHVza3JxMHV5b3RmMY7pHWP2tRkAlfspaxllBauytrRcK0i-_Z9DzwNb7KfM', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/cHVza3JxMHV5b3RmMY7pHWP2tRkAlfspaxllBauytrRcK0i-_Z9DzwNb7KfM.png?width=108&crop=smart&format=pjpg&auto=webp&s=94b34d8ee0e4f83352ef5922ac2667a295915849', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/cHVza3JxMHV5b3RmMY7pHWP2tRkAlfspaxllBauytrRcK0i-_Z9DzwNb7KfM.png?width=216&crop=smart&format=pjpg&auto=webp&s=22a2a13ffbd70e0b472d9a584aad7c7ca09d54fb', 'width': 216}, {'height': 569, 'url': 'https://external-preview.redd.it/cHVza3JxMHV5b3RmMY7pHWP2tRkAlfspaxllBauytrRcK0i-_Z9DzwNb7KfM.png?width=320&crop=smart&format=pjpg&auto=webp&s=42c19779272244d27851974252dd5d487265edb5', 'width': 320}, {'height': 1138, 'url': 'https://external-preview.redd.it/cHVza3JxMHV5b3RmMY7pHWP2tRkAlfspaxllBauytrRcK0i-_Z9DzwNb7KfM.png?width=640&crop=smart&format=pjpg&auto=webp&s=d2436e0d9277688896bbccd74dd6be2215d0d307', 'width': 640}, {'height': 1707, 'url': 'https://external-preview.redd.it/cHVza3JxMHV5b3RmMY7pHWP2tRkAlfspaxllBauytrRcK0i-_Z9DzwNb7KfM.png?width=960&crop=smart&format=pjpg&auto=webp&s=95c79689e20db73a4aa1e39de383d4ba8045f366', 'width': 960}], 'source': {'height': 1725, 'url': 'https://external-preview.redd.it/cHVza3JxMHV5b3RmMY7pHWP2tRkAlfspaxllBauytrRcK0i-_Z9DzwNb7KfM.png?format=pjpg&auto=webp&s=ff6cd0dc5780e0d97781adc06f5e6ac22167425c', 'width': 970}, 'variants': {}}]}
llm-registry - Track model capabilities, costs, and features across 15+ providers (OpenAI, Anthropic, Google, etc.)
5
Hey everyone! I built **LLM Registry** - a Python tool to manage LLM model metadata across multiple providers. **What it does:** Check a model's capabilities before making API calls, compare costs across providers, and maintain custom configurations. Tracks costs, features (streaming, tools, vision, JSON mode), API parameters, and context limits. **Why it exists:** No unified way to query model capabilities programmatically. You either hardcode this or check docs constantly. Messy when building multi-provider tools, comparing costs, or managing custom models. Includes 70+ verified models (OpenAI, Anthropic, Google, Cohere, Mistral, Meta, xAI, Amazon, Microsoft, DeepSeek, Ollama, etc.). Add your own too. **Built with:** Python 3.13+, Pydantic (data validation), Typer + Rich (CLI) **Quick example:** ```python from llm_registry import CapabilityRegistry registry = CapabilityRegistry() model = registry.get_model("gpt-5") print(f"Cost: ${model.token_costs.input_cost}/M tokens") ``` CLI: ```bash pip install llm-registry llmr list --provider openai llmr get gpt-5 --json ``` **Links:** - GitHub: https://github.com/yamanahlawat/llm-registry - PyPI: https://pypi.org/project/llm-registry/ Would love feedback or contributions! Let me know if you find this useful or have ideas for improvements.
2025-10-07T13:23:26
https://www.reddit.com/r/LocalLLaMA/comments/1o0ensk/llmregistry_track_model_capabilities_costs_and/
yamanahlawat
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0ensk
false
null
t3_1o0ensk
/r/LocalLLaMA/comments/1o0ensk/llmregistry_track_model_capabilities_costs_and/
false
false
self
5
{'enabled': False, 'images': [{'id': 'EBG3aJxeyFriW49RnRUcG2g1AmByqFD_yfI0TlwIgPU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EBG3aJxeyFriW49RnRUcG2g1AmByqFD_yfI0TlwIgPU.png?width=108&crop=smart&auto=webp&s=7b9aced67487cb9c4b4364625563747013bec1d6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/EBG3aJxeyFriW49RnRUcG2g1AmByqFD_yfI0TlwIgPU.png?width=216&crop=smart&auto=webp&s=47d8154813d652699e40748c12ab00c3227eaea3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/EBG3aJxeyFriW49RnRUcG2g1AmByqFD_yfI0TlwIgPU.png?width=320&crop=smart&auto=webp&s=7933e2a94ddbff560c4de2e2e3b3d5b8a2708c8e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/EBG3aJxeyFriW49RnRUcG2g1AmByqFD_yfI0TlwIgPU.png?width=640&crop=smart&auto=webp&s=aade6d0206b4ff0345ffe7b7d02a9d5c785fc2d9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/EBG3aJxeyFriW49RnRUcG2g1AmByqFD_yfI0TlwIgPU.png?width=960&crop=smart&auto=webp&s=b7c828e07253921ebb5a3c947e70e7b7fb02e404', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/EBG3aJxeyFriW49RnRUcG2g1AmByqFD_yfI0TlwIgPU.png?width=1080&crop=smart&auto=webp&s=2cfa58be1f68021338481f91cea05473d8153907', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/EBG3aJxeyFriW49RnRUcG2g1AmByqFD_yfI0TlwIgPU.png?auto=webp&s=7166aacc3fbe48146ca22e38509a74ae97313bbd', 'width': 1200}, 'variants': {}}]}
GitHub Copilot or Claude Code
0
Hello guys I want to purchase pro version of one of these tools. Which one will be better for writing self-healing test cases??? Can anyone please help me choose one of these tools. I need to integrate one of these with playwright MCP so that I can create test plans, test cases and also the system should be able to heal the failed test cases.
2025-10-07T13:10:02
https://www.reddit.com/r/LocalLLaMA/comments/1o0ecep/github_copilot_or_claude_code/
Eastern_Cockroach231
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0ecep
false
null
t3_1o0ecep
/r/LocalLLaMA/comments/1o0ecep/github_copilot_or_claude_code/
false
false
self
0
null
More love for GLM4.6 (evaluation vs. Claude 4.5 for NLP tasks)
86
I have been putting GLM4.6 and Claude 4.5 head to head relentlessly since both were released, and really can't overstate how impressive GLM4.6 is. I'm using both over OpenRouter. My use case: critically evaluating published AI literature, working on my own architecture ideas, summarizing large articles, picking through sprawling conversations for the salient ideas. What's really impressive to me is how good GLM4.6 is at following my instructions to the letter, understanding nuanced ways that I want it to analyze data, and avoiding putting its own spin on things. It's also absolutely fantastic at "thinking in character" (I use persona prompts to process information in parallel from different perspectives - ie. one run to critique literature and probe quality of experimental set-ups, another run to evaluate whether are creative implications that I'm missing, etc.) - this is a model that loves a great system prompt. The ability to shape the way GLM4.6 reasons is really impressive. The draw back in terms of persona prompting is that while GLM4.6 is great at functionally behaving according to the prompt, its tonal style usually drifts. I think this is more a factor of how MoE models process RP-adjacent prompting (I find that dense models are massively better at this) than it is a GLM4.6 problem specifically. GLM4.6 holds on to technical details of what I'm either reading or writing \*spectacularly\* well. It seems even more clear-headed than Claude when it comes to working on implementation ideas, or paying attention to implementation that I'm reading about. Claude Sonnet 4.5 is impressive in terms of its ability to follow a huge list of complicated topics across many turns. Of every LLM I have tried, this truly keeps its head together longer than any I've tried. I have pushed the context window ridiculously far and have only seen one or two minor factual errors. Exact instruction following (ie. system instructions about cognitive processing requirements) gets dulled over time, for sure. And while 4.5 seems far better at persona prompting than 4 did, there's an underlying Claude-ness that just can't be denied. Even without the obnoxious LCR stuff going on in the Anthropic UI (not to mention their shady data mining reversal), Claude can't help but lapse into Professor Dad mode. (Just like Gemini can't really avoid being a former high school valedictorian who got into an Ivy on a lacrosse scholarship while still suffering from imposter syndrome) GLM4.6 doesn't stay coherent quite as long - and there are some weird glitches: lapses into Chinese, confusing its reasoning layer for its response layer, and becoming repetitive in long responses (ie. saying the same thing twice). Still, it remains coherent FAR longer than Gemini 2.5 Pro. What I find really interesting about GLM4.6 is that it seems to have no overtly detectable ideological bias - it's really open, and depending on how you prompt it, can truly look at things from multiple perspectives. DeepSeek and Kimi K2 both have slants (which I happen to dig!) - this might be the most flexible model I have tried, period. If the lapse-into-chinese and repetitive loops could be stamped out a bit, this would be the no-brainer LLM to build with for what I do. (As always, with the caveat that I'm praying daily for a dense Gemma 3 or Gemma 4 model in the 50B+ range)
2025-10-07T12:53:15
https://www.reddit.com/r/LocalLLaMA/comments/1o0dy0y/more_love_for_glm46_evaluation_vs_claude_45_for/
LoveMind_AI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0dy0y
false
null
t3_1o0dy0y
/r/LocalLLaMA/comments/1o0dy0y/more_love_for_glm46_evaluation_vs_claude_45_for/
false
false
self
86
null
Help setting up a RAG Pipeline.
2
Hello, I am an Instrumentation Engineer and I have to deal with a lot of documents in the form of PDF, Word, and large Excel files. I want to create a locally hosted LLM setup which can answer questions based on the documents I feed it. I have watched a lot of videos on how to do it. So far I have inferred that the process is called RAG (Retrieval-Augmented Generation): documents are parsed, chunked, and stored in a vector database, and the LLM answers by looking at that database. For parsing and chunking I have identified docling, which I have installed on a server running Ubuntu 24.04 LTS with dual Xeon CPUs and 178 GB of RAM, no GPU unfortunately. For the web UI, I have installed docling-serve. For the LLM, I have gone with Open WebUI and I have tried Phi-3 and Mistral 7B. I have tried to run docling so that it writes to the same DB as Open WebUI, but so far the answers have been very, very wrong. I even tried to upload documents directly to the model. The answers are better, but that's not what I want to achieve. Do you guys have any insights on what I can do to
1. Feed documents and keep increasing the knowledge of the LLM
2. Verify that knowledge is indeed getting updated
3. Improve the answering accuracy of the LLM
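For reference, here is a minimal sketch of the parse → chunk → embed → retrieve loop described above (the embedding model, chunk size, and sample question are placeholder assumptions; this is not the docling / Open WebUI integration itself):

```python
# Minimal retrieval sketch: chunk parsed text, embed it, retrieve by cosine
# similarity, and build a grounded prompt for the LLM.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model

def chunk(text: str, size: int = 500, overlap: int = 100) -> list[str]:
    """Fixed-size character chunks with overlap; real pipelines chunk by document structure."""
    return [text[i:i + size] for i in range(0, len(text), size - overlap)]

docs = ["...parsed text from docling goes here..."]
chunks = [c for d in docs for c in chunk(d)]
chunk_vecs = embedder.encode(chunks, normalize_embeddings=True)

def retrieve(question: str, k: int = 4) -> list[str]:
    q = embedder.encode([question], normalize_embeddings=True)[0]
    scores = chunk_vecs @ q  # cosine similarity, since the vectors are normalized
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]

question = "What is the calibration range of transmitter FT-101?"  # hypothetical question
context = "\n\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # this is what gets sent to the local LLM
```

When answers come out wrong, checking this retrieval step in isolation usually shows whether the problem is the database or the model: print the retrieved chunks and confirm they actually contain the answer before blaming the LLM.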
2025-10-07T12:46:25
https://www.reddit.com/r/LocalLLaMA/comments/1o0dsd8/help_setting_up_a_rag_pipeline/
vaibhavyagnik
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0dsd8
false
null
t3_1o0dsd8
/r/LocalLLaMA/comments/1o0dsd8/help_setting_up_a_rag_pipeline/
false
false
self
2
null
Need a local model for parsing scanned documents (currently using Qwen 2.5vl 70B Q8) - better options?
6
Hey everyone, I’m looking for recommendations for a local model that can parse scanned documents (images) — ideally extracting JSON values based on questions. Right now I’m running Qwen 2.5 VL 70B Q8 locally, and while it’s decent on OCR'd text, it’s struggling with lists, tables, and mixed layouts. It MUST support Latin script with diacritics (e.g. š, č, ć, ž, etc.).
2025-10-07T11:12:34
https://www.reddit.com/r/LocalLLaMA/comments/1o0bsk8/need_a_local_model_for_parsing_scanned_documents/
Puzzleheaded_Bus7706
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0bsk8
false
null
t3_1o0bsk8
/r/LocalLLaMA/comments/1o0bsk8/need_a_local_model_for_parsing_scanned_documents/
false
false
self
6
null
Help Needed: Local MP3 Translation Workflow (to English) Using Open-Source LLMs
3
I need help setting up a local translation workflow (to English) for MP3 audio using only open-source models. I’ve tried this repo: [https://github.com/kyutai-labs/delayed-streams-modeling](https://github.com/kyutai-labs/delayed-streams-modeling) — it can convert speech to text with timestamps, but it doesn’t seem to support using those timestamps for text-to-audio alignment. Any advice or examples on how to build a working pipeline for this?
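A minimal sketch of the transcription/translation half of such a pipeline, assuming faster-whisper (Whisper can translate directly to English with task='translate' and returns per-segment timestamps):

```python
# Translate an MP3 to English text with timestamps using faster-whisper.
# Model size, device, and compute_type are assumptions; pick what fits your hardware.
from faster_whisper import WhisperModel

model = WhisperModel("large-v3", device="cpu", compute_type="int8")

segments, info = model.transcribe("input.mp3", task="translate")  # any source language -> English
print(f"Detected source language: {info.language}")

for seg in segments:
    # Each segment carries start/end timestamps usable for alignment or subtitles.
    print(f"[{seg.start:7.2f} -> {seg.end:7.2f}] {seg.text.strip()}")
```

The per-segment timestamps could then drive a separate local TTS step for the text-to-audio side, which is the part this sketch does not cover.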
2025-10-07T11:06:19
https://www.reddit.com/r/LocalLLaMA/comments/1o0bo9q/help_needed_local_mp3_translation_workflow_to/
Snoo-6077
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o0bo9q
false
null
t3_1o0bo9q
/r/LocalLLaMA/comments/1o0bo9q/help_needed_local_mp3_translation_workflow_to/
false
false
self
3
{'enabled': False, 'images': [{'id': '7Hig6bJQD8ncmPkWxmmhI2MHwyMvTcQJRxbqkrH1xvU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7Hig6bJQD8ncmPkWxmmhI2MHwyMvTcQJRxbqkrH1xvU.png?width=108&crop=smart&auto=webp&s=9c2fb8cb757827223003409e3b09aa8ce56fcc90', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/7Hig6bJQD8ncmPkWxmmhI2MHwyMvTcQJRxbqkrH1xvU.png?width=216&crop=smart&auto=webp&s=b1208aa7384a8e7052fb2e890b90e05c416e12bb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/7Hig6bJQD8ncmPkWxmmhI2MHwyMvTcQJRxbqkrH1xvU.png?width=320&crop=smart&auto=webp&s=ca20654994ac59da55516263f6f14aae37648a35', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/7Hig6bJQD8ncmPkWxmmhI2MHwyMvTcQJRxbqkrH1xvU.png?width=640&crop=smart&auto=webp&s=7f8c815d52f73e425af9708d153c0bda65272564', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/7Hig6bJQD8ncmPkWxmmhI2MHwyMvTcQJRxbqkrH1xvU.png?width=960&crop=smart&auto=webp&s=db0652c31262a1e0e5f589ed1addbe854b8c6243', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/7Hig6bJQD8ncmPkWxmmhI2MHwyMvTcQJRxbqkrH1xvU.png?width=1080&crop=smart&auto=webp&s=b98ad6994133b9b0b69cd7ec333702e7fe07d8aa', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/7Hig6bJQD8ncmPkWxmmhI2MHwyMvTcQJRxbqkrH1xvU.png?auto=webp&s=00ae57f5306840a02084bbb8828560dcb3e67cf2', 'width': 1200}, 'variants': {}}]}
Human or LLM? - Guess the human-written sentence
16
How many times can you find the human written texts?
2025-10-07T09:34:53
https://ai-or-human.com/
n00bi3s
ai-or-human.com
1970-01-01T00:00:00
0
{}
1o0a437
false
null
t3_1o0a437
/r/LocalLLaMA/comments/1o0a437/human_or_llm_guess_the_humanwritten_sentence/
false
false
default
16
null
I am beginner, need some guidance for my user case
2
I mostly use Perplexity and Google AI Studio for text generation. While they're great at language and how they frame answers, I am not getting what I want. Problems that I face:
1. Accuracy / cross-confirmation: they lie so confidently. I need something which can do cross-confirmation.
2. Safety filters: although I am not interested in explicit or super dangerous content, it kills the thought process when we have to constantly think about framing the prompt properly, and it still somehow refuses to answer on some occasions.
3. Own database: I read some discussions here and elsewhere (but never tried) that there are several ways to fine-tune, do RAG, etc. What I want is the option to upload maybe just 1 PDF as and when required and keep adding more later.
So I was thinking to start experimenting in the cloud, as I have 32GB RAM and an Nvidia 1660 🙈. I got to know that we can do this on RunPod and vast.ai. I know that I might not get everything I need from open source, but whatever I can is good. Kindly help me with tutorials, guidance, a starting point, or a roadmap if possible. Thanks in advance.
2025-10-07T09:12:09
https://www.reddit.com/r/LocalLLaMA/comments/1o09res/i_am_beginner_need_some_guidance_for_my_user_case/
KiranjotSingh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o09res
false
null
t3_1o09res
/r/LocalLLaMA/comments/1o09res/i_am_beginner_need_some_guidance_for_my_user_case/
false
false
self
2
null
Code2Video — generate educational videos via executable code
10
https://i.redd.it/yez52b86nntf1.gif [GitHub](https://github.com/showlab/code2video) Agentic, *code-centric* pipeline that turns a knowledge point into a clear Manim video—prioritizing structure, reproducibility, and teaching quality. **Tri-agent flow:** Planner → Coder → Critic; uses executable Manim to control timing/layout. * **Quick try:** `pip install -r requirements.txt`, add LLM/VLM keys; authors note best results with **Claude-4-Opus** (coding) + **Gemini 2.5** (layout).
2025-10-07T09:01:16
https://www.reddit.com/r/LocalLLaMA/comments/1o09ldg/code2video_generate_educational_videos_via/
freesysck
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o09ldg
false
null
t3_1o09ldg
/r/LocalLLaMA/comments/1o09ldg/code2video_generate_educational_videos_via/
false
false
https://b.thumbs.redditm…UzYctr84X5gI.jpg
10
{'enabled': False, 'images': [{'id': 'UOWl8T4J9irFfHRdqJtGJYpkTYi2lEDU3BuG-j3RMng', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/UOWl8T4J9irFfHRdqJtGJYpkTYi2lEDU3BuG-j3RMng.png?width=108&crop=smart&auto=webp&s=e90caa5cf3253762cb9b8ec939d05c355fbad45d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/UOWl8T4J9irFfHRdqJtGJYpkTYi2lEDU3BuG-j3RMng.png?width=216&crop=smart&auto=webp&s=6087be0fa71471c55629e43654bcf3250e86b460', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/UOWl8T4J9irFfHRdqJtGJYpkTYi2lEDU3BuG-j3RMng.png?width=320&crop=smart&auto=webp&s=befdfa7caf63ec97629112dd66e9191ab47f51a8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/UOWl8T4J9irFfHRdqJtGJYpkTYi2lEDU3BuG-j3RMng.png?width=640&crop=smart&auto=webp&s=2b2858b89f6b366fbb8e1feb65f0bff1a3739bb8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/UOWl8T4J9irFfHRdqJtGJYpkTYi2lEDU3BuG-j3RMng.png?width=960&crop=smart&auto=webp&s=52c2487e233facb4fc3dc4d73a6866b271c3cdee', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/UOWl8T4J9irFfHRdqJtGJYpkTYi2lEDU3BuG-j3RMng.png?width=1080&crop=smart&auto=webp&s=eb333ed6db519f1031383175485aec0dbea9415d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/UOWl8T4J9irFfHRdqJtGJYpkTYi2lEDU3BuG-j3RMng.png?auto=webp&s=330a0e4b45db7e6815efed6c61ace3e7bff6ec41', 'width': 1200}, 'variants': {}}]}
Best LLM router?
1
What’s everybody’s LLM router of choice? More employees are adopting AI use within the company and we’re looking to merge all the separate subscriptions into one, preferably with added features.
2025-10-07T07:55:30
https://www.reddit.com/r/LocalLLaMA/comments/1o08kus/best_llm_router/
NoHumor0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o08kus
false
null
t3_1o08kus
/r/LocalLLaMA/comments/1o08kus/best_llm_router/
false
false
self
1
null
Can anyone help me set-up llama 4 for using it like meta AI
0
.
2025-10-07T07:55:07
https://www.reddit.com/r/LocalLLaMA/comments/1o08knl/can_anyone_help_me_setup_llama_4_for_using_it/
Few_Mouse2140
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o08knl
false
null
t3_1o08knl
/r/LocalLLaMA/comments/1o08knl/can_anyone_help_me_setup_llama_4_for_using_it/
false
false
self
0
null
Improved "time to first token" in LM Studio
38
I was benching some of my models on my M4 Max 128GB a few days ago, see the attached image. Today I noticed an update of the MLX runtime in LM Studio: MLX version info: - mlx-engine==6a8485b - mlx==0.29.1 - mlx-lm==0.28.1 - mlx-vlm==0.3.3 With this, "time to first token" has been improved dramatically. As an example: **Qwen3-Next:80b** ***4 bit MLX*** // 80k context window + 36k token prompt length Time to first token: 47 ➔ 46 seconds :| // 120k context window + 97k token prompt length Time to first token: 406 ➔ 178 seconds **Qwen3-Next:80b** ***6 bit MLX*** // 80k context window + 36k token prompt length Time to first token: 140 ➔ 48 seconds // 120k context window + 97k token prompt length Time to first token: 436 ➔ 190 seconds Can anyone confirm?
2025-10-07T07:51:05
https://i.redd.it/m2ttxrud9ntf1.png
waescher
i.redd.it
1970-01-01T00:00:00
0
{}
1o08igx
false
null
t3_1o08igx
/r/LocalLLaMA/comments/1o08igx/improved_time_to_first_token_in_lm_studio/
false
false
default
38
{'enabled': True, 'images': [{'id': 'm2ttxrud9ntf1', 'resolutions': [{'height': 28, 'url': 'https://preview.redd.it/m2ttxrud9ntf1.png?width=108&crop=smart&auto=webp&s=e05c2b9c58d8d7e1889c8ea9fca9bf2e658d774d', 'width': 108}, {'height': 56, 'url': 'https://preview.redd.it/m2ttxrud9ntf1.png?width=216&crop=smart&auto=webp&s=4f2b61a43f2aaa2f21cf45549f2f8888074d78de', 'width': 216}, {'height': 84, 'url': 'https://preview.redd.it/m2ttxrud9ntf1.png?width=320&crop=smart&auto=webp&s=65f6e493a1b64fd1ace0e1083585071f2413ff21', 'width': 320}, {'height': 168, 'url': 'https://preview.redd.it/m2ttxrud9ntf1.png?width=640&crop=smart&auto=webp&s=ca5e7ac12bb3b1413f0d820c13cc0f0b9bd9d1b5', 'width': 640}, {'height': 253, 'url': 'https://preview.redd.it/m2ttxrud9ntf1.png?width=960&crop=smart&auto=webp&s=04ddbf211ae308ad3b59eff6a233eb5ff4637946', 'width': 960}, {'height': 284, 'url': 'https://preview.redd.it/m2ttxrud9ntf1.png?width=1080&crop=smart&auto=webp&s=d85c143fae7d01a814789232fe2cfd927995369a', 'width': 1080}], 'source': {'height': 717, 'url': 'https://preview.redd.it/m2ttxrud9ntf1.png?auto=webp&s=f4018760a99e953220cfc2ed196044a516f024fa', 'width': 2719}, 'variants': {}}]}
NVIDIA 5060Ti or AMD Radeon RX 9070 XT for running local LLMs?
7
I'm planning to set up a local machine for running LLMs and I'm debating between two GPUs: the **NVIDIA RTX 5060 Ti** and the **AMD Radeon RX 9070 XT**. My budget is tight, so the **RX 9070 XT would be the highest I can go**.
2025-10-07T07:38:11
https://www.reddit.com/r/LocalLLaMA/comments/1o08bcz/nvidia_5060ti_or_amd_radeon_rx_9070_xt_for/
Solid-Language-7106
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o08bcz
false
null
t3_1o08bcz
/r/LocalLLaMA/comments/1o08bcz/nvidia_5060ti_or_amd_radeon_rx_9070_xt_for/
false
false
self
7
null
I have 1500€ of overtime compensation. I need to decide until the month which gpu(s) I want to buy with it. Which one(s) would you choose?
1
I have a motherboard that can only take two GPUs. The one I currently have and game on is a 1070. I use an online GPU for working with CUDA-accelerated agent simulations as well as LLMs and Flux image generation. I only have until the end of the month to spend the money. Which GPUs should I get?
2025-10-07T07:27:34
https://www.reddit.com/r/LocalLLaMA/comments/1o085lk/i_have_1500_of_overtime_compensation_i_need_to/
Master-Eva
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o085lk
false
null
t3_1o085lk
/r/LocalLLaMA/comments/1o085lk/i_have_1500_of_overtime_compensation_i_need_to/
false
false
self
1
null
[PoC] LatentRecall — an experiment in LLM memory that doesn’t store prompts, but computes them on the fly
1
A week ago I shared an idea called Reconstructive Episodic Memory (REM) — treating memory not as storage but as computation. Now I’ve built a small proof-of-concept to see if it could work in practice. 💡 The idea is simple: Normally, a system prompt exists explicitly — as text or token indices — and can be read or extracted. But what if we tried a different approach? write the prompt once, then never store it as text or vector again; let the model “forget” it and keep only a trace in parameter space; when the right key arrives, reconstruct it on the fly inside the computation. In this setup, memory exists only as potential — it does not appear as text or tokens until a query arrives. Between model runs, the prompt does not exist at all: it materializes for milliseconds when reconstructed and passed forward. The PoC was implemented directly against the LLaMA tokenizer to ensure the reconstructed sequence is usable by a real model. 📊 What we explored: deterministic, token-exact reconstruction of a system prompt; narrow attractor basin (~1–2 %) and sensitivity to noise; without the correct key, the prompt never appears in explicit form and cannot be retrieved. 💾 Code, data, and PDF: https://zenodo.org/records/17281794 🧩 This isn’t a finished technology — just an exploratory experiment and an invitation to think. Maybe LLM memory in the future doesn’t have to be something that’s stored at all, but something that comes into being only when it’s needed.
2025-10-07T07:21:32
https://www.reddit.com/r/LocalLLaMA/comments/1o082bj/poc_latentrecall_an_experiment_in_llm_memory_that/
Purple-Bathroom-3326
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o082bj
false
null
t3_1o082bj
/r/LocalLLaMA/comments/1o082bj/poc_latentrecall_an_experiment_in_llm_memory_that/
false
false
self
1
null
mem0 vs supermemory: what's better for adding memory to your llms?
0
if you've ever tried adding memory to your LLMs, both mem0 and supermemory are quite popular. we tested Mem0’s SOTA latency claims for adding memory to your agents and compared it with supermemory: our ai memory layer.  [provider 1: supermemory](https://preview.redd.it/e51flp7y3ntf1.png?width=1080&format=png&auto=webp&s=76ba5d6bd31b5e576a505a5c793b34ce9f97182d) Mean Improvement: 37.4% Median Improvement: 41.4% P95 Improvement: 22.9% P99 Improvement: 43.0% Stability Gain: 39.5% Max Value: 60% Used the LoCoMo dataset. mem0 just blatantly lies in their research papers. Scira AI and a bunch of other enterprises [switched to supermemory](https://supermemory.ai/blog/why-scira-ai-switched/) because of how bad mem0 was. And, we just raised $3M to keep building the best memory layer;) disclaimer: im the devrel guy at supermemory
2025-10-07T07:15:53
https://www.reddit.com/r/LocalLLaMA/comments/1o07z2x/mem0_vs_supermemory_whats_better_for_adding/
writer_coder_06
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o07z2x
false
null
t3_1o07z2x
/r/LocalLLaMA/comments/1o07z2x/mem0_vs_supermemory_whats_better_for_adding/
false
false
https://external-preview…a64f5f4b88370c39
0
{'enabled': False, 'images': [{'id': 'eFZ5kho8Rreu00E1QrYeYIc6P-ltCmnTHYoJkOwI60g', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/eFZ5kho8Rreu00E1QrYeYIc6P-ltCmnTHYoJkOwI60g.png?width=108&crop=smart&auto=webp&s=d39538a97afe7b507a195d4bfdf71cfda14eb424', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/eFZ5kho8Rreu00E1QrYeYIc6P-ltCmnTHYoJkOwI60g.png?width=216&crop=smart&auto=webp&s=cbb886dbda3a0dd9393378a3b0347d8eb0df9ce1', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/eFZ5kho8Rreu00E1QrYeYIc6P-ltCmnTHYoJkOwI60g.png?width=320&crop=smart&auto=webp&s=972b7aebb5c336fb3fbd45ff0a44f5f6adcc8e8d', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/eFZ5kho8Rreu00E1QrYeYIc6P-ltCmnTHYoJkOwI60g.png?width=640&crop=smart&auto=webp&s=e8dd62c41ceb9873f9cbf5ffa45277dd7cfc5c21', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/eFZ5kho8Rreu00E1QrYeYIc6P-ltCmnTHYoJkOwI60g.png?width=960&crop=smart&auto=webp&s=dd6b4892351efc7fb1c2d364754a2931440fba48', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/eFZ5kho8Rreu00E1QrYeYIc6P-ltCmnTHYoJkOwI60g.png?width=1080&crop=smart&auto=webp&s=55037966f7ced1e5e0edd9dad4505cc1de98abda', 'width': 1080}], 'source': {'height': 1125, 'url': 'https://external-preview.redd.it/eFZ5kho8Rreu00E1QrYeYIc6P-ltCmnTHYoJkOwI60g.png?auto=webp&s=07b939226a12963c6b50da48be7165ad800dc5e1', 'width': 2000}, 'variants': {}}]}
Minisforum MS-S1 MAX - Running Local LLMs
1
[removed]
2025-10-07T06:58:11
https://www.reddit.com/r/LocalLLaMA/comments/1o07oue/minisforum_mss1_max_running_local_llms/
jozews321
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o07oue
false
null
t3_1o07oue
/r/LocalLLaMA/comments/1o07oue/minisforum_mss1_max_running_local_llms/
false
false
https://b.thumbs.redditm…QwxxAIfw5FVg.jpg
1
{'enabled': False, 'images': [{'id': '6XA3_ni9zL6J2qINEBFUqpAHZ6OodGOb9R8Li1LpdbY', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/6XA3_ni9zL6J2qINEBFUqpAHZ6OodGOb9R8Li1LpdbY.png?width=108&crop=smart&auto=webp&s=065295b5662239d400662f1180943cc68b9493ce', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/6XA3_ni9zL6J2qINEBFUqpAHZ6OodGOb9R8Li1LpdbY.png?width=216&crop=smart&auto=webp&s=07109170bde7e7340d4229792facabd1210ed74e', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/6XA3_ni9zL6J2qINEBFUqpAHZ6OodGOb9R8Li1LpdbY.png?width=320&crop=smart&auto=webp&s=0c769d74cc215aaa715f9d43b5c996768fb781d5', 'width': 320}], 'source': {'height': 460, 'url': 'https://external-preview.redd.it/6XA3_ni9zL6J2qINEBFUqpAHZ6OodGOb9R8Li1LpdbY.png?auto=webp&s=adc9ee750c886069ea7a6bf4248c5df3fe211db9', 'width': 460}, 'variants': {}}]}
4-bit model used more RAM than bf16 in HF transformers
1
Very weird: I tried to finetune Qwen3-VL-30B-A3B on Google Colab and found that the 4-bit version actually raised a VRAM OOM error, while bf16 runs fine.

    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
        bnb_4bit_use_double_quant=False,
        bnb_4bit_quant_type='nf4',
    )

    # processor = AutoProcessor.from_pretrained("google/gemma-3-270m")
    model = AutoModelForImageTextToText.from_pretrained(
        "Qwen/Qwen3-VL-30B-A3B-Instruct",
        torch_dtype=torch.bfloat16,
        device_map="auto"  # , quantization_config=bnb_config
    )

If I add quantization_config=bnb_config, it goes OOM.
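One thing that might be worth trying (a sketch, not a confirmed fix; the GiB caps are assumptions and need tuning to whatever Colab GPU you get): pass max_memory so that layers that do not fit during the 4-bit loading pass are placed on CPU instead of raising OOM.

```python
# Sketch: cap GPU memory during 4-bit loading so overflow layers go to CPU RAM.
# The exact GiB numbers are assumptions; tune them to the GPU you actually get.
import torch
from transformers import AutoModelForImageTextToText, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_type="nf4",
)

model = AutoModelForImageTextToText.from_pretrained(
    "Qwen/Qwen3-VL-30B-A3B-Instruct",
    quantization_config=bnb_config,
    device_map="auto",
    max_memory={0: "35GiB", "cpu": "100GiB"},  # leave GPU headroom for the quantization pass
)
```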
2025-10-07T06:50:18
https://www.reddit.com/r/LocalLLaMA/comments/1o07kbr/4bit_model_used_more_ram_than_bf16_in_hf/
Striking-Warning9533
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o07kbr
false
null
t3_1o07kbr
/r/LocalLLaMA/comments/1o07kbr/4bit_model_used_more_ram_than_bf16_in_hf/
false
false
self
1
null
Running LLMs locally with Docker Model Runner - here's my complete setup guide
5
I finally moved everything local using Docker Model Runner. Thought I'd share what I learned.

Key benefits I found:

- Full data privacy (no data leaves my machine)
- Can run multiple models simultaneously
- Works with both Docker Hub and Hugging Face models
- OpenAI-compatible API endpoints

Setup was surprisingly easy - took about 10 minutes.
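For anyone wondering what the OpenAI-compatible endpoint looks like from code, here is a sketch using the openai Python client; the base_url and model tag below are assumptions and depend on how Model Runner exposes its port on your machine.

```python
# Sketch: call a locally pulled Docker Model Runner model via its OpenAI-compatible API.
# The base_url and model name are assumptions; check your Docker Model Runner settings.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:12434/engines/v1",  # assumed host-side endpoint
    api_key="not-needed-for-local",                # local servers typically ignore the key
)

resp = client.chat.completions.create(
    model="ai/llama3.2",  # assumed model tag pulled from Docker Hub
    messages=[{"role": "user", "content": "Summarize why local inference helps privacy."}],
)
print(resp.choices[0].message.content)
```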
2025-10-07T05:33:22
https://youtu.be/CV5uBoA78qI
OrewaDeveloper
youtu.be
1970-01-01T00:00:00
0
{}
1o06bk5
false
{'oembed': {'author_name': 'FluxStack', 'author_url': 'https://www.youtube.com/@fluxstack', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/CV5uBoA78qI?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Docker Model Runner Tutorial 2025: Run AI Models Locally in Minutes | Complete Guide"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/CV5uBoA78qI/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Docker Model Runner Tutorial 2025: Run AI Models Locally in Minutes | Complete Guide', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1o06bk5
/r/LocalLLaMA/comments/1o06bk5/running_llms_locally_with_docker_model_runner/
false
false
default
5
{'enabled': False, 'images': [{'id': 'Inwwh56unGIyb2l341wHJsmP3atI0i-y38ylZwXops4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/Inwwh56unGIyb2l341wHJsmP3atI0i-y38ylZwXops4.jpeg?width=108&crop=smart&auto=webp&s=3d9e043861c8705b97c42a9a677ee555a22d7752', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/Inwwh56unGIyb2l341wHJsmP3atI0i-y38ylZwXops4.jpeg?width=216&crop=smart&auto=webp&s=55d0a428c5bf1d7b1593df3184c237eeee7427dc', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/Inwwh56unGIyb2l341wHJsmP3atI0i-y38ylZwXops4.jpeg?width=320&crop=smart&auto=webp&s=b41947e4a1e6aba6ef1964490ab4f238727e2acd', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/Inwwh56unGIyb2l341wHJsmP3atI0i-y38ylZwXops4.jpeg?auto=webp&s=b610bf6a6c4bd79328da598255a64f3f77b05fb4', 'width': 480}, 'variants': {}}]}
Can you recommend a course for my youngster?
30
I have a 13-year-old whose school rules do not allow kids to pass off AI work as their own, which I generally support. Whether my kid starts using AI now or later, I know it's going to be ubiquitous tech throughout my kid's formative years, so I am thinking of a positive way my family can dispel some of the mystique, learn about it, and take advantage of the tech while keeping our eyes out for potential dangers. I feel my kid should know a little about what an LLM is made of and how it works. To that end, I am looking for an online course on how to build and train your own LLM from scratch that would be appropriate for tech-savvy kids, requires little to no programming skills (or just basic programming skills that can be learned along the way), and whose goal would be to teach the "basics" of how an LLM works by having the student follow along and build/train their own with Ollama or whatever. While I am not a complete novice when it comes to LLMs, I have never built/trained my own models. For my kid's setup, we could use a Lenovo gaming laptop: i9, 32 GB RAM, Nvidia GeForce RTX 4070 with 8 GB VRAM. Not good for big models, but maybe enough for the basics(?). I suppose we could just buy the compute power, but I think having a local model residing on our own machine would be cooler and provide some good learning opportunities. Heck, I might even join my kid in the course. Any suggestions for an online course (free or paid)?
2025-10-07T05:23:21
https://www.reddit.com/r/LocalLLaMA/comments/1o065kb/can_you_recommend_a_course_for_my_youngster/
pleok
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o065kb
false
null
t3_1o065kb
/r/LocalLLaMA/comments/1o065kb/can_you_recommend_a_course_for_my_youngster/
false
false
self
30
null
AudioBook Maker with Ebook Editor Using Chatterbox TTS
21
A desktop application to create full audiobooks from ebooks (EPUB/text), or chapter-wise audio for an ebook, using Chatterbox TTS. It also bundles an Easy Ebook Editor to edit ebooks, export and import chapters, create new ebooks, edit metadata, etc.

Other features:

- **Direct Local TTS**
- **Remote API Support with tts-webui** ([https://github.com/rsxdalv/TTS-WebUI](https://github.com/rsxdalv/TTS-WebUI))
- **Multiple Input Formats** - TXT, PDF, EPUB support
- **Voice Management** - Easy voice reference handling
- **Advanced Settings** - Full control over TTS parameters
- **Preset System** - Save and load your favorite settings
- **Audio Player** - Preview generated audio instantly

Github link - [https://github.com/D3voz/audiobook-maker-pro](https://github.com/D3voz/audiobook-maker-pro)

Full 33-minute single-chapter sample from The Final Empire - [https://screenapp.io/app/#/shared/JQh3r66YZw](https://screenapp.io/app/#/shared/JQh3r66YZw)

# Performance Comparison (NVIDIA 4060 Ti):

- Local Mode Speed: ~37 iterations/sec
- API Mode Speed (using tts-webui): ~80+ iterations/sec (over 2x faster)
2025-10-07T05:23:16
https://www.reddit.com/r/LocalLLaMA/comments/1o065ig/audiobook_maker_with_ebook_editor_using/
Devajyoti1231
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o065ig
false
null
t3_1o065ig
/r/LocalLLaMA/comments/1o065ig/audiobook_maker_with_ebook_editor_using/
false
false
self
21
{'enabled': False, 'images': [{'id': 'mEx5fFibU2oqxB5QKLGH062WOmrjBRm-mO_3o59iZ2k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mEx5fFibU2oqxB5QKLGH062WOmrjBRm-mO_3o59iZ2k.png?width=108&crop=smart&auto=webp&s=774b2c608b999ba00f0d2cc9e8e64e5fd68e0e28', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/mEx5fFibU2oqxB5QKLGH062WOmrjBRm-mO_3o59iZ2k.png?width=216&crop=smart&auto=webp&s=ab430e2ee2b91787a1ca13f2eaad7c525a4ed7d3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/mEx5fFibU2oqxB5QKLGH062WOmrjBRm-mO_3o59iZ2k.png?width=320&crop=smart&auto=webp&s=4f03496d6ae99c7718e0aa9be78227dee9ee1533', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/mEx5fFibU2oqxB5QKLGH062WOmrjBRm-mO_3o59iZ2k.png?width=640&crop=smart&auto=webp&s=733516487b890af9e85bb70bc5803c3c63910281', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/mEx5fFibU2oqxB5QKLGH062WOmrjBRm-mO_3o59iZ2k.png?width=960&crop=smart&auto=webp&s=1351c2685c48b5b596d398a4d65427eeef126e0d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/mEx5fFibU2oqxB5QKLGH062WOmrjBRm-mO_3o59iZ2k.png?width=1080&crop=smart&auto=webp&s=eabd4d81b0634d8b11111a4883a5dac2d8ee7739', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/mEx5fFibU2oqxB5QKLGH062WOmrjBRm-mO_3o59iZ2k.png?auto=webp&s=93e67a963d1672a6a908a267ad462224086a279f', 'width': 1280}, 'variants': {}}]}
2 things we never forget: our first GPU, and when our first GPU dies
54
Just had a 3090 die, maybe I will resurrect it, maybe not. It comes with the territory of buying used GPUs from miners.
2025-10-07T04:07:24
https://www.reddit.com/r/LocalLLaMA/comments/1o04s7y/2_things_we_never_forget_our_first_gpu_and_when/
segmond
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o04s7y
false
null
t3_1o04s7y
/r/LocalLLaMA/comments/1o04s7y/2_things_we_never_forget_our_first_gpu_and_when/
false
false
self
54
null
Buildautomata Memory MCP server
1
[removed]
2025-10-07T04:03:23
https://www.reddit.com/r/LocalLLaMA/comments/1o04pkx/buildautomata_memory_mcp_server/
freedom2adventure
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o04pkx
false
null
t3_1o04pkx
/r/LocalLLaMA/comments/1o04pkx/buildautomata_memory_mcp_server/
false
false
self
1
null
GLM 4.6 redundant reading of files
4
Hi, I am trying to use GLM 4.6 with Codex CLI until my weekly limit for my OpenAI key resets. I am getting a lot of redundant tool calls:

```
> Now I need to add the import and the system to the plugin. Let me apply the patch:
• Explored
  └ Read computation_graph.rs
• Explored
  └ Search use crate::systems::reactive_computation in computation_graph.rs
> Let me check the exact line numbers:
• Explored
  └ Read computation_graph.rs
• Explored
  └ Read computation_graph.rs
• Explored
  └ Search preview_visibility_system in computation_graph.rs
• Explored
  └ Read computation_graph.rs
• Ran cat -n crates/bevy_core/src/plugins/computation_graph.rs
  └ 1 use crate::nodes::addition_node::AdditionNode as TraitAdditionNode;
    2 use crate::nodes::construct_xyz::ConstructXYZNode;
    … +7 lines
    514 info!("✅ Registered {} source nodes", 3);
    515 }
```
2025-10-07T03:19:12
https://www.reddit.com/r/LocalLLaMA/comments/1o03v8y/glm_46_reduntant_reading_of_files/
No-Television-4805
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o03v8y
false
null
t3_1o03v8y
/r/LocalLLaMA/comments/1o03v8y/glm_46_reduntant_reading_of_files/
false
false
self
4
null
SDQ-LLM: Sigma-Delta Quantization for 1-bit LLMs of any size
19
Abstract >Large language models (LLMs) face significant computational and memory challenges, making extremely low-bit quantization crucial for their efficient deployment. In this work, we introduce SDQ-LLM: Sigma-Delta Quantization for 1-bit LLMs of any size, a novel framework that enables extremely low-bit quantization of LLMs while preserving their linguistic reasoning capabilities. A distinctive feature of SDQ-LLM is the continuous adjustability of the Over-Sampling Ratio (OSR), enabling dynamic adaptation to memory or VRAM constraints by selecting fractional OSR (e.g. 2.5 times) for an optimal trade-off between model size and accuracy. SDQ-LLM uses upsampling combined with Sigma-Delta Quantizer to binarize or ternarize LLMs weights, encoding high-precision parameters into 1-bit or 1.58-bit representations, replacing the multiplication operations within linear layers with addition. This approach significantly enhances inference efficiency under extremely low-bit quantization. To further reduce the loss of quantization precision, we incorporate Hadamard-based weight smoothing prior to quantization, improving the stability and robustness of the weight representations. Furthermore, to fully leverage the continuity of the OSR and reduce precision loss, recognizing the correlation between quantization sensitivity and weight variance, we propose a fine-grained, layer- and linear-wise OSR allocation strategy, MultiOSR. This strategy distributes OSR both across layers and within each layer, based on weight variance and parameter scale. Finally, extensive experiments on OPT and LLaMA model families demonstrate that SDQ-LLM achieves a more efficient and high-precision performance even under highly aggressive low-OSR settings. Our code is available at [https://github.com/Dreamlittlecat/LLM-Quant-Factory](https://github.com/Dreamlittlecat/LLM-Quant-Factory). Code: [https://github.com/Dreamlittlecat/LLM-Quant-Factory](https://github.com/Dreamlittlecat/LLM-Quant-Factory)
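For intuition only, here is a toy sketch of what a first-order Sigma-Delta quantizer with an integer OSR roughly does to a row of weights. This is not the paper's implementation; the scaling choice and the averaging-based decode are assumptions made purely for illustration:

```python
import numpy as np

def sigma_delta_binarize(w, osr=4):
    """Toy first-order sigma-delta quantizer: upsample a weight row by `osr`,
    then emit +/-1 codes whose running average tracks the (scaled) input."""
    scale = np.max(np.abs(w)) + 1e-8
    x = np.repeat(w / scale, osr)      # upsampling by the over-sampling ratio
    acc, codes = 0.0, np.empty_like(x)
    for i, xi in enumerate(x):
        acc += xi                      # integrate the input
        q = 1.0 if acc >= 0 else -1.0  # 1-bit quantizer
        codes[i] = q
        acc -= q                       # feed the quantization error back
    return codes, scale

osr = 4
w = np.random.randn(8) * 0.05
codes, scale = sigma_delta_binarize(w, osr=osr)
# Crude decode: average each group of `osr` codes and rescale.
approx = codes.reshape(-1, osr).mean(axis=1) * scale
print(np.round(w, 3))
print(np.round(approx, 3))
```

A higher OSR spends more 1-bit codes per weight and tracks the original values more closely, which is the size/accuracy trade-off the adjustable OSR is meant to expose.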
2025-10-07T03:14:57
https://arxiv.org/abs/2510.03275
ninjasaid13
arxiv.org
1970-01-01T00:00:00
0
{}
1o03s6a
false
null
t3_1o03s6a
/r/LocalLLaMA/comments/1o03s6a/sdqllm_sigmadelta_quantization_for_1bit_llms_of/
false
false
default
19
null
What's the hardware config to mimic Gemini 2.5 Flash Lite?
1
I've been using Gemini 2.5 Flash Lite with good results. If I wanted to run a comparable LLM locally, what hardware config would I need to get similar quality at maybe 1/5 of its generation speed? 1/10 is also fine.
2025-10-07T02:28:01
https://www.reddit.com/r/LocalLLaMA/comments/1o02txe/whats_the_hardware_config_to_mimic_gemini_25/
Sea-Commission5383
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o02txe
false
null
t3_1o02txe
/r/LocalLLaMA/comments/1o02txe/whats_the_hardware_config_to_mimic_gemini_25/
false
false
self
1
null
Hardware question for Dell poweredge r720xd.
3
If this is the wrong spot for hardware questions, just point me somewhere else. I currently run an i9-9980XE on an X299 mainboard with 128GB quad-channel DDR4-2400 (3090 GPU). On a 70B without a huge context, I get about 1 to 3 tk/sec. A friend offered me a Dell PowerEdge R720xd: dual Xeon, 128GB DDR3 I think. Would the server be any better than what I have? Or should I just save my $ for a Threadripper PRO?
2025-10-07T02:24:50
https://www.reddit.com/r/LocalLLaMA/comments/1o02rkk/hardware_question_for_dell_poweredge_r720xd/
Lower_Bedroom_2748
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o02rkk
false
null
t3_1o02rkk
/r/LocalLLaMA/comments/1o02rkk/hardware_question_for_dell_poweredge_r720xd/
false
false
self
3
null
LLM question
4
Are there any models that are singularly focused on individual coding tasks? Like, for example, Python only, or Flutter, etc.? I'm extremely lucky that I was able to build my memory system with only help from ChatGPT and Claude in VS Code. I'm not very good at coding myself. I'm good at the overall design of something, like knowing how I want something to work, but due to having severe ADHD and having had 4 strokes, my memory doesn't really work all that well anymore for learning how to code. So if anyone can direct me to a model that excels at coding in the 30B to 70B range, or is explicitly for coding, that would be a great help.
2025-10-07T02:04:55
https://www.reddit.com/r/LocalLLaMA/comments/1o02ci4/llm_question/
Savantskie1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o02ci4
false
null
t3_1o02ci4
/r/LocalLLaMA/comments/1o02ci4/llm_question/
false
false
self
4
null
GLM-4.6 isn’t just an update
1
[removed]
2025-10-07T02:04:32
https://www.reddit.com/r/LocalLLaMA/comments/1o02c7j/glm46_isnt_just_an_update/
Impressive-Olive8372
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o02c7j
false
null
t3_1o02c7j
/r/LocalLLaMA/comments/1o02c7j/glm46_isnt_just_an_update/
false
false
self
1
null
What boosted the 7900xtx, and when?
11
I don't remember any model going over 70 tok/sec, but after 5-6 months I just tested gpt-oss-20b and I get 168 tok/sec. Do you know what improved the 7900xtx? 168.13 tok/sec • 1151 tokens • 0.21s to first token • Stop reason: EOS Token Found
2025-10-07T01:42:01
https://www.reddit.com/r/LocalLLaMA/comments/1o01una/what_and_when_7900xtx_is_boosted/
tutami
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o01una
false
null
t3_1o01una
/r/LocalLLaMA/comments/1o01una/what_and_when_7900xtx_is_boosted/
false
false
self
11
null
Claude why
0
https://preview.redd.it/…s creators lmao.
2025-10-07T01:31:30
https://www.reddit.com/r/LocalLLaMA/comments/1o01mfa/claude_why/
Melodic-Emphasis-707
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o01mfa
false
null
t3_1o01mfa
/r/LocalLLaMA/comments/1o01mfa/claude_why/
false
false
https://a.thumbs.redditm…8h2nnz3Xld_4.jpg
0
null
What is the smallest reasoning model you fine tuned and what do you use it for?
8
Wondering how much this sub has been able to make use of small models like Qwen 0.6B and Gemma 270M. Have you been able to get them working for anything useful? What was your experience fine-tuning?
2025-10-07T01:09:35
https://www.reddit.com/r/LocalLLaMA/comments/1o015au/what_is_the_smallest_reasoning_model_you_fine/
SnooMarzipans2470
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o015au
false
null
t3_1o015au
/r/LocalLLaMA/comments/1o015au/what_is_the_smallest_reasoning_model_you_fine/
false
false
self
8
null
Last week in Multimodal AI - Local Edition
22
I curate a weekly newsletter on multimodal AI, here are the local/edge highlights from today's edition: **ModernVBERT - 250M beats 2.5B models** * 7x faster CPU inference * Bidirectional attention beats causal by +10.6 nDCG@5 * Runs on devices that can't load traditional models * [Paper](https://arxiv.org/pdf/2510.01149) | [HuggingFace](https://huggingface.co/ModernVBERT) | [Colab](https://colab.research.google.com/drive/1bT5LWeO1gPL83GKUZsFeFEleHmEDEQRy) https://preview.redd.it/r15po9xz3ltf1.png?width=1170&format=png&auto=webp&s=729ce13d7c40e57130be324b03b66d1a978b31d7 **Qwen3-VL - GPT-5 performance at 3B active params** * Matches GPT-5-Mini and Claude4-Sonnet * Handles STEM, VQA, OCR, video, agents * FP8 quantized version available * [GitHub](https://github.com/QwenLM/Qwen3-VL/tree/main/cookbooks) | [HuggingFace](https://huggingface.co/collections/Qwen/qwen3-vl-68d2a7c1b8a8afce4ebd2dbe) **DocPruner - Cut storage by 60%** * <1% performance drop * Adaptive pruning per document * Makes multi-vector retrieval affordable * [Paper](https://arxiv.org/abs/2509.23883) [The illustration of comparison between OCR-based \(a\) & LVLM-based \(b\) paradigms for VDR, and DocPruner \(c\), a novel framework to adaptively prune the patch-level embeddings for diverse document types.](https://preview.redd.it/05ix4vj34ltf1.png?width=1456&format=png&auto=webp&s=d993b6e742f4fd49b72dfdb3c37e3c0fe19a21ba) **Fathom-DeepResearch - 4B SOTA web investigation** * Two specialized 4B models * DuetQA dataset + RAPO optimization * [Paper](https://arxiv.org/abs/2509.24107) | [GitHub](https://github.com/FractalAIResearchLabs/Fathom-DeepResearch) Other highlights: * Claude Sonnet 4.5 codes for 30+ hours straight * Ovi generates synchronized audio-video * CU-1 achieves 67.5% GUI click accuracy Full newsletter(demos,papers,more):
2025-10-07T00:31:27
https://www.reddit.com/r/LocalLLaMA/comments/1o00bnb/last_week_in_multimodal_ai_local_edition/
Vast_Yak_4147
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o00bnb
false
null
t3_1o00bnb
/r/LocalLLaMA/comments/1o00bnb/last_week_in_multimodal_ai_local_edition/
false
false
https://b.thumbs.redditm…GsGuMrSb94Ug.jpg
22
null
Open Source Alternative to Perplexity
116
For those of you who aren't familiar with SurfSense, it aims to be the **open-source alternative to NotebookLM, Perplexity, or Glean.** In short, it's a Highly Customizable AI Research Agent that connects to your personal external sources and Search Engines (Tavily, LinkUp), Slack, Linear, Jira, ClickUp, Confluence, Gmail, Notion, YouTube, GitHub, Discord, Airtable, Google Calendar and more to come. I'm looking for contributors to help shape the future of SurfSense! If you're interested in AI agents, RAG, browser extensions, or building open-source research tools, this is a great place to jump in. Here’s a quick look at what SurfSense offers right now: **Features** * Supports 100+ LLMs * Supports local Ollama or vLLM setups * 6000+ Embedding Models * 50+ File extensions supported (Added Docling recently) * Podcasts support with local TTS providers (Kokoro TTS) * Connects with 15+ external sources such as Search Engines, Slack, Notion, Gmail, Notion, Confluence etc * Cross-Browser Extension to let you save any dynamic webpage you want, including authenticated content. **Upcoming Planned Features** * Mergeable MindMaps. * Note Management * Multi Collaborative Notebooks. **Interested in contributing?** SurfSense is completely open source, with an active roadmap. Whether you want to pick up an existing feature, suggest something new, fix bugs, or help improve docs, you're welcome to join in. GitHub: [https://github.com/MODSetter/SurfSense](https://github.com/MODSetter/SurfSense)
2025-10-07T00:31:00
https://www.reddit.com/r/LocalLLaMA/comments/1o00ban/open_source_alternative_to_perplexity/
Uiqueblhats
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o00ban
false
null
t3_1o00ban
/r/LocalLLaMA/comments/1o00ban/open_source_alternative_to_perplexity/
false
false
self
116
{'enabled': False, 'images': [{'id': '2X8pg9VAL5bv7YvlyquscMAZjIQarwZg7b6gawaVmzM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2X8pg9VAL5bv7YvlyquscMAZjIQarwZg7b6gawaVmzM.png?width=108&crop=smart&auto=webp&s=9df7969aa83ac4d6aa58eafbd2991d358afbce3e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2X8pg9VAL5bv7YvlyquscMAZjIQarwZg7b6gawaVmzM.png?width=216&crop=smart&auto=webp&s=a2a0a6a6e661929d9ae9550f7b3531d6bb6e9e3f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2X8pg9VAL5bv7YvlyquscMAZjIQarwZg7b6gawaVmzM.png?width=320&crop=smart&auto=webp&s=5547f174c4ef700b38cedadcd9ed1ea161334edf', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2X8pg9VAL5bv7YvlyquscMAZjIQarwZg7b6gawaVmzM.png?width=640&crop=smart&auto=webp&s=2a307d7948e97629db1d38b477112873c508f22e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2X8pg9VAL5bv7YvlyquscMAZjIQarwZg7b6gawaVmzM.png?width=960&crop=smart&auto=webp&s=2207a7893d9c4dcc41f7dd8064a428dc95500f25', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2X8pg9VAL5bv7YvlyquscMAZjIQarwZg7b6gawaVmzM.png?width=1080&crop=smart&auto=webp&s=44fe1eb84838d68835132541caddc620075fb578', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2X8pg9VAL5bv7YvlyquscMAZjIQarwZg7b6gawaVmzM.png?auto=webp&s=f26400535d66fecbc9bd99ffac8e99d0447b00ab', 'width': 1200}, 'variants': {}}]}
Granite 4 (gguf) is useless if you try to use the full 128k context.
41
**TLDR:** It's useless with long context, based on my tests with multiple models and configurations, both MLX and GGUF.

---

I had a special task requiring 156k tokens, so I decided to try it. I have a game guide I made with AI. I know it's full of errors (I'm slowly correcting them as I spot them), so I gave the model the guide along with the full wiki of said game and asked it to find the mistakes:

The website contains wrong information. Find them by comparing the information to the official wiki. Report all of them. <website> ... </website> <game wiki> ... </game wiki>

With LM Studio, all runtimes updated. M2 Max 64GB.

---

I tried Granite 4.0 H Small 8-bit MLX first (had to trim some data; MLX only supports about 131k context for some reason?). The response was a barely coherent new guide covering one of the subjects of the game.

granite-4.0-h-small-mlx (23.24 tok/sec, 781 tokens, 607.44s to first token, Stop reason: User Stopped)

Introduction In this guide, we'll discuss the various methods for generating income in the game RimWorld. By understanding these strategies and optimizing your colony's operations, you'll be able to build a thriving settlement capable of surviving any challenge thrown its way. Farming One of the primary methods for generating income in RimWorld is through farming. Planting crops and harvesting them can provide a steady stream of resources, which in turn can be sold or processed into various products.

---

I never had any luck with any MLX, so I gave the GGUF a try. Now with Granite 4.0 H Small GGUF Q4_K_M (Q8 with that much context won't fit in my 64GB), which let me push to 164k context while keeping all my data. After about 5 minutes of prompt processing, it crashed:

lmstudio/granite-4.0-h-small@q4_k_m Processing Prompt... 85% This message contains no content. The AI has nothing to say.

---

Let's try Tiny; it's not like it needs knowledge, I give it everything it needs. Again, it ignored the instructions and wrote a small guide on part of the content.

ibm/granite-4-h-tiny Q8 (23.88 tok/sec, 822 tokens, 221.59s to first token, Stop reason: EOS Token Found)

Ideology Min-Maxing Here are some tips and suggestions for optimizing your ideoligion choices for the best results: Fluid vs Static Ideology: If you don't need specific memes immediately, choose fluid to adapt to various scenarios in a single game. Choose static if you have a clear idea of what you want from the start.

---

Since not all GGUFs are created equal, let's try the latest sensation everybody touts: Unsloth. I'll even follow their guide: https://docs.unsloth.ai/new/ibm-granite-4.0 and trim my context to 114k. The Unsloth guide says the max is 131k, even though IBM said no limit; I guess GGUFs have limitations?

unsloth/granite-4.0-h-tiny-GGUF (26.68 tok/sec, 794 tokens, 152.78s to first token, Stop reason: EOS Token Found)

granite-4.0-h-tiny Food crops: Corn: Product = Silver 163.5 per hour work; Profit/day plant growth = Silver 1.16. Haygrass: Product = Silver 73.0 per hour work; Profit/day plant growth = Silver 0.83. Rice: Product = Silver 44.6 per hour work; Profit/day plant growth = Silver 1.19. Potatoes: Product = Silver 81.77 per hour work; Profit/day plant growth = Silver 1.12. Strawberries: Product = Silver 101.35 per hour work; Profit/day plant growth = Silver 1.02.

Again, it just wrote a small guide on part of the data...

---

Let's try the Unsloth small version with the recommended settings; who knows, I might have messed up a setting.

unsloth: Granite 4.0 H Small GGUF Q4_K_M

granite-4.0-h-small Processing Prompt... 81% This message contains no content. The AI has nothing to say.

Crash while processing the prompt, while under the 131k limit.
2025-10-07T00:09:43
https://www.reddit.com/r/LocalLLaMA/comments/1nzzurf/granite_4_gguf_is_useless_if_you_try_to_use_the/
mantafloppy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nzzurf
false
null
t3_1nzzurf
/r/LocalLLaMA/comments/1nzzurf/granite_4_gguf_is_useless_if_you_try_to_use_the/
false
false
self
41
{'enabled': False, 'images': [{'id': 'ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=108&crop=smart&auto=webp&s=6fa9ec0bda4ae81d05efe9ff0a296be82987e912', 'width': 108}, {'height': 106, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=216&crop=smart&auto=webp&s=18872cd0af37e87d93cf5b6c098630c44f40a162', 'width': 216}, {'height': 157, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=320&crop=smart&auto=webp&s=e8392e0cb89db800c200421873b07e92f34150fe', 'width': 320}, {'height': 314, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=640&crop=smart&auto=webp&s=5f6fc5d8f727ab6f86a8ca5f94a5091bbe81d025', 'width': 640}, {'height': 472, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=960&crop=smart&auto=webp&s=26fa346a0f27ac195ecf2f29e1d997a534a3b283', 'width': 960}, {'height': 531, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=1080&crop=smart&auto=webp&s=4e4e7bc3c126d7465ae2f4d8fab93d8c6edd76c4', 'width': 1080}], 'source': {'height': 590, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?auto=webp&s=df3ed66f8b8e54b17c699d9c4e81b03ddeb78c58', 'width': 1200}, 'variants': {}}]}
Recommendation for a better local model with less "safety" restrictions
10
I've been using GPT OSS 120b for a while and noticed that it can consult OpenAI policies up to three times during thinking. This feels rather frustrating; I was mostly asking some philosophical questions and analyzing text from various books. It was consistently trying to avoid any kind of opinion and "hate speech" (I have no idea what this even is). As a result its responses are rather disappointing, and it feels handicapped when working with other people's texts and thoughts. I'm looking for a more transparent, less restricted model that can run on a single RTX PRO 6000 and is good at reading text "as-is", and definitely less biased compared to OpenAI's creation. What would you recommend?
2025-10-06T23:31:00
https://www.reddit.com/r/LocalLLaMA/comments/1nzz007/recommendation_for_a_better_local_model_with_less/
Away-Lecture-3172
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nzz007
false
null
t3_1nzz007
/r/LocalLLaMA/comments/1nzz007/recommendation_for_a_better_local_model_with_less/
false
false
self
10
null
Help! RX 580 GPU Not Detected in Ollama/LM Studio/Jan.ai for Local LLMs – What's Wrong ?
1
Hey r/LocalLLaMA, I'm at my wit's end trying to get GPU acceleration working on my AMD RX 580 (8GB VRAM, Polaris gfx803) for running small models like Phi-3-mini or Gemma-2B. CPU mode works (slow AF), but I want that sweet Vulkan/ROCm offload. Specs: Windows 11, latest Adrenalin drivers (24.9.1, factory reset done), no iGPU conflict (disabled if any).

Here's what I've tried – nothing detects the GPU:

1. **Ollama**: Installed AMD preview, set HSA_OVERRIDE_GFX_VERSION=8.0.3 env var. Runs CPU-only; logs say "no compatible amdgpu devices." Tried community fork (likelovewant/ollama-for-amd v0.9.0) – same issue.
2. **LM Studio**: Downloaded common version, enabled ROCm extension in Developer Mode. Hacked backend-manifest.json to add "gfx803" (via PowerShell script for DLL swaps from Ollama zip). Replaced ggml-hip.dll/rocblas.dll/llama.dll in extensions/backends/bin. Env var set. Still "No compatible GPUs" in Hardware tab. Vulkan loader? Zilch.
3. **Jan.ai**: Fresh install, set Vulkan engine in Settings. Dashboard shows "No devices found" under GPUs. Console errors? Vulkan init fails with "ErrorInitializationFailed" or similar (F12 dev tools). Tried Admin mode/disable fullscreen – no dice.

Also tried:

- Clean driver reinstall (DDU wipe).
- Tiny Q4_K_M GGUF models only (fits VRAM).
- Task Manager/AMD Software shows GPU active for games, but zero % during inference.
- WSL2 + old ROCm 4.5? Too fiddly, gave up.

Is the RX 580 just too old for 2025 Vulkan in these tools (llama.cpp backend)? Community hacks for Polaris? Direct llama.cpp Vulkan compile? Or am I missing a dumb toggle? Budget's tight – no upgrade yet, but wanna run local chat/code gen without melting my CPU.
2025-10-06T22:57:51
https://www.reddit.com/r/LocalLLaMA/comments/1nzy8cr/help_rx_580_gpu_not_detected_in_ollamalm/
Master_Wrongdoer8908
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nzy8cr
false
null
t3_1nzy8cr
/r/LocalLLaMA/comments/1nzy8cr/help_rx_580_gpu_not_detected_in_ollamalm/
false
false
self
1
null
Evolution Strategies at Scale: LLM Fine-Tuning Beyond Reinforcement Learning
9
*Fine-tuning pre-trained large language models (LLMs) for down-stream tasks is a critical step in the AI deployment pipeline. Reinforcement learning (RL) is arguably the most prominent fine-tuning method, contributing to the birth of many state-of-the-art LLMs. In contrast, evolution strategies (ES), which once showed comparable performance to RL on models with a few million parameters, was neglected due to the pessimistic perception of its scalability to larger models. In this work, we report the first successful attempt to scale up ES for fine-tuning the full parameters of LLMs, showing the surprising fact that ES can search efficiently over billions of parameters and outperform existing RL fine-tuning methods in multiple respects, including sample efficiency, tolerance to long-horizon rewards, robustness to different base LLMs, less tendency to reward hacking, and more stable performance across runs. It therefore serves as a basis to unlock a new direction in LLM fine-tuning beyond what current RL techniques provide.* The source code is provided at: https://github.com/VsonicV/es-fine-tuning-paper
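For readers who haven't seen ES before, here is a minimal sketch of the basic antithetic-sampling ES update on a toy objective. The toy reward stands in for an LLM rollout score; none of this is the paper's actual fine-tuning code:

```python
import numpy as np

def reward(theta):
    # Stand-in for an expensive LLM rollout reward; here just a toy objective.
    return -np.sum((theta - 3.0) ** 2)

theta = np.zeros(10)          # "model parameters" (billions of them in the paper)
sigma, lr, pop = 0.1, 0.02, 32

for step in range(200):
    eps = np.random.randn(pop, theta.size)
    # Antithetic pairs: evaluate theta + sigma*eps and theta - sigma*eps.
    r_pos = np.array([reward(theta + sigma * e) for e in eps])
    r_neg = np.array([reward(theta - sigma * e) for e in eps])
    # Gradient estimate from reward-weighted perturbations (no backprop needed).
    grad = (eps.T @ (r_pos - r_neg)) / (2 * sigma * pop)
    theta += lr * grad

print(np.round(theta, 2))  # should approach 3.0 in every coordinate
```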
2025-10-06T22:35:12
https://arxiv.org/abs/2509.24372
Thrumpwart
arxiv.org
1970-01-01T00:00:00
0
{}
1nzxp6a
false
null
t3_1nzxp6a
/r/LocalLLaMA/comments/1nzxp6a/evolution_strategies_at_scale_llm_finetuning/
false
false
default
9
null
GLM 4.6 is the top new open weight model on Design Arena
63
GLM 4.6 is outperforming the new Kimi models and both DeepSeek 3.2 and 3.2-exp in the seven-day overall category on design arena. It's also beating every non-Anthropic SOTA model. I saw a post a few days ago showing it also took the top position on lmarena (https://www.reddit.com/r/LocalLLaMA/comments/1nxbbxe/glm_46_new_best_open_weight_overall_on_lmarena/) and it looks like it's doing the same for visual reasoning. This model is insane. https://preview.redd.it/hzp0gpp8ektf1.png?width=1883&format=png&auto=webp&s=4a7c84277e40c130e803a7bb5c6c7d8a2674f6a1
2025-10-06T22:06:33
https://www.reddit.com/r/LocalLLaMA/comments/1nzx061/glm_46_is_the_top_new_open_weight_model_on_design/
No-Tackle-5388
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nzx061
false
null
t3_1nzx061
/r/LocalLLaMA/comments/1nzx061/glm_46_is_the_top_new_open_weight_model_on_design/
false
false
https://b.thumbs.redditm…cszulgIHDZZI.jpg
63
null
The qwen3-next pr in llamacpp has been validated with a small test model
306
Link to comment: [https://github.com/ggml-org/llama.cpp/pull/16095#issuecomment-3373977382](https://github.com/ggml-org/llama.cpp/pull/16095#issuecomment-3373977382) I've been stalking this pr since it was opened and figured I'd share this update since I know a lot of others were interested in this model. Pwilkin has done some crazy work getting this together so quickly.
2025-10-06T21:52:11
https://www.reddit.com/gallery/1nzwnbj
Betadoggo_
reddit.com
1970-01-01T00:00:00
0
{}
1nzwnbj
false
null
t3_1nzwnbj
/r/LocalLLaMA/comments/1nzwnbj/the_qwen3next_pr_in_llamacpp_has_been_validated/
false
false
https://b.thumbs.redditm…mlQDKP6zYAUY.jpg
306
null
eGPU + Linux = ???
4
Guys, I have been thinking about buying a new GPU and using it with my laptop (as an eGPU) to run LLMs. Sounds good, but as I dig into the forums, I see people raising several problems with this kind of setup:

1. It only works well for inference, when the model fits 100% into VRAM.
2. Linux might be problematic to get working.

So I would like to ask for the experience/opinions of people here who have a similar setup. Thanks.
2025-10-06T21:31:33
https://www.reddit.com/r/LocalLLaMA/comments/1nzw3xt/egpu_linux/
Puzzleheaded_Dark_80
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nzw3xt
false
null
t3_1nzw3xt
/r/LocalLLaMA/comments/1nzw3xt/egpu_linux/
false
false
self
4
null