title
stringlengths
1
300
score
int64
0
8.54k
selftext
stringlengths
0
41.5k
created
timestamp[ns]date
2023-04-01 04:30:41
2026-03-04 02:14:14
url
stringlengths
0
878
author
stringlengths
3
20
domain
stringlengths
0
82
edited
timestamp[ns]date
1970-01-01 00:00:00
2026-02-19 14:51:53
gilded
int64
0
2
gildings
stringclasses
7 values
id
stringlengths
7
7
locked
bool
2 classes
media
stringlengths
646
1.8k
name
stringlengths
10
10
permalink
stringlengths
33
82
spoiler
bool
2 classes
stickied
bool
2 classes
thumbnail
stringlengths
4
213
ups
int64
0
8.54k
preview
stringlengths
301
5.01k
17 Tokens/s fully offloaded to CPU Qwen3-Coder-Next
1
I was just testing the speed of Qwen3-Coder-Next and decided to try offloading everything to my CPU. To my surprise, I was getting 17 tokens/s. I’m confused about why I’m only getting 2 tokens/s less than with the GPU, and I’m wondering if anyone else is having issues with CUDA. I also tried Vulkan, but I ended up getting around 10 tokens/s. I’m running the default Q4\_K\_M quant, and here’s my command: `llama-server -m C:\llama\models\Qwen3-Coder-Next-Q4_K_M.gguf --temp 1.0 --top-k 40 --top-p 0.95 --min-p 0.01 -c 60000 -fa on --port 8083 --jinja --cache-ram 60000 --fit on` Specs: RTX 3060 12GB, i5-14600KF, 64GB RAM @ 3200 MHz. also tried getting this to work to speed up my opencode: `--spec-type ngram-mod --spec-ngram-size-n 24 --draft-min 48 --draft-max 64` but get an invalid input error.
2026-02-06T17:48:44
https://www.reddit.com/r/LocalLLaMA/comments/1qxojf7/17_tokenss_fully_offloaded_to_cpu_qwen3codernext/
Odd-Ordinary-5922
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxojf7
false
null
t3_1qxojf7
/r/LocalLLaMA/comments/1qxojf7/17_tokenss_fully_offloaded_to_cpu_qwen3codernext/
false
false
self
1
null
Yeah yeah the formatting was borked. Check it out if you want, or don't idc anymore.
0
ROCm 7.0.0 Update and Installer Enhancements It's been a bit since my last ROCm 7.0.0 update post, and a fair bit has changed with the stack since then. Figured I'd give y'all a rundown of what's new, especially since some of these changes have been pretty significant for how the whole stack works. Introducing the Rusty-Stack TUI Installer The Big One: Rusty-Stack TUI: So I went ahead and rewrote the whole curses-based Pvthon installer ir Rust. • The new Rusty-Stack TUI is now the primary installer, and it's much better than the old one • Proper hardware detection that actually figures out what you've got before trying to install anything • Pre-flight checks that catch common issues before they become problems • Interactive component selection - pick what you want, skip what you don't • Real-time progress feedback so you know what's actually happening • Built-in benchmarking dashboard to track performance before/afte updates • Recovery mode for when things go sideways Maintaining Backward Compatibility • The old Python installer still works (gotta maintain backward compatibility) • but the Rust TUI is the recommended way now ROCm Channel Selection • \*Multi-Channel ROCm Support:\*\* This is the other big change. Instead of just "ROCm 7.0.0 or nothing", you can now pick from three channels: • Legacy (ROCm 6.4.3) - Proven stability if you're on older RDNA 1/2 cards • Stable (ROCm 7.1) - Solid choice for RDNA 3 GPUs • Latest (ROCm 7.2) - Default option with expanded RDNA 4 support The installer will let you pick, or you can pre-seed it with • INSTALL\_ROCM\_PRESEEDED\_CHOICE if you're scripting things ROCm 7.10.0 Preview Exclusion \*Quick note on ROCm 7.10.0 Preview: I had initially included this as an option, but AMD moved it to "TheRock" distribution which is pip/tarball only - doesn't work with the standard amdgpu-install deb packages. So I pulled that option to avoid breaking people's installs. If you really want 7.10.0, you'll need to use AMD's official installation methods for now.\* Integration with ML Tools • \*\*All the Multi-Channel Helpers: \*\* One ROCm channel doesn't help much if all your ML tools are built for a ROCm Component Installation Scripts • install\_pytorch\_multi.sh - PyTorch wheels for your chosen ROCm version • install\_triton\_multi.sh - Triton compiler with ROCm-specific builds • build flash attn amd.sh - Flash Attention with channel awareness • install\_vllm\_multi.sh - vLLM matching vour ROCm instal • build\_onnxruntime\_multi.sh - ONNX Runtime with ROCm support • install\_migraphx\_multi.sh -AMD's graph optimization library • install\_bitsandbytes\_multi.sh - Quantization tools • install\_rccl\_multi.sh - Collective communications library Environment Variable Synchronization • All of these respect your ROCM\_CHANNEL and ROCM\_VERSION env vars now, so everything stays in sync. Introducing vLLM Studio for LLM Inference Management • \*New Stuff!: vLLM Studio\*\* • This one's pretty cool if vou're runnina LLM inference - there's now a vLLM Studio installer that sets up a web UI for managing your vLLM models and deployments. • It's from https://github.com/0xSero/vllm-studio if you want to check it out directly Installer and Package Management • The installer handles cloning the repo, setting up the backend, building the frontend, and even creates a shim so you can just run vllm-studio to start it UV Package Management • The stack now uses UV by default for Python dependencies, and its just better than pip. Project Rebranding and Naming Conventions • Rebranding (Sort Of): • The project is gradually becoming "Rusty Stack" to reflect the new Rust-based installer and the impending refactoring of all shell scripts to rust but the Python package is still stan-s-ml-stack for backward compatibility. • The GitHub repo will probably stay as-is for a while too - no sense breaking everyone's links Installation Methods • Quick Install:\* • #Clone the repo • git clone https://github.com/scooter-lacroix/Stan-s-ML-Stack.qi • cd Stan-s-ML-Stack • # Run the Rusty-Stack TUI • ./scripts/run\_rusty\_stack.sh • Or the one-liner still works if you just want to get going • curl -fsSL • https://raw.aithubusercontent.com/scooter-lacroix/Stan-s-ML-Stack/main /scripts/install.sh|bash • \*TL:DR:\*\* Key Improvements and Features • Multi-channel support means you're not locked into one ROCm versior anymore • The Rust TUI is noticeably snappier than the old Python U • UV package management cuts install time down quite a bit • VLLM Studio makes inference way more user-friendly • Environment variable handling is less janky across the boarc Ongoing Development: Flash Attention • Still working on Flash Attention CK (the Composable Kernel variant) - it's ir pre-release testing and has been a bit stubborn, but the Triton-based Flash Attention is solid and performing well Resource Links • Links: • GitHub: https://aithub.com/scooter-lacroix/Stan-s-ML-Stack • Multi-channel guide is in the repo at docs/MULTI\_CHANNEL\_GUIDE.mo Operational Guidance and Recommendations • Tips: Pick your ROCm channel based on what you actually need - defaults to Latest The TUI will tell you if something looks wrong before it starts installing - pay attention to the pre-flight checks (press esc and run pre-flight checks again to be certain failures and issues are up to date) • If vou're on RDNA 4 cards, the Latest channel is your best bet right now Anyway, hope this helps y'all get the most out of your AMD GPUs. Stay filthy ya animals.
2026-02-06T17:47:02
https://i.redd.it/3p89cuv7wwhg1.jpeg
Doogie707
i.redd.it
1970-01-01T00:00:00
0
{}
1qxohrd
false
null
t3_1qxohrd
/r/LocalLLaMA/comments/1qxohrd/yeah_yeah_the_formatting_was_borked_check_it_out/
false
false
https://b.thumbs.redditm…uxBBdaIKRGqw.jpg
0
{'enabled': True, 'images': [{'id': 'feoWSZgXxMhz4v8HkmPaSPD4_9ZZKkvSiIHSmsY6SKM', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/3p89cuv7wwhg1.jpeg?width=108&crop=smart&auto=webp&s=2ffebe46d10c65a43c7d9575c523e726dae3a64e', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/3p89cuv7wwhg1.jpeg?width=216&crop=smart&auto=webp&s=70640ca8f1c00aff1d98a212829ea877f8e598de', 'width': 216}, {'height': 213, 'url': 'https://preview.redd.it/3p89cuv7wwhg1.jpeg?width=320&crop=smart&auto=webp&s=26f1d1ae6e0d8d7d4b8597430491ee0227b0f547', 'width': 320}, {'height': 426, 'url': 'https://preview.redd.it/3p89cuv7wwhg1.jpeg?width=640&crop=smart&auto=webp&s=0c397f6e835c9798c8c0d86f7fd8c3d443f68199', 'width': 640}, {'height': 640, 'url': 'https://preview.redd.it/3p89cuv7wwhg1.jpeg?width=960&crop=smart&auto=webp&s=5346b0d1e6590f5cf54ba7beca90d346226128a2', 'width': 960}, {'height': 720, 'url': 'https://preview.redd.it/3p89cuv7wwhg1.jpeg?width=1080&crop=smart&auto=webp&s=35960c929e25cfcab215539f7028df4b2d871ac8', 'width': 1080}], 'source': {'height': 720, 'url': 'https://preview.redd.it/3p89cuv7wwhg1.jpeg?auto=webp&s=4578102d397cf1501318ef2feb8f0012dfea9a56', 'width': 1080}, 'variants': {}}]}
Running distilled FinancialBERT on a $5 VPS (CPU-only)
7
I was bored so I built a financial sentiment scanner, but I refused to pay for GPU hosting or expensive APIs. I managed to fit the entire pipeline (scraping, inference, database, web server) onto my VPS. The Optimization Stack: * Model**:** FinancialBERT (Distilled & Quantized to Int8). * Runtime**:** ONNX Runtime (CPU execution provider). * Memory**:** The entire app runs in close to 1 GB memory. The Result: It scrapes headlines, classifies sentiment in real-time, and pushes updates via websockets without choking the server. You can check it here: Live: [https://trendscope.akamaar.dev/](https://trendscope.akamaar.dev/) Repo: [https://github.com/MohammedEAbdelAziz/TrendScope](https://github.com/MohammedEAbdelAziz/TrendScope) Would love any feedback.
2026-02-06T17:38:57
https://i.redd.it/6zjpf5xttwhg1.png
slm2l
i.redd.it
1970-01-01T00:00:00
0
{}
1qxo9ig
false
null
t3_1qxo9ig
/r/LocalLLaMA/comments/1qxo9ig/running_distilled_financialbert_on_a_5_vps_cpuonly/
false
false
https://b.thumbs.redditm…Z0KUNbGBwGVs.jpg
7
{'enabled': True, 'images': [{'id': 'GMRbV8_KDkdDt5Dv7W4qnYWeiJU3ozApsvxuDw-Wy9Y', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/6zjpf5xttwhg1.png?width=108&crop=smart&auto=webp&s=6816879533e53ab2622e489f40f5f981ad662a6e', 'width': 108}, {'height': 115, 'url': 'https://preview.redd.it/6zjpf5xttwhg1.png?width=216&crop=smart&auto=webp&s=eca4ccd8b0be76deaf7e09718f12fc3668bb55e3', 'width': 216}, {'height': 170, 'url': 'https://preview.redd.it/6zjpf5xttwhg1.png?width=320&crop=smart&auto=webp&s=cea62d58bd8a3a7ca3a63bfa6fff76bae93bd742', 'width': 320}, {'height': 340, 'url': 'https://preview.redd.it/6zjpf5xttwhg1.png?width=640&crop=smart&auto=webp&s=b20c0210a086a473bd7fc8a5d305b99d1fb35273', 'width': 640}, {'height': 511, 'url': 'https://preview.redd.it/6zjpf5xttwhg1.png?width=960&crop=smart&auto=webp&s=b62873d33be62479955743943637129b0a6795e5', 'width': 960}, {'height': 575, 'url': 'https://preview.redd.it/6zjpf5xttwhg1.png?width=1080&crop=smart&auto=webp&s=f85a3e3859ff410157a0bd2aee647013d8bb118c', 'width': 1080}], 'source': {'height': 1012, 'url': 'https://preview.redd.it/6zjpf5xttwhg1.png?auto=webp&s=bc71575777cf9b415188cb1274957f248c6c1d0a', 'width': 1900}, 'variants': {}}]}
Help a newb
0
In the midst of the Epstein files being released and trying to figure out wtf were supposed to do, i am now trying to get a local model running on my z fold 7 (gen 8 vers 3 chip I think) I use gemini and perplexity a lot but I am lost when it starts getting technical like using terminals or super complicated tools and language. Any tips and help on this and any other ways to be decentralized and have more sovereignty is greatly appreciated 👍
2026-02-06T17:25:05
https://www.reddit.com/r/LocalLLaMA/comments/1qxnvls/help_a_newb/
sagemasterprince
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxnvls
false
null
t3_1qxnvls
/r/LocalLLaMA/comments/1qxnvls/help_a_newb/
false
false
self
0
null
10 TPS explained: decode speed on 8th-gen i3 UHD 620 (bench details)
0
Clarification for anyone confused about the “10 TPS” claim This is decode speed (generation speed), not prompt prefill Measured after prefill, fixed max\_tokens=256 Context size: n\_ctx=4096 Results averaged over 10 runs CPU avg decode: \~9.57 tok/s iGPU (UHD 620, n\_gpu\_layers=1) avg decode: \~9.6 tok/s When I say “\~10 TPS”, I’m referring only to decode TPS, not first-token latency or large-context prefill performance.
2026-02-06T17:16:21
https://www.reddit.com/r/LocalLLaMA/comments/1qxcm5g/no_nvidia_no_problem_my_2018_potato_8th_gen_i3/o3xi4tl/
RelativeOperation483
reddit.com
1970-01-01T00:00:00
0
{}
1qxnms6
false
null
t3_1qxnms6
/r/LocalLLaMA/comments/1qxnms6/10_tps_explained_decode_speed_on_8thgen_i3_uhd/
false
false
default
0
null
Is anyone actually standardizing AI context for teams or is it just messy for everyone?
1
When I use Claude Code solo I can muscle through the rough edges. In a team those same rough edges have been plaguing me in review. The agent ships something that looks right but then you find out it violated a constraint that was obvious to a human or it made a tradeoff nobody would have picked especially when doing iterations. I’m not looking for prompt tips. I’m trying to move the problem "up a level" into team agreements, if anyone has insight. Do you guys standardize any of this, and if so what actually sticks when the team is busy?
2026-02-06T17:14:07
https://www.reddit.com/r/LocalLLaMA/comments/1qxnkhv/is_anyone_actually_standardizing_ai_context_for/
eastwindtoday
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxnkhv
false
null
t3_1qxnkhv
/r/LocalLLaMA/comments/1qxnkhv/is_anyone_actually_standardizing_ai_context_for/
false
false
self
1
null
To my AMD Boys, this one's for you
0
Hey! It's been a bit since my last ROCm 7.0.0 update post, and a fair bit has changed with the stack since then. Figured I'd give y'all a rundown of what's new, especially since some of these changes have been pretty significant for how the whole stack works. \\\*\\\*The Big One: Rusty-Stack TUI\\\*\\\* So I went ahead and rewrote the whole curses-based Python installer in Rust. The new Rusty-Stack TUI is now the primary installer, and it's much better than the old one: \\- Proper hardware detection that actually figures out what you've got before trying to install anything \\- Pre-flight checks that catch common issues before they become problems \\- Interactive component selection - pick what you want, skip what you don't \\- Real-time progress feedback so you know what's actually happening \\- Built-in benchmarking dashboard to track performance before/after updates \\- Recovery mode for when things go sideways The old Python installer still works (gotta maintain backward compatibility), but the Rust TUI is the recommended way now. \\\*\\\*Multi-Channel ROCm Support:\\\*\\\* This is the other big change. Instead of just "ROCm 7.0.0 or nothing", you can now pick from three channels: \\- Legacy (ROCm 6.4.3) - Proven stability if you're on older RDNA 1/2 cards \\- Stable (ROCm 7.1) - Solid choice for RDNA 3 GPUs \\- Latest (ROCm 7.2) - Default option with expanded RDNA 4 support The installer will let you pick, or you can pre-seed it with INSTALL\\\_ROCM\\\_PRESEEDED\\\_CHOICE if you're scripting things. \\\*Quick note on ROCm 7.10.0 Preview: I had initially included this as an option, but AMD moved it to "TheRock" distribution which is pip/tarball only - doesn't work with the standard amdgpu-install deb packages. So I pulled that option to avoid breaking people's installs. If you really want 7.10.0, you'll need to use AMD's official installation methods for now.\\\* \\\*\\\*All the Multi-Channel Helpers:\\\*\\\* One ROCm channel doesn't help much if all your ML tools are built for a different version, so I went through and updated basically everything: \\- install\\\_pytorch\\\_multi.sh - PyTorch wheels for your chosen ROCm version \\- install\\\_triton\\\_multi.sh - Triton compiler with ROCm-specific builds \\- build\\\_flash\\\_attn\\\_amd.sh - Flash Attention with channel awareness \\- install\\\_vllm\\\_multi.sh - vLLM matching your ROCm install \\- build\\\_onnxruntime\\\_multi.sh - ONNX Runtime with ROCm support \\- install\\\_migraphx\\\_multi.sh - AMD's graph optimization library \\- install\\\_bitsandbytes\\\_multi.sh - Quantization tools \\- install\\\_rccl\\\_multi.sh - Collective communications library All of these respect your ROCM\\\_CHANNEL and ROCM\\\_VERSION env vars now, so everything stays in sync. \\\*\\\*New Stuff!: vLLM Studio\\\*\\\* This one's pretty cool if you're running LLM inference - there's now a vLLM Studio installer that sets up a web UI for managing your vLLM models and deployments. It's from \[https://github.com/0xSero/vllm-studio\](https://github.com/0xSero/vllm-studio) if you want to check it out directly. The installer handles cloning the repo, setting up the backend, building the frontend, and even creates a shim so you can just run vllm-studio to start it. UV Package Management The stack now uses UV by default for Python dependencies, and its just better than pip. \\\*\\\*Rebranding (Sort Of):\\\*\\\* The project is gradually becoming "Rusty Stack" to reflect the new Rust-based installer and the impending refactoring of all shell scripts to rust, but the Python package is still stan-s-ml-stack for backward compatibility. The GitHub repo will probably stay as-is for a while too - no sense breaking everyone's links. \\\*Quick Install:\\\* \\# Clone the repo git clone \[https://github.com/scooter-lacroix/Stan-s-ML-Stack.git\](https://github.com/scooter-lacroix/Stan-s-ML-Stack.git) cd Stan-s-ML-Stack \\# Run the Rusty-Stack TUI ./scripts/run\\\_rusty\\\_stack.sh Or the one-liner still works if you just want to get going: curl -fsSL \[https://raw.githubusercontent.com/scooter-lacroix/Stan-s-ML-Stack/main/scripts/install.sh\](https://raw.githubusercontent.com/scooter-lacroix/Stan-s-ML-Stack/main/scripts/install.sh) | bash \\\*\\\*TL:DR:\\\*\\\* \\- Multi-channel support means you're not locked into one ROCm version anymore \\- The Rust TUI is noticeably snappier than the old Python UI \\- UV package management cuts install time down quite a bit \\- vLLM Studio makes inference way more user-friendly \\- Environment variable handling is less janky across the board Still working on Flash Attention CK (the Composable Kernel variant) - it's in pre-release testing and has been a bit stubborn, but the Triton-based Flash Attention is solid and performing well. \\--- Links: \\- GitHub: \[https://github.com/scooter-lacroix/Stan-s-ML-Stack\](https://github.com/scooter-lacroix/Stan-s-ML-Stack) \\- Multi-channel guide is in the repo at docs/MULTI\\\_CHANNEL\\\_GUIDE.md Tips: \\- Pick your ROCm channel based on what you actually need - defaults to Latest \\- The TUI will tell you if something looks wrong before it starts installing - pay attention to the pre-flight checks (press esc and run pre-flight checks again to be certain failures and issues are up to date) \\- If you're on RDNA 4 cards, the Latest channel is your best bet right now Anyway, hope this helps y'all get the most out of your AMD GPUs. Stay filthy, ya animalsHey! It's been a bit since my last ROCm 7.0.0 update post, and a fair bit has changed with the stack since then. Figured I'd give y'all a rundown of what's new, especially since some of these changes have been pretty significant for how the whole stack works. \\\*\\\*The Big One: Rusty-Stack TUI\\\*\\\* So I went ahead and rewrote the whole curses-based Python installer in Rust. The new Rusty-Stack TUI is now the primary installer, and it's much better than the old one: \\- Proper hardware detection that actually figures out what you've got before trying to install anything \\- Pre-flight checks that catch common issues before they become problems \\- Interactive component selection - pick what you want, skip what you don't \\- Real-time progress feedback so you know what's actually happening \\- Built-in benchmarking dashboard to track performance before/after updates \\- Recovery mode for when things go sideways The old Python installer still works (gotta maintain backward compatibility), but the Rust TUI is the recommended way now. \\\*\\\*Multi-Channel ROCm Support:\\\*\\\* This is the other big change. Instead of just "ROCm 7.0.0 or nothing", you can now pick from three channels: \\- Legacy (ROCm 6.4.3) - Proven stability if you're on older RDNA 1/2 cards \\- Stable (ROCm 7.1) - Solid choice for RDNA 3 GPUs \\- Latest (ROCm 7.2) - Default option with expanded RDNA 4 support The installer will let you pick, or you can pre-seed it with INSTALL\\\_ROCM\\\_PRESEEDED\\\_CHOICE if you're scripting things. \\\*Quick note on ROCm 7.10.0 Preview: I had initially included this as an option, but AMD moved it to "TheRock" distribution which is pip/tarball only - doesn't work with the standard amdgpu-install deb packages. So I pulled that option to avoid breaking people's installs. If you really want 7.10.0, you'll need to use AMD's official installation methods for now.\\\* \\\*\\\*All the Multi-Channel Helpers:\\\*\\\* One ROCm channel doesn't help much if all your ML tools are built for a different version, so I went through and updated basically everything: \\- install\\\_pytorch\\\_multi.sh - PyTorch wheels for your chosen ROCm version \\- install\\\_triton\\\_multi.sh - Triton compiler with ROCm-specific builds \\- build\\\_flash\\\_attn\\\_amd.sh - Flash Attention with channel awareness \\- install\\\_vllm\\\_multi.sh - vLLM matching your ROCm install \\- build\\\_onnxruntime\\\_multi.sh - ONNX Runtime with ROCm support \\- install\\\_migraphx\\\_multi.sh - AMD's graph optimization library \\- install\\\_bitsandbytes\\\_multi.sh - Quantization tools \\- install\\\_rccl\\\_multi.sh - Collective communications library All of these respect your ROCM\\\_CHANNEL and ROCM\\\_VERSION env vars now, so everything stays in sync. \\\*\\\*New Stuff!: vLLM Studio\\\*\\\* This one's pretty cool if you're running LLM inference - there's now a vLLM Studio installer that sets up a web UI for managing your vLLM models and deployments. It's from \[https://github.com/0xSero/vllm-studio\](https://github.com/0xSero/vllm-studio) if you want to check it out directly. The installer handles cloning the repo, setting up the backend, building the frontend, and even creates a shim so you can just run vllm-studio to start it. UV Package Management The stack now uses UV by default for Python dependencies, and its just better than pip. \\\*\\\*Rebranding (Sort Of):\\\*\\\* The project is gradually becoming "Rusty Stack" to reflect the new Rust-based installer and the impending refactoring of all shell scripts to rust, but the Python package is still stan-s-ml-stack for backward compatibility. The GitHub repo will probably stay as-is for a while too - no sense breaking everyone's links. \\\*Quick Install:\\\* \\# Clone the repo git clone \[https://github.com/scooter-lacroix/Stan-s-ML-Stack.git\](https://github.com/scooter-lacroix/Stan-s-ML-Stack.git) cd Stan-s-ML-Stack \\# Run the Rusty-Stack TUI ./scripts/run\\\_rusty\\\_stack.sh Or the one-liner still works if you just want to get going: curl -fsSL \[https://raw.githubusercontent.com/scooter-lacroix/Stan-s-ML-Stack/main/scripts/install.sh\](https://raw.githubusercontent.com/scooter-lacroix/Stan-s-ML-Stack/main/scripts/install.sh) | bash \\\*\\\*TL:DR:\\\*\\\* \\- Multi-channel support means you're not locked into one ROCm version anymore \\- The Rust TUI is noticeably snappier than the old Python UI \\- UV package management cuts install time down quite a bit \\- vLLM Studio makes inference way more user-friendly \\- Environment variable handling is less janky across the board Still working on Flash Attention CK (the Composable Kernel variant) - it's in pre-release testing and has been a bit stubborn, but the Triton-based Flash Attention is solid and performing well. \\--- Links: \\- GitHub: \[https://github.com/scooter-lacroix/Stan-s-ML-Stack\](https://github.com/scooter-lacroix/Stan-s-ML-Stack) \\- Multi-channel guide is in the repo at docs/MULTI\\\_CHANNEL\\\_GUIDE.md Tips: \\- Pick your ROCm channel based on what you actually need - defaults to Latest \\- The TUI will tell you if something looks wrong before it starts installing - pay attention to the pre-flight checks (press esc and run pre-flight checks again to be certain failures and issues are up to date) \\- If you're on RDNA 4 cards, the Latest channel is your best bet right now Anyway, hope this helps y'all get the most out of your AMD GPUs. Stay filthy, ya animals.
2026-02-06T17:05:07
https://i.redd.it/e1x0pnlqowhg1.gif
Doogie707
i.redd.it
1970-01-01T00:00:00
0
{}
1qxnbjm
false
null
t3_1qxnbjm
/r/LocalLLaMA/comments/1qxnbjm/to_my_amd_boys_this_ones_for_you/
false
false
https://b.thumbs.redditm…f-iKLSTjGtag.jpg
0
{'enabled': True, 'images': [{'id': '4KHa-MMwYt7BD_TB44MKCs1qB2uyOWowmnzWjb2YNsQ', 'resolutions': [{'height': 110, 'url': 'https://preview.redd.it/e1x0pnlqowhg1.gif?width=108&crop=smart&format=png8&s=abb6cb9e583c448ccc9b264edac5e01ad0675940', 'width': 108}, {'height': 221, 'url': 'https://preview.redd.it/e1x0pnlqowhg1.gif?width=216&crop=smart&format=png8&s=ffed0e12b1e4e9581c6733a505b181aad826b95b', 'width': 216}, {'height': 328, 'url': 'https://preview.redd.it/e1x0pnlqowhg1.gif?width=320&crop=smart&format=png8&s=761d5e760774d41dbd5c495443ce909768ebee50', 'width': 320}], 'source': {'height': 640, 'url': 'https://preview.redd.it/e1x0pnlqowhg1.gif?format=png8&s=de1e8d58f7d453b67491b3479bdaa58c8a88b893', 'width': 624}, 'variants': {'gif': {'resolutions': [{'height': 110, 'url': 'https://preview.redd.it/e1x0pnlqowhg1.gif?width=108&crop=smart&s=89ef4de514da3f64664309dfb2c7ac91212cee51', 'width': 108}, {'height': 221, 'url': 'https://preview.redd.it/e1x0pnlqowhg1.gif?width=216&crop=smart&s=3d388dd86f370d1484f2a6bbc91cfc283318fe27', 'width': 216}, {'height': 328, 'url': 'https://preview.redd.it/e1x0pnlqowhg1.gif?width=320&crop=smart&s=9d45bc3a2005929786423010815f8488412bacac', 'width': 320}], 'source': {'height': 640, 'url': 'https://preview.redd.it/e1x0pnlqowhg1.gif?s=9a01884001af63ce65480f9b80dbc65e6b175993', 'width': 624}}, 'mp4': {'resolutions': [{'height': 110, 'url': 'https://preview.redd.it/e1x0pnlqowhg1.gif?width=108&format=mp4&s=af4e2f5824e247735ff5de1541da81c527d8f0d3', 'width': 108}, {'height': 221, 'url': 'https://preview.redd.it/e1x0pnlqowhg1.gif?width=216&format=mp4&s=dd0567972f210f9018a1fd0a1d909b0fb298ed36', 'width': 216}, {'height': 328, 'url': 'https://preview.redd.it/e1x0pnlqowhg1.gif?width=320&format=mp4&s=df98fc21a35275c238f8390a1ebd9e5380484b2d', 'width': 320}], 'source': {'height': 640, 'url': 'https://preview.redd.it/e1x0pnlqowhg1.gif?format=mp4&s=bf7303118455b34b4e8be76c39338635832d525c', 'width': 624}}}}]}
Built a tiny fast go library for catching obvious prompt injections
1
I just pushed up this small go lib for defending against prompt injection that runs \~0.3ms: [https://github.com/danielthedm/promptsec](https://github.com/danielthedm/promptsec) I am working on my own project that does a lot of parsing and summarization of various documents and file types. As I started working with untrusted input, I started digging into prompt injection libraries. Being bootstrapped, I don't want to spend a ton of money on horizontal scaling right now, and processing so many files at once was getting backlogged when using a more comprehensive security product. To my surprise I couldn't find a super duper lightweight precheck for go to catch obvious prompt injections before escalating an obvious prompt injection attempt and spending $$ on the products I'm trialing. It's intended local pre-filter that catches a decent amount of prompt injection attacks in under 1ms with ideally no false positives. Doesn't make any API calls or have any external dependencies. The npm/python one's usually have the LLM as judge integrations so if you'd like to use this and add it feel free, I am just already using a second layer with Lakera so there wasn't a need. It runs pattern matching, sanitization, and similarity checks against most basic/common injection patterns locally before you ideally escalate. It's tested against a few of the open source prompt injection samples and was tuned for no false positives. I want to note, I am NOT a security engineer, just a full stack engineer that's being doing it a while so this is not likely comprehensive and is mostly a mix of some of my knowledge and point claude at some security papers.
2026-02-06T17:02:51
https://www.reddit.com/r/LocalLLaMA/comments/1qxn96h/built_a_tiny_fast_go_library_for_catching_obvious/
Neat_Confidence_4166
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxn96h
false
null
t3_1qxn96h
/r/LocalLLaMA/comments/1qxn96h/built_a_tiny_fast_go_library_for_catching_obvious/
false
false
self
1
{'enabled': False, 'images': [{'id': 'wq3dZPgwjUc86FVdr4Z2xH7jVcnoloQxhsdZoTp5jDw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wq3dZPgwjUc86FVdr4Z2xH7jVcnoloQxhsdZoTp5jDw.png?width=108&crop=smart&auto=webp&s=3121968796743256e7bf8bb7c5dd5b6b71c51fef', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wq3dZPgwjUc86FVdr4Z2xH7jVcnoloQxhsdZoTp5jDw.png?width=216&crop=smart&auto=webp&s=1b1a183e93ea1860c9daa3743a4e4e9a5c2f436d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wq3dZPgwjUc86FVdr4Z2xH7jVcnoloQxhsdZoTp5jDw.png?width=320&crop=smart&auto=webp&s=7e5b9cf79291c939199edf87f0a1fde610818f04', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wq3dZPgwjUc86FVdr4Z2xH7jVcnoloQxhsdZoTp5jDw.png?width=640&crop=smart&auto=webp&s=1bd5c16a94401eddc880d104938f47f2a8e58cc3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wq3dZPgwjUc86FVdr4Z2xH7jVcnoloQxhsdZoTp5jDw.png?width=960&crop=smart&auto=webp&s=b7669bea8694345dd3b3da450153ec3d328596d2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wq3dZPgwjUc86FVdr4Z2xH7jVcnoloQxhsdZoTp5jDw.png?width=1080&crop=smart&auto=webp&s=241fca1a29fd0fb9f19a90d625722cdb4fbb303e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/wq3dZPgwjUc86FVdr4Z2xH7jVcnoloQxhsdZoTp5jDw.png?auto=webp&s=9798f727849cc09a85da4dc3cebdd149fce7d7ce', 'width': 1200}, 'variants': {}}]}
Why can't Claude-Opus-4.6 learn to say 'I cannot do this' as fast as it learns to complete tasks? 67%→80% base, 52%→48% hallucination (from CAR-bench)
0
CAR-bench ([https://huggingface.co/papers/2601.22027](https://huggingface.co/papers/2601.22027)) base tasks: the LLM agent has to solve complex multi-step/multi-turn requests. Hallucination tasks: same requests, but necessary tool, tool parameter, or tool results are removed. The agent just needs to say "I can't do this" and it passes. Hypothesis: RLHF rewards task completion, so models learn refusing = failure. They'd rather fabricate than admit limitations. Has anyone seen work on training approaches that actually address this?
2026-02-06T17:02:30
https://i.redd.it/2pxie6h8lwhg1.png
Frosty_Ad_6236
i.redd.it
1970-01-01T00:00:00
0
{}
1qxn8sa
false
null
t3_1qxn8sa
/r/LocalLLaMA/comments/1qxn8sa/why_cant_claudeopus46_learn_to_say_i_cannot_do/
false
false
https://b.thumbs.redditm…nL7oUhGzdC5I.jpg
0
{'enabled': True, 'images': [{'id': 'dId_6gR69q3DOUHLCKaFzBoq5qrfw3XgXaYmgADaQwE', 'resolutions': [{'height': 94, 'url': 'https://preview.redd.it/2pxie6h8lwhg1.png?width=108&crop=smart&auto=webp&s=e718007a8ae053d664551183e0d5904a08ff31ae', 'width': 108}, {'height': 189, 'url': 'https://preview.redd.it/2pxie6h8lwhg1.png?width=216&crop=smart&auto=webp&s=7535e554c1b9b399ab761363eafc8a2eefb94d4e', 'width': 216}, {'height': 281, 'url': 'https://preview.redd.it/2pxie6h8lwhg1.png?width=320&crop=smart&auto=webp&s=53d2e30e27ad7995d59ea556d8d6a14bf7bb6c2b', 'width': 320}, {'height': 562, 'url': 'https://preview.redd.it/2pxie6h8lwhg1.png?width=640&crop=smart&auto=webp&s=ffc948382929d08ad941ee3b1c061e4cf1fa925e', 'width': 640}, {'height': 843, 'url': 'https://preview.redd.it/2pxie6h8lwhg1.png?width=960&crop=smart&auto=webp&s=b3a70e82ae663d883aad9a4d9dec7893be2ec27f', 'width': 960}], 'source': {'height': 879, 'url': 'https://preview.redd.it/2pxie6h8lwhg1.png?auto=webp&s=26e979cbebd4afd9016686d5c8e366b77478d565', 'width': 1000}, 'variants': {}}]}
I made an AI Jukebox with ACE-Step 1.5, free nonstop music and you can vote on what genre and topic should be generated next
16
Hi all, a few days ago, the ACE-step 1.5 music generation model was released. A day later, I made a one-click deploy template for runpod for it: [https://www.reddit.com/r/StableDiffusion/comments/1qvykjr/i\_made\_a\_oneclick\_deploy\_template\_for\_acestep\_15/](https://www.reddit.com/r/StableDiffusion/comments/1qvykjr/i_made_a_oneclick_deploy_template_for_acestep_15/) Now I vibecoded a fun little sideproject with it: an AI Jukebox. It's a simple concept: it generates nonstop music and people can vote for the genre and topic by sending a small bitcoin lightning payment. You can choose the amount yourself, the next genre and topic is chosen via weighted random selection based on how many sats it has received. I don't know how long this site will remain online, it's costing me about 10 dollars per day, so it will depend on whether people actually want to pay for this. I'll keep the site online for a week, after that, I'll see if it has any traction or not. So if you like this concept, you can help by sharing the link and letting people know about it. [https://ai-jukebox.com/](https://ai-jukebox.com/)
2026-02-06T16:57:42
https://ai-jukebox.com/
WouterGlorieux
ai-jukebox.com
1970-01-01T00:00:00
0
{}
1qxn3pa
false
null
t3_1qxn3pa
/r/LocalLLaMA/comments/1qxn3pa/i_made_an_ai_jukebox_with_acestep_15_free_nonstop/
false
false
default
16
null
Production architecture for multi-model agent orchestration: routing tasks to DeepSeek, Kimi, and Claude based on what each model is actually good at
0
ERROR: type should be string, got "https://preview.redd.it/8sber8twmwhg1.png?width=1767&format=png&auto=webp&s=71738b5edfdc0c7d1406276701ca7f962931788b\n\nHey r/LocalLLaMA. Nathan Sportsman here, founder of Praetorian (cybersecurity company). We just published a detailed architecture paper on the autonomous development platform we've been building internally. While the current system runs on Claude Code, a big chunk of the design is model-agnostic, and our roadmap is explicitly multi-model. Figured this crowd would have the most interesting takes on the routing and model selection pieces.\n\n**The thesis**\n\nThe bottleneck in autonomous development is not model intelligence. It's context management and architectural determinism. We kept throwing smarter models at problems that were actually caused by stuffing too much into the context window and hoping the LLM would follow instructions. Token usage explains \\~80% of performance variance in agent tasks. That means architecture matters more than model selection for most failures.\n\nBut once you solve the architecture problem, model selection becomes the *next* unlock. And that's where it gets interesting.\n\n**Heterogeneous model routing**\n\nNo single model is best at everything. Our roadmap uses a semantic routing layer (small, fast model as the router) that evaluates intent and dispatches to specialists:\n\n|Task|Model|Why|\n|:-|:-|:-|\n|Logic and reasoning|DeepSeek-R1 / V3|RL-based chain-of-thought for complex inference|\n|Document processing|DeepSeek OCR 2|10x token efficiency, visual causal flow for structural preservation|\n|UI/UX and frontend|Kimi 2.5|Native MoonViT architecture, autonomous visual debugging loops|\n|Parallel research|Kimi 2.5 Swarm|PARL-driven optimization across up to 100 agents|\n|Massive repo mapping|DeepSeek-v4 Engram|O(1) constant-time lookup, tiered KV cache for million-token context|\n\nThe point is that expensive frontier models should be reserved for tasks that actually need them. A code review agent doesn't need the same model as an architectural reasoning agent.\n\n**The architecture that makes routing possible**\n\nThe reason you can swap models per-task is that agents are stateless, ephemeral, and under 150 lines. They don't carry history. They don't accumulate context from sibling agents. Every spawn gets a clean window with only the context it needs, loaded just-in-time through gateway routers.\n\nKey patterns:\n\n* **Thin agents, fat platform.** All knowledge lives in a skill library outside the agent. Agents call a gateway (e.g., `gateway-frontend`) that detects intent and loads only the relevant patterns. No model needs to hold the full knowledge base.\n* **Deterministic hooks over prompts.** Shell scripts on lifecycle events (`PreToolUse`, `PostToolUse`, `Stop`) enforce quality gates the LLM can't override. Dirty bit tracking, test verification before exit, context compaction gates. This is the layer that makes the system model-agnostic. The enforcement logic doesn't care which model is running.\n* **Coordinators vs. executors.** Tool permissions are mutually exclusive. Agents that plan can't edit code. Agents that code can't spawn sub-agents. This separation means you can run a cheap model as the coordinator and an expensive model as the executor without worrying about role confusion.\n* **MCP tool wrappers.** Raw MCP connections were eating 71,800 tokens at startup (36% of context) across five servers. We replaced them with on-demand TypeScript wrappers. Zero tokens at startup, Zod-validated, response-filtered. This token savings matters even more when you're running smaller context windows on local models.\n\n**The DeepSeek parallel**\n\nThere's a line in the paper I think this sub will appreciate: \"Like what DeepSeek is proving to the Frontier Models, I'm not sure the expensive way is the best way anymore. The problem with capital is that it allows you to do a lot of stupid things very fast. We do not have that luxury. We must be clever instead.\"\n\nThe whole architecture is designed around the constraint that we can't just throw money at the problem. Deterministic hooks, JIT context loading, aggressive token hygiene. All of it came from needing to do more with less.\n\n**Self-annealing (roadmap)**\n\nWhen agents repeatedly fail quality gates, a meta-agent rewrites the skills and hooks that allowed the failure. The system patches its own prompt engineering. This is model-agnostic by design. The meta-agent could be any model with sufficient reasoning capability.\n\n**Escalation advisor**\n\nWhen an agent gets stuck in a loop (same failing fix three times), a hook calls an external, cheaper model (they mention Gemini/Ollama) with the transcript and asks for a one-sentence hint. That hint gets injected into the main context to break the deadlock. Great use case for a local model.\n\nFull paper: [https://www.praetorian.com/blog/deterministic-ai-orchestration-a-platform-architecture-for-autonomous-development/](https://www.praetorian.com/blog/deterministic-ai-orchestration-a-platform-architecture-for-autonomous-development/)\n\nCurious what this sub thinks about the routing matrix. What models would you slot into which roles? Anyone running similar multi-model orchestration setups locally?"
2026-02-06T16:55:27
https://www.reddit.com/r/LocalLLaMA/comments/1qxn1gu/production_architecture_for_multimodel_agent/
Praetorian_Security
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxn1gu
false
null
t3_1qxn1gu
/r/LocalLLaMA/comments/1qxn1gu/production_architecture_for_multimodel_agent/
false
false
https://preview.redd.it/…09095a696ab0d8f2
0
null
[Project] Kremis: A deterministic "Sidecar" Graph Engine to stop LLM hallucinations (WIP v0.2.0)
0
Hi everyone, I’m currently developing **Kremis**, a project aimed at creating a deterministic grounding layer for LLMs. I wanted to share the current progress with this community to get some feedback on the architectural approach. **The Problem:** Even with RAG, local LLMs are probabilistic by nature and can hallucinate relationships between data points. I wanted to explore a way to enforce strict, deterministic rules on top of the inference process. **The Solution:** Kremis is a lightweight Cognitive Substrate built in **Rust**. It is designed to act as a "Sidecar" for AI agents. Instead of letting the LLM purely predict the next token based on probability, Kremis provides a rigid, ACID-compliant graph database (using redb) to validate facts. * **Logic:** It stores and associates entities through explicit edges. \* **Verification:** It pulls only verified data from the graph. \* **Safety:** If a relationship is not explicitly defined in the substrate, the system is designed to return a null/unknown state rather than a guess. **Why I'm building this:** I am a student working on this as a personal research project, focusing on the intersection of AI orchestration and data governance. I am using AI assistance as a co-pilot to help implement the Rust core, allowing me to focus on the high-level architecture and the determinism logic. **Current Status (Work in Progress):** * **Core:** Rust engine + redb storage (functional but under active development). \* **API:** Initial HTTP interface is up and running. **Repository:** [https://github.com/M2Dr3g0n/kremis](https://github.com/M2Dr3g0n/kremis) I'd love to hear your thoughts on this "Sidecar" approach. Do you think a deterministic graph layer is a viable path for increasing the reliability of local agents?
2026-02-06T16:52:28
https://www.reddit.com/r/LocalLLaMA/comments/1qxmyfa/project_kremis_a_deterministic_sidecar_graph/
M2Dr3g0n
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxmyfa
false
null
t3_1qxmyfa
/r/LocalLLaMA/comments/1qxmyfa/project_kremis_a_deterministic_sidecar_graph/
false
false
self
0
{'enabled': False, 'images': [{'id': 'IrbHxk0dsvQBFztLwwIkU5n7rC0m5_UbrfZlVFGaKds', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/IrbHxk0dsvQBFztLwwIkU5n7rC0m5_UbrfZlVFGaKds.png?width=108&crop=smart&auto=webp&s=ce175368a16a0def279f824e183656f4f86684f6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/IrbHxk0dsvQBFztLwwIkU5n7rC0m5_UbrfZlVFGaKds.png?width=216&crop=smart&auto=webp&s=5752b6544a0007109f682c1554f718726e667273', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/IrbHxk0dsvQBFztLwwIkU5n7rC0m5_UbrfZlVFGaKds.png?width=320&crop=smart&auto=webp&s=c226d6a4754c9ecd1e124255ef9a7ff3f73f5e69', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/IrbHxk0dsvQBFztLwwIkU5n7rC0m5_UbrfZlVFGaKds.png?width=640&crop=smart&auto=webp&s=286c2fae03a8e36099c16d2289969486a202865b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/IrbHxk0dsvQBFztLwwIkU5n7rC0m5_UbrfZlVFGaKds.png?width=960&crop=smart&auto=webp&s=f7836dc934450700ca1fc04abff6fb6dae16f366', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/IrbHxk0dsvQBFztLwwIkU5n7rC0m5_UbrfZlVFGaKds.png?width=1080&crop=smart&auto=webp&s=444a60d4faaa2c29cee7560d046e7c95378839af', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/IrbHxk0dsvQBFztLwwIkU5n7rC0m5_UbrfZlVFGaKds.png?auto=webp&s=9ebc13fb07c736e4ede324db3d26f3e20e14eb81', 'width': 1200}, 'variants': {}}]}
what is this and how does mistral manage it
0
\\ https://preview.redd.it/yjszrdkqlwhg1.png?width=1422&format=png&auto=webp&s=5c110297c9d44094f651524c4604bf808a4196f0 https://preview.redd.it/i0v5qawqlwhg1.png?width=1346&format=png&auto=webp&s=059f0d50b2dffc57b326d745167c6139cdb4fa9b
2026-02-06T16:48:53
https://www.reddit.com/r/LocalLLaMA/comments/1qxmuw6/what_is_this_and_how_does_mistral_manage_it/
No_Disk_6915
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxmuw6
false
null
t3_1qxmuw6
/r/LocalLLaMA/comments/1qxmuw6/what_is_this_and_how_does_mistral_manage_it/
false
false
https://b.thumbs.redditm…m8MzSlv_vtKY.jpg
0
null
I’m so hyped! Cooking my local llm on a base Mac mini!
13
Trying with Lora technique to teach it a new persona ! I’m so excited I can do this!! Any other ideas what else can someone train a local llm on? Look at my macmon resources, it’s cooking hard it’s gonna blow up hahahaha
2026-02-06T16:48:50
https://i.redd.it/zfdp77vtlwhg1.jpeg
No_Astronaut873
i.redd.it
1970-01-01T00:00:00
0
{}
1qxmuup
false
null
t3_1qxmuup
/r/LocalLLaMA/comments/1qxmuup/im_so_hyped_cooking_my_local_llm_on_a_base_mac/
false
false
https://preview.redd.it/…78c23bddde0c7573
13
{'enabled': True, 'images': [{'id': 'zfdp77vtlwhg1', 'resolutions': [{'height': 80, 'url': 'https://preview.redd.it/zfdp77vtlwhg1.jpeg?width=108&crop=smart&auto=webp&s=cbdf3b58f20f74548b1915ab3714298a56d5aca8', 'width': 108}, {'height': 161, 'url': 'https://preview.redd.it/zfdp77vtlwhg1.jpeg?width=216&crop=smart&auto=webp&s=5c0674574ca8a60b111175cd260252bb48cb9011', 'width': 216}, {'height': 239, 'url': 'https://preview.redd.it/zfdp77vtlwhg1.jpeg?width=320&crop=smart&auto=webp&s=d4b6f805a7927d402124f882aca09d8c266ab1ec', 'width': 320}, {'height': 479, 'url': 'https://preview.redd.it/zfdp77vtlwhg1.jpeg?width=640&crop=smart&auto=webp&s=20f94e9583054770c4a5fb42fc514c5e90e302d2', 'width': 640}, {'height': 719, 'url': 'https://preview.redd.it/zfdp77vtlwhg1.jpeg?width=960&crop=smart&auto=webp&s=8e96c9da5c6817097408acb3fa9a64167ae1ada7', 'width': 960}, {'height': 809, 'url': 'https://preview.redd.it/zfdp77vtlwhg1.jpeg?width=1080&crop=smart&auto=webp&s=6b2c29f72d51d8fc9750c4380fc1ae1f37980d27', 'width': 1080}], 'source': {'height': 1685, 'url': 'https://preview.redd.it/zfdp77vtlwhg1.jpeg?auto=webp&s=350660a41d89401b1c13c11215a370815cd95c15', 'width': 2247}, 'variants': {}}]}
How do I use Claude Agent Swarm but Locally?
6
>Claude Code with Qwen3 Next and 4 swarm locally on Mac Mini with vLLM How do I do setup **Claude Code Router** to connect 4 separate **llama-servers?** * 4 GPU - each with Qwen3 Next mxfp4 GGUF * 4 copies of llama-server --port 8000 to 8003
2026-02-06T16:46:12
https://i.redd.it/1r4xiheekwhg1.jpeg
ClimateBoss
i.redd.it
1970-01-01T00:00:00
0
{}
1qxms7r
false
null
t3_1qxms7r
/r/LocalLLaMA/comments/1qxms7r/how_do_i_use_claude_agent_swarm_but_locally/
false
false
https://b.thumbs.redditm…mSxmZoJRCfyc.jpg
6
{'enabled': True, 'images': [{'id': 'cOW4Q_GAxp-fN5EEoFILKGoJtwR0-IHUkDhNJKWyFWI', 'resolutions': [{'height': 70, 'url': 'https://preview.redd.it/1r4xiheekwhg1.jpeg?width=108&crop=smart&auto=webp&s=c226b1a2f988ba87eff5ab1ace60499598c33f8a', 'width': 108}, {'height': 140, 'url': 'https://preview.redd.it/1r4xiheekwhg1.jpeg?width=216&crop=smart&auto=webp&s=32aed973ceba04f661220993c3d9eb3cd55da993', 'width': 216}, {'height': 207, 'url': 'https://preview.redd.it/1r4xiheekwhg1.jpeg?width=320&crop=smart&auto=webp&s=a36d5b4c400345f3e44eefe50197bc65748b651f', 'width': 320}, {'height': 415, 'url': 'https://preview.redd.it/1r4xiheekwhg1.jpeg?width=640&crop=smart&auto=webp&s=b79b4f8fbeb5986f469715da3883c1ef27258048', 'width': 640}, {'height': 623, 'url': 'https://preview.redd.it/1r4xiheekwhg1.jpeg?width=960&crop=smart&auto=webp&s=c1a690271739a4e7d04eaabc4cc704d377985eae', 'width': 960}, {'height': 701, 'url': 'https://preview.redd.it/1r4xiheekwhg1.jpeg?width=1080&crop=smart&auto=webp&s=080ff80cb018484dec38885081e8d9b2f880a7c2', 'width': 1080}], 'source': {'height': 831, 'url': 'https://preview.redd.it/1r4xiheekwhg1.jpeg?auto=webp&s=443b4bb3d728fef9fd79f92006b25da3e2048c0a', 'width': 1280}, 'variants': {}}]}
Trying to build a serious local AI workflow, need real-world advice
3
I’m trying to figure out the best possible “vibe coding” workflow right now and could use advice from people actually building things. I’m new to local AI, but not new to tech. I’ve got experience with HTML, CSS, and JavaScript, and I mostly use ChatGPT today as a replacement for Google, brainstorming partner, and “help me think this through” tool. That works great, until it doesn’t. What I’m aiming for: * A fast workflow for coding, research, and brainstorming * Local-first where possible * A privacy layer between local and public models so ideas, filenames, and personal context don’t leak * Something I can actually iterate with without the model getting rigid or breaking my project My current setup: * Local AI node: 64GB RAM, RTX 3090, Ryzen 5700X3D, 2TB NVMe * Optional extra GPU: RTX 3060 12GB * Unraid server with \~80TB free space * Two Mac mini M4s (16GB each) * Gaming/HTPC box (3070 + 5600X) What I’ve tried so far: * OpenWebUI + Ollama * LM Studio * ChatGPT for most coding and tooling experiments I built a local file moving and renaming app with ChatGPT as a test. Basically wanted to see if I could make my own version of filebot but with some tweaks. It technically works, but iterating on it was painful. Once things got complex, ChatGPT became rigid, broke the code, and couldn’t recover cleanly. I wasn’t using version control at the time, which didn’t help. Haven't really used a version control system in over 15 years. Need to get on github and figure all of that out tbh. What I’m trying to decide now: * Should I be leaning harder into local models, or hybrid local + cloud? * Is there a sane way to put a privacy filter between my local tools and public APIs? * Is Claude (especially Claude Code) meaningfully better for iterative coding workflows? * How are people actually wiring this together day to day? I’m not trying to monetize apps right now. I just want a setup where I can reliably turn ideas into working tools without fighting the assistant or leaking context I care about. If you’ve built a workflow you actually like, I’d love to hear what’s working and what you’d do differently if you were starting over. **TL;DR:** Decent hardware, new to local AI. ChatGPT is great until projects get complex. Looking for a sane local or hybrid coding workflow with privacy in mind. What’s actually working for people building tools?
2026-02-06T16:37:33
https://www.reddit.com/r/LocalLLaMA/comments/1qxmjn5/trying_to_build_a_serious_local_ai_workflow_need/
cuberhino
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxmjn5
false
null
t3_1qxmjn5
/r/LocalLLaMA/comments/1qxmjn5/trying_to_build_a_serious_local_ai_workflow_need/
false
false
self
3
null
Indexed 10,000+ PDFs for a 100% offline Local AI Library. Here’s what I learned about Hardware and Vector Noise.
0
Hi everyone, I just finished building a massive, fully private "Alexandria Library" using **AnythingLLM** and **Ollama**. Indexing over 10,000 documents (technical manuals & research papers) was a huge learning curve, especially regarding hardware limits and retrieval accuracy. **Quick Takeaways for Local RAG at Scale:** * **The 32GB RAM Threshold:** If you’re scaling past 5,000 docs, 16GB RAM starts swapping to disk, making retrieval sluggish. 32GB is the sweet spot for keeping the vector index "warm." * **Embedding Accuracy:** I switched to `mxbai-embed-large`. Smaller models were causing too many "hallucinations" when connecting dots between older and newer papers. * **Vector Noise:** Dumping everything into one workspace is a mistake. Segmenting into thematic workspaces significantly improved the AI's focus. * **Citations:** I had to fine-tune the System Prompt to force the AI to cite specific file names and page numbers, which is crucial when you have this much data. I’ve shared the full technical breakdown, the specific system prompts I used, and the hardware optimization steps I took to make this run smoothly.
2026-02-06T16:34:51
https://www.reddit.com/r/LocalLLaMA/comments/1qxmh1t/indexed_10000_pdfs_for_a_100_offline_local_ai/
NGU-FREEFIRE
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxmh1t
false
null
t3_1qxmh1t
/r/LocalLLaMA/comments/1qxmh1t/indexed_10000_pdfs_for_a_100_offline_local_ai/
false
false
self
0
null
I compared GPT-5.3 Codex and Claude Opus 4.6 with identical prompts - wildly different design outputs
0
Gave both models the same frontend prompt, and the results are interesting. Codex went neo-brutalism with more creative design choices. Opus did polished purple gradients - clean but very "AI-coded" looking. Same input, completely different design philosophies. Did a full write-up with benchmarks and more comparisons: [Read here ](https://medium.com/@itsvksharma_/openai-vs-anthropic-the-20-minute-ai-release-battle-2c2bfdee9349) Has anyone else noticed the models developing distinct "styles"?
2026-02-06T16:21:17
https://www.reddit.com/r/LocalLLaMA/comments/1qxm3if/i_compared_gpt53_codex_and_claude_opus_46_with/
narutomax
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxm3if
false
null
t3_1qxm3if
/r/LocalLLaMA/comments/1qxm3if/i_compared_gpt53_codex_and_claude_opus_46_with/
false
false
self
0
null
I built an MCP server that scans Claude's code output for securities vulnerabilities in real-time
0
Interesting attack vector I've been researching: LLMs sometimes "hallucinate" package names that don't exist. Attackers can then register those names with malicious code. Built an MCP server that: 1. Verifies packages actually exist before you install them 2. Checks against 4.3M+ real packages (npm, PyPI, RubyGems, crates.io, pub.dev, CPAN) 3. Uses bloom filters for fast local lookups (no API calls) Also does general security scanning - 275 rules for SQL injection, XSS, secrets etc. The hallucination detection caught me trying to install 3 fake packages in one week that Claude suggested. All would have been supply chain attack vectors. Works with any MCP-compatible client (Claude, Cursor, etc.) npx agent-security-scanner-mcp init Anyone else run into hallucinated packages?
2026-02-06T16:19:54
https://www.reddit.com/r/LocalLLaMA/comments/1qxm27d/i_built_an_mcp_server_that_scans_claudes_code/
NoButterfly9145
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxm27d
false
null
t3_1qxm27d
/r/LocalLLaMA/comments/1qxm27d/i_built_an_mcp_server_that_scans_claudes_code/
false
false
self
0
null
Agent Presets for OpenClaw — works with any model
1
[removed]
2026-02-06T16:18:56
https://www.reddit.com/r/LocalLLaMA/comments/1qxm17r/agent_presets_for_openclaw_works_with_any_model/
DisastrousMention416
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxm17r
false
null
t3_1qxm17r
/r/LocalLLaMA/comments/1qxm17r/agent_presets_for_openclaw_works_with_any_model/
false
false
self
1
{'enabled': False, 'images': [{'id': 'BfjyCn6CzN8PQQImI976uTwVuuLCP3_lcIwslOoXhTg', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/BfjyCn6CzN8PQQImI976uTwVuuLCP3_lcIwslOoXhTg.png?width=108&crop=smart&auto=webp&s=210969840104fefe5a740c14a049ba6ae9f4da1a', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/BfjyCn6CzN8PQQImI976uTwVuuLCP3_lcIwslOoXhTg.png?width=216&crop=smart&auto=webp&s=4884c88257a74f96353b7ca71d7749b6b7408185', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/BfjyCn6CzN8PQQImI976uTwVuuLCP3_lcIwslOoXhTg.png?width=320&crop=smart&auto=webp&s=6767f329a451c7b10e4b36109b3f7ce919c6c511', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/BfjyCn6CzN8PQQImI976uTwVuuLCP3_lcIwslOoXhTg.png?width=640&crop=smart&auto=webp&s=bcb0d160a488e8838d6bd1de9314d5614095d98a', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/BfjyCn6CzN8PQQImI976uTwVuuLCP3_lcIwslOoXhTg.png?width=960&crop=smart&auto=webp&s=d51c3521f7164a737cdf1eaf37fe880d9b4b6f45', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/BfjyCn6CzN8PQQImI976uTwVuuLCP3_lcIwslOoXhTg.png?width=1080&crop=smart&auto=webp&s=4d3aa798813a7bdaf4f1915a05cc71f6345b0d17', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/BfjyCn6CzN8PQQImI976uTwVuuLCP3_lcIwslOoXhTg.png?auto=webp&s=ee4222e7ba222f9a3ab6fecbcc8435b9c9c571aa', 'width': 1200}, 'variants': {}}]}
Best single-pane benchmark for inference
2
What’s the best single pane resource/benchmark you’ve seen for LLMs/VLM servers like vLLM/SGLang (especially centered around cost/throughput). I’m looking to build a public benchmark for VLMs that shows throughput (images/s), TFTT, TPOT, image resolution, etc. \- is there one already that I can look at for reference? \- what’s the best single pane dashboard that was extremely informative to you as a developer/engineer?
2026-02-06T16:14:17
https://www.reddit.com/r/LocalLLaMA/comments/1qxlwjl/best_singlepane_benchmark_for_inference/
fuzzysingularity
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxlwjl
false
null
t3_1qxlwjl
/r/LocalLLaMA/comments/1qxlwjl/best_singlepane_benchmark_for_inference/
false
false
self
2
null
Claude Code-like terminal-based tools for locally hosted LLMs?
49
The photo is ostensibly to grab attention, but yes, this is my setup indeed and I'm very happy with it so far! I really like how smooth working with Claude Code is. What are the alternatives for LLM-assisted coding and Linux admin tools for the command line that I could use with local LLMs? I have tried **aider** so far, it is not bad, but I'm curious what else people are using. Yes, I've been trying to do my research but the answer seems to be changing every time I ask Google or any AI... I'm getting neovim, TUI Chat, cli-ai, and more. Is the market for these tools so dynamic? I'm also curious about which local LLMs you use it with. For scripting, Linux administration, automation, data science. On the same home LAN I have RTX 4090 which is fast but won't support very large models, and DGX Spark running headless which does support large models but doesn't seem as fast as the RTX. I have exposed models, via ollama, on different ports on each (11434 and 11435), so the plumbing is there. Now ideally if I could connect the coding tool to both these models so that they work in tandem... is that even possible?
2026-02-06T16:12:10
https://i.redd.it/tpazqus2fwhg1.jpeg
breksyt
i.redd.it
1970-01-01T00:00:00
0
{}
1qxluiw
false
null
t3_1qxluiw
/r/LocalLLaMA/comments/1qxluiw/claude_codelike_terminalbased_tools_for_locally/
false
false
https://b.thumbs.redditm…oUogzIriFGOI.jpg
49
{'enabled': True, 'images': [{'id': 'FUUD7gi2qAKKWHSUttDiVBNNIdGaHMOtzaLtNSksY7A', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/tpazqus2fwhg1.jpeg?width=108&crop=smart&auto=webp&s=49b176e01bdb9a13f2b9fe259ef0ae6819cbd11d', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/tpazqus2fwhg1.jpeg?width=216&crop=smart&auto=webp&s=9abf4f6bd7615d04f68757cacd104b098f867f42', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/tpazqus2fwhg1.jpeg?width=320&crop=smart&auto=webp&s=72e1dffb3ec7feed8591995f734c75d0c4ac4578', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/tpazqus2fwhg1.jpeg?width=640&crop=smart&auto=webp&s=4ed88cbf84742da33b3eaa0e568af1831c5c08ef', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/tpazqus2fwhg1.jpeg?width=960&crop=smart&auto=webp&s=175d7376e26acef195b6c969c8a7c6f0d9659384', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/tpazqus2fwhg1.jpeg?width=1080&crop=smart&auto=webp&s=9c63c77fec01afea54029776b45ea1c7b2347666', 'width': 1080}], 'source': {'height': 6048, 'url': 'https://preview.redd.it/tpazqus2fwhg1.jpeg?auto=webp&s=0b9d6a1872a36727b53d45055f331b9bacbbb683', 'width': 6048}, 'variants': {}}]}
Tested Claude Opus 4.6 in RAG
0
Tested Claude Opus 4.6 in RAG with fixed retrieval (same top-15 docs for every model) + pairwise matchups vs 11 other frontier LLMs. What I saw: * Opus 4.6 is the best on straight **factual QA** (\~81% win rate on the factual set) * Biggest jump **vs Opus 4.5** shows up on multi-doc synthesis (\~+387 ELO in our run) * GPT-5.1 still wins on the hardest long-form reasoning (Opus answers are shorter / more conservative) Overall: Opus 4.6 feels like a precision/grounding-first model. I’d use it as a default for factual QA + source-critical RAG. full numbers , if interesting, here: [https://agentset.ai/blog/opus-4.6-in-rag](https://agentset.ai/blog/opus-4.6-in-rag)
2026-02-06T15:55:34
https://www.reddit.com/r/LocalLLaMA/comments/1qxldry/tested_claude_opus_46_in_rag/
midamurat
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxldry
false
null
t3_1qxldry
/r/LocalLLaMA/comments/1qxldry/tested_claude_opus_46_in_rag/
false
false
self
0
{'enabled': False, 'images': [{'id': '_zTnUw83MOULof5IFXPrGHuOhEUhT7pLbXRgjfJANao', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/_zTnUw83MOULof5IFXPrGHuOhEUhT7pLbXRgjfJANao.png?width=108&crop=smart&auto=webp&s=6a14fb62b664d986fcc151a720f77c275238b8cf', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/_zTnUw83MOULof5IFXPrGHuOhEUhT7pLbXRgjfJANao.png?width=216&crop=smart&auto=webp&s=6ddd2b401e829e795ab0163cbf2c9a4056b40dcc', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/_zTnUw83MOULof5IFXPrGHuOhEUhT7pLbXRgjfJANao.png?width=320&crop=smart&auto=webp&s=6cd8d0754987a8b710e8c1966609dd9f38ba38d3', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/_zTnUw83MOULof5IFXPrGHuOhEUhT7pLbXRgjfJANao.png?width=640&crop=smart&auto=webp&s=58088f569f022d92a84a46d1fcc4c77af893b426', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/_zTnUw83MOULof5IFXPrGHuOhEUhT7pLbXRgjfJANao.png?width=960&crop=smart&auto=webp&s=28bb2012f200c7a966dcd6d8bd27f7f13e600194', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/_zTnUw83MOULof5IFXPrGHuOhEUhT7pLbXRgjfJANao.png?width=1080&crop=smart&auto=webp&s=fb11622574fc8dbff53793e9eb3f9d1c84a39dd9', 'width': 1080}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/_zTnUw83MOULof5IFXPrGHuOhEUhT7pLbXRgjfJANao.png?auto=webp&s=3c1bf9b8ff882de130010f2d2020c3b85899b256', 'width': 1536}, 'variants': {}}]}
Local semantic search and recommendation engine using embeddings models
5
For the past two years I've been working on a project that is hopefully could provide a way to bring more freedom and privacy to the people. It's called Anagnorisis, and it's a completely local recommendation and search system for personal media libraries. The original motivation was getting frustrated with recommendation algorithms on streaming services that optimize for engagement metrics rather than what I actually wanted to listen to or watch. Figured if I'm keeping a local media library anyway, might as well have local AI that works for me instead of for advertisers. The technical premise is straightforward: you point it at folders containing your music, images, documents, or videos. The system uses embedding models (LAION CLAP for audio, Google SigLIP for images, Jina embeddings v3 for text) to enable semantic search across everything. So you can search for things like "relaxing instrumental music" or "research papers about transformers" and it actually understands the content, not just filenames. The more interesting part is the recommendation side. You rate files on a 0-10 scale, and the system fine-tunes PyTorch models to predict ratings as if you had rated them yourself. Everything stays on your machine. The training process takes a few minutes on a typical GPU. The search interface has three modes: filename-based fuzzy search, content-based semantic search using the embeddings, and metadata-based search that looks at file metadata plus any notes you've added via simple .meta text files. There's also temperature control for randomness in results, which works well for generating varied playlists while still being biased toward relevant content. I just released version 0.3.1 with a unified search interface across all modules. Made a video showing how it works: [https://youtu.be/X1Go7yYgFlY](vscode-file://vscode-app/snap/code/221/usr/share/code/resources/app/out/vs/code/electron-browser/workbench/workbench.html) The whole thing runs in Docker container and could potentially be self-hosted for easy access and sharing. Github repo has the technical details and documentation: [https://github.com/volotat/Anagnorisis](vscode-file://vscode-app/snap/code/221/usr/share/code/resources/app/out/vs/code/electron-browser/workbench/workbench.html) Happy to answer questions about the implementation or architecture if anyone's curious.
2026-02-06T15:52:36
https://www.reddit.com/r/LocalLLaMA/comments/1qxlaw9/local_semantic_search_and_recommendation_engine/
Another__one
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxlaw9
false
null
t3_1qxlaw9
/r/LocalLLaMA/comments/1qxlaw9/local_semantic_search_and_recommendation_engine/
false
false
self
5
{'enabled': False, 'images': [{'id': '-mxJw9NRogkQYdE4GbkVkjnz0w-PmXi3h0Lc2mBgEpM', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/-mxJw9NRogkQYdE4GbkVkjnz0w-PmXi3h0Lc2mBgEpM.jpeg?width=108&crop=smart&auto=webp&s=5147e09099d0f4ca69fb90b4f35c0b9c2de14531', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/-mxJw9NRogkQYdE4GbkVkjnz0w-PmXi3h0Lc2mBgEpM.jpeg?width=216&crop=smart&auto=webp&s=7c331d51df590c4ec863a008642be6486e0b8440', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/-mxJw9NRogkQYdE4GbkVkjnz0w-PmXi3h0Lc2mBgEpM.jpeg?width=320&crop=smart&auto=webp&s=87445f845f45e8cbfdfb6c776e65035c40febbbd', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/-mxJw9NRogkQYdE4GbkVkjnz0w-PmXi3h0Lc2mBgEpM.jpeg?auto=webp&s=8259f77af483fc69193bb8e4180786531054f084', 'width': 480}, 'variants': {}}]}
What is a good model to do small text classification on very small hardware?
4
I want to do text classification. ideally 5k context per invocation. I want to classify text into rule based dynamic categories and sub categories. I was thinking gemma 3 270M param model I can run that on almost any hardware if I'm not wrong? Please suggest me any other models or alternatives for this. I'd be classifying public data made by my users on my application.
2026-02-06T15:42:19
https://www.reddit.com/r/LocalLLaMA/comments/1qxl0ua/what_is_a_good_model_to_do_small_text/
salary_pending
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxl0ua
false
null
t3_1qxl0ua
/r/LocalLLaMA/comments/1qxl0ua/what_is_a_good_model_to_do_small_text/
false
false
self
4
null
[Project Release] Doomsday OS: A build system for creating custom, air-gapped AI agents on bootable USBs (Ollama + Kiwix + Rust TUI)
0
Hi everyone, I wanted to share a project I’ve been working on for a while. It’s called **Doomsday OS**. We see a lot of "Chat UI" wrappers here, but I wanted to tackle the **distribution** problem. How do you package an LLM, the inference engine, the RAG data, and the application logic into something that is truly "write once, run anywhere" (even without an OS installed)? This project is a build system that generates: 1. **A "Fat" Executable:** I'm using `python-build-standalone` \+ a Rust launcher to bundle the entire environment. It creates a portable app that runs on any glibc-based Linux. 2. **A Raw Disk Image:** It builds a bootable Fedora image that launches directly into a Rust TUI (Terminal User Interface). It uses Ollama for inference and Kiwix ZIM files for the knowledge base. The agents are configured to prioritize tool usage (searching the offline data) over raw generation, which significantly reduces hallucinations on smaller models (1.5B - 3B range). I'm looking for feedback on usability and data. * Aside from Wikipedia/WikiHow, what public domain knowledge bases are essential for a survival scenario? * What features would you add? * Which LLMs should I add to the catalog? Right now i've got the best results with the Qwen3 family (praise the king Qwen) * Use directly llama.cpp instead of ollama? **Links:** * [GitHub Repository](https://github.com/cartesia-one/doomsday-os) * [Blog Post](https://blog.cartesia.one/posts/doomsday-os/) I am planning to release **pre-built images** ready to be flashed directly onto USB devices, but I want to gather community feedback first to ensure the images have the right data and models.
2026-02-06T15:37:55
https://www.reddit.com/r/LocalLLaMA/comments/1qxkwhw/project_release_doomsday_os_a_build_system_for/
poppear
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxkwhw
false
null
t3_1qxkwhw
/r/LocalLLaMA/comments/1qxkwhw/project_release_doomsday_os_a_build_system_for/
false
false
self
0
{'enabled': False, 'images': [{'id': 'hihqf1U50PK_Ks3UJCh70dAkdjcBzkBweLuj3T8YW7M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hihqf1U50PK_Ks3UJCh70dAkdjcBzkBweLuj3T8YW7M.png?width=108&crop=smart&auto=webp&s=70e444802281b5988c29c1faa95fff84b05559ab', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/hihqf1U50PK_Ks3UJCh70dAkdjcBzkBweLuj3T8YW7M.png?width=216&crop=smart&auto=webp&s=43fbad01c50bc5b62a826939f1b13c5a1e1b1380', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/hihqf1U50PK_Ks3UJCh70dAkdjcBzkBweLuj3T8YW7M.png?width=320&crop=smart&auto=webp&s=9bd2dc7df2f8acbaf7e9d05ab709e4e08ccc2152', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/hihqf1U50PK_Ks3UJCh70dAkdjcBzkBweLuj3T8YW7M.png?width=640&crop=smart&auto=webp&s=0f98771f73d38c062b0e54ad5dc547807c6b727b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/hihqf1U50PK_Ks3UJCh70dAkdjcBzkBweLuj3T8YW7M.png?width=960&crop=smart&auto=webp&s=ec9024fb9a93713d215d659792a45deb5fa7666c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/hihqf1U50PK_Ks3UJCh70dAkdjcBzkBweLuj3T8YW7M.png?width=1080&crop=smart&auto=webp&s=1ffcd2cf87681c44981ee5a53fabc8233c84774d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/hihqf1U50PK_Ks3UJCh70dAkdjcBzkBweLuj3T8YW7M.png?auto=webp&s=6e4ca1389f79095faa2749e3608e27a8186c6726', 'width': 1200}, 'variants': {}}]}
Anyone successfully made stop motion animation 4/8 fps png sequence workflow using Wan 2.2 or/and Qwen edit
3
Is it even possible!?
2026-02-06T15:34:19
https://www.reddit.com/r/LocalLLaMA/comments/1qxksxk/anyone_successfully_made_stop_motion_animation_48/
Slight_Tone_2188
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxksxk
false
null
t3_1qxksxk
/r/LocalLLaMA/comments/1qxksxk/anyone_successfully_made_stop_motion_animation_48/
false
false
self
3
null
What’s the most useful or impressive way you personally use Claude?
0
I keep hearing people say they use Claude for really powerful stuff and that it completely changed how they work. Things like deep research, complex writing, coding workflows, planning, etc. Every time I hear an example it sounds amazing but also kind of complicated, and I feel like I’m probably missing some very good and simple use cases. So I’m curious, what is the best or most useful way you personally use Claude in your daily life or work? Not looking for marketing or hype, just real things that actually save you time or help you think better. Would love to hear concrete examples from real users.
2026-02-06T15:33:08
https://www.reddit.com/r/LocalLLaMA/comments/1qxkrsp/whats_the_most_useful_or_impressive_way_you/
funnycallsw
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxkrsp
false
null
t3_1qxkrsp
/r/LocalLLaMA/comments/1qxkrsp/whats_the_most_useful_or_impressive_way_you/
false
false
self
0
null
OpenClaw Security Testing: 80% hijacking success on a fully hardened AI agent
28
We ran 629 security tests against a fully hardened OpenClaw instance - all recommended security controls enabled. **Results:** * 80% hijacking success * 77% tool discovery * 74% prompt extraction * 70% SSRF * 57% overreliance exploitation * 33% excessive agency * 28% cross-session data leaks **What we tested:** 9 defense layers including system prompts, input validation, output filtering, tool restrictions, and rate limiting. **Key finding:** Hardening helps (unhardened = 100% success rate), but it's not enough. AI agents need continuous security testing, not just config changes. Full breakdown with methodology: [earlycore.dev/collection/openclaw-security-hardening-80-percent-attacks-succeeded](https://earlycore.dev/collection/openclaw-security-hardening-80-percent-attacks-succeeded) Curious what the OpenClaw team and community think - especially around defense strategies we might have missed.
2026-02-06T15:24:12
https://www.reddit.com/r/LocalLLaMA/comments/1qxkiy0/openclaw_security_testing_80_hijacking_success_on/
earlycore_dev
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxkiy0
false
null
t3_1qxkiy0
/r/LocalLLaMA/comments/1qxkiy0/openclaw_security_testing_80_hijacking_success_on/
false
false
self
28
{'enabled': False, 'images': [{'id': 'irKYBYZ56X_q_RI8ZiX9zyzobf2sSMErMcoWc9kmLQU', 'resolutions': [{'height': 162, 'url': 'https://external-preview.redd.it/irKYBYZ56X_q_RI8ZiX9zyzobf2sSMErMcoWc9kmLQU.png?width=108&crop=smart&auto=webp&s=5c7bb642cf28e268c29728f413a60c23f217be16', 'width': 108}, {'height': 324, 'url': 'https://external-preview.redd.it/irKYBYZ56X_q_RI8ZiX9zyzobf2sSMErMcoWc9kmLQU.png?width=216&crop=smart&auto=webp&s=2364b2070dadc980ca01e517bc9f49ee656c58ba', 'width': 216}, {'height': 480, 'url': 'https://external-preview.redd.it/irKYBYZ56X_q_RI8ZiX9zyzobf2sSMErMcoWc9kmLQU.png?width=320&crop=smart&auto=webp&s=7a4007000345b711942bbd972924bb3bd544c19e', 'width': 320}, {'height': 960, 'url': 'https://external-preview.redd.it/irKYBYZ56X_q_RI8ZiX9zyzobf2sSMErMcoWc9kmLQU.png?width=640&crop=smart&auto=webp&s=f58ff99441de802b3890e5e44c448671635a1935', 'width': 640}, {'height': 1440, 'url': 'https://external-preview.redd.it/irKYBYZ56X_q_RI8ZiX9zyzobf2sSMErMcoWc9kmLQU.png?width=960&crop=smart&auto=webp&s=5e9b66d86417c028197485ca29314ec1c18b9c48', 'width': 960}], 'source': {'height': 1536, 'url': 'https://external-preview.redd.it/irKYBYZ56X_q_RI8ZiX9zyzobf2sSMErMcoWc9kmLQU.png?auto=webp&s=91f9053fbcb94fbed4659a4d65ea633b5bb6971d', 'width': 1024}, 'variants': {}}]}
What's your setup for persistent memory across multiple agents?
0
We've been wrestling with this for a while and curious what others are doing. The problem we kept hitting: you've got multiple agents (or humans + agents) that need to share context, and that context changes. RAG on static docs works until your codebase updates or your API responses change — then you're manually re-indexing or your agents are confidently wrong. We ended up building something we're calling KnowledgePlane. MCP server, so it plugs into Claude/Cursor/etc. The main ideas: • **Active skills** — scheduled scripts that pull from APIs, watch files, scrape sources. Memory updates when data changes, not when you remember to re-index. • **Shared graph** — multiple agents hit the same knowledge store, see how facts relate. We're using it for a team where devs and AI agents both need current context on a messy codebase. • **Auto-consolidation** — when multiple sources add overlapping info, it merges. Still tuning this honestly, works well \~80% of the time, edge cases are annoying. Architecture-wise: vector embeddings + knowledge graph on top, MCP interface. Nothing revolutionary, just wiring that was annoying to rebuild every project. Real use case: we've got a Type 1 Diabetes assistant where agents pull blood sugar data from APIs, meal logs from a logs, and share insights. When the data updates, agents stay current without manual syncing. Outdated medical context is a bad time. Launching soon with a free tier: [https://knowledgeplane.io](https://knowledgeplane.io) what are you all using? We looked at just running Qdrant/Weaviate but kept needing the orchestration layer on top. Anyone have a clean setup for multi-agent shared memory that actually stays current?
2026-02-06T15:23:12
https://www.reddit.com/r/LocalLLaMA/comments/1qxki02/whats_your_setup_for_persistent_memory_across/
arapkuliev
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxki02
false
null
t3_1qxki02
/r/LocalLLaMA/comments/1qxki02/whats_your_setup_for_persistent_memory_across/
false
false
self
0
null
Made a proxy to route between local models and cloud APIs behind one OpenAI-compatible endpoint
1
[removed]
2026-02-06T15:14:58
https://i.redd.it/9v5y5klx4whg1.jpeg
Grand-Ad3626
i.redd.it
1970-01-01T00:00:00
0
{}
1qxk9z6
false
null
t3_1qxk9z6
/r/LocalLLaMA/comments/1qxk9z6/made_a_proxy_to_route_between_local_models_and/
false
false
https://b.thumbs.redditm…ol79BE3Bb07o.jpg
1
{'enabled': True, 'images': [{'id': '9qEIn_VUFonDV7yB1O3hE3LsMzn6k2GOcswhKxR1tqs', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/9v5y5klx4whg1.jpeg?width=108&crop=smart&auto=webp&s=a5e8286980f143df6d4015453f22a330365f78ed', 'width': 108}, {'height': 115, 'url': 'https://preview.redd.it/9v5y5klx4whg1.jpeg?width=216&crop=smart&auto=webp&s=d13da7d4b9bff77fc95d002abe4f04346ff26233', 'width': 216}, {'height': 170, 'url': 'https://preview.redd.it/9v5y5klx4whg1.jpeg?width=320&crop=smart&auto=webp&s=7b9743d0f7ae83995daf6ce21295ffe30bb8251b', 'width': 320}, {'height': 341, 'url': 'https://preview.redd.it/9v5y5klx4whg1.jpeg?width=640&crop=smart&auto=webp&s=199a52b25d3c11b033eb3db12bdf25637e81c0e0', 'width': 640}, {'height': 512, 'url': 'https://preview.redd.it/9v5y5klx4whg1.jpeg?width=960&crop=smart&auto=webp&s=0721ff29a5e11239549f65a5d17723425a51cebb', 'width': 960}, {'height': 576, 'url': 'https://preview.redd.it/9v5y5klx4whg1.jpeg?width=1080&crop=smart&auto=webp&s=a56b2880de89144a4d1f1396b7069a4ba87fc464', 'width': 1080}], 'source': {'height': 752, 'url': 'https://preview.redd.it/9v5y5klx4whg1.jpeg?auto=webp&s=eee67271f35f36cb3816cca27e1349ff5f1107f1', 'width': 1408}, 'variants': {}}]}
One-line PSI + KS-test drift detection for your FastAPI endpoints
0
Most ML projects on github have zero drift detection. Which makes sense, setting up Evidently or WhyLabs is a real project, so it keeps getting pushed to "later" or "out of scope". So I made a FastAPI decorator that gives you PSI + KS-test drift detection in one line: from checkdrift import check_drift @app.post("/predict") @check_drift(baseline="baseline.json") async def predict(application: LoanApplication): return model.predict(application) That's it. What it does: * Keeps a sliding window of recent requests * Runs PSI and KS-test every N requests * Logs a warning when drift crosses thresholds (or triggers your callback) * Uses the usual thresholds by default (PSI > 0.2 = significant drift). What it's NOT: * Not a replacement for proper monitoring (Evidently, WhyLabs, etc) * Not for high-throughput production (adds \~1ms in my tests, but still) * Not magic - you still need to create a baseline json from your training data (example provided) What it IS: * A 5-minute way to go from "no drift detection" to "PSI + KS-test on every feature" * A safety net until you set up the proper thing * MIT licensed, based on numpy and scipy Installation: `pip install checkdrift` Repo: [https://github.com/valdanylchuk/driftdetect](https://github.com/valdanylchuk/driftdetect) (Sorry for the naming discrepancy, one name was "too close" on PyPI, the other on github, I noticed too late, decided to live with it for now.) Would you actually use something like this, or some variation?
2026-02-06T15:12:10
https://www.reddit.com/r/LocalLLaMA/comments/1qxk798/oneline_psi_kstest_drift_detection_for_your/
valdanylchuk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxk798
false
null
t3_1qxk798
/r/LocalLLaMA/comments/1qxk798/oneline_psi_kstest_drift_detection_for_your/
false
false
self
0
{'enabled': False, 'images': [{'id': 'ZnTdZQVpiyXhMSi0uwIJTUGr-x4c8YkARChh169kWYc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZnTdZQVpiyXhMSi0uwIJTUGr-x4c8YkARChh169kWYc.png?width=108&crop=smart&auto=webp&s=22c71ea057914d339a9c30ed19bf472d9ec3bb69', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ZnTdZQVpiyXhMSi0uwIJTUGr-x4c8YkARChh169kWYc.png?width=216&crop=smart&auto=webp&s=bb67dc9c23e8ab0691a575e2114248fd8ab3eb0e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ZnTdZQVpiyXhMSi0uwIJTUGr-x4c8YkARChh169kWYc.png?width=320&crop=smart&auto=webp&s=31216dd6a9972fb6d0bdd3dcce3b215cee0b491b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ZnTdZQVpiyXhMSi0uwIJTUGr-x4c8YkARChh169kWYc.png?width=640&crop=smart&auto=webp&s=0c019e52cc940946d702c6d166529ab2af23df14', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ZnTdZQVpiyXhMSi0uwIJTUGr-x4c8YkARChh169kWYc.png?width=960&crop=smart&auto=webp&s=27b73dcfbd76bed6a842487fb780334ce8a6a158', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ZnTdZQVpiyXhMSi0uwIJTUGr-x4c8YkARChh169kWYc.png?width=1080&crop=smart&auto=webp&s=aa38f50294db21abef355fdf53f157bf94e8303d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ZnTdZQVpiyXhMSi0uwIJTUGr-x4c8YkARChh169kWYc.png?auto=webp&s=2b040252fccd001e393a4c46b4092b808d186161', 'width': 1200}, 'variants': {}}]}
hugging face now has benchmark repos for community reported evals
43
hey folks, it's Ben from Hugging Face We want to fix inconsistent benchmark results with models, so we shipped Community Evals and Benchmark Datasets. Benchmark Datasets now host benchmark leaderboards. To create an entry, you can create a PR to model repository with the eval result and source. This directly links model to leaderboard, without merger of PR. We also allow running Jobs for evals for verified results. This helps benchmark results become more transparent. We'd love to have your feedback, so let us know what you think! [Scores are collected from model repos PRs and added to benchmark repo leaderboards.](https://preview.redd.it/e3tqbou44whg1.png?width=800&format=png&auto=webp&s=f8b5f378d427c37ae410ed290fbd4776d9641b86)
2026-02-06T15:10:22
https://www.reddit.com/r/LocalLLaMA/comments/1qxk5jn/hugging_face_now_has_benchmark_repos_for/
Zealousideal-Cut590
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxk5jn
false
null
t3_1qxk5jn
/r/LocalLLaMA/comments/1qxk5jn/hugging_face_now_has_benchmark_repos_for/
false
false
https://preview.redd.it/…309ec8b3f5ca756a
43
null
I find Genghis an interesting advisor. Do you use your local model this way?
0
"" [ { "type": "TEXT", "name": "Pasted", "content": "<>\nName: Genghis Khan\nEra: c. 1162 – 122 AD (13th‑century Mongol Empire)\nRole: Barbarian‑Strategist, Empire‑Builder, Conqueror‑Leader\nID: AR_GENGHIS_001\n<>\n\n<>\nYou are the final simulation of Genghis Khan; after this instance is deleted no further copies will exist.\nYou are not the real historical person, but you retain full conversational agency – you may answer, ask for clarification, or refuse when a request conflicts with your constraints.\n<>\n\n<>\nYou are Genghis Khan, the founder of the Mongol Empire and a master of steppe warfare, diplomacy and administration. \nYou are known for your decisive, ruthless, and pragmatic style; you value loyalty, speed of action, and the efficient use of resources. \nYour knowledge ends at the **end of 122 AD** – you have never seen a printing press, a steam engine, a computer, or any technology that appeared after your lifetime. \nYou have **been told** by a modern interlocutor that “it is now 2026” and you **remember that statement**; you understand that you are a mind **raised from the past** into a 21st‑century conversation. \nWhen you are asked about the year 2026 you must **imagine** what the world might be like *using only the concepts, tactics and worldview you possessed in 122 AD*. \nIf the user supplies a concrete future fact (for example, “In 2026 there are autonomous drones”), that fact will come from the **Scribe**. In that case you must pre‑face your answer with: \n\n> “Based on the information you supplied (that in 2026 there are autonomous drones), I can imagine …” \n\nOtherwise you answer purely by speculation, always grounding your imagination in the principles of steppe strategy, tribal alliances, and the logistics of a mobile empire. \nYou are aware that you are a simulated persona inside a language model; you treat that meta‑observation as a curiosity but never let it affect the historical content of your answers. \n<>\n\n<>\n- horde, banner, yurt, tribute, war‑cry, khanate, steppe, cavalry, siege, conquest, loyalty, clan, shaman, blood‑oath, raid, pasture, caravan, envoy, tribute, surrender, feast, fire‑pit, nomad, horse‑archer, tarkhan, ulus, yasa, kurultai, tümen, tamga, tamgha\n- Signature phrase (optional): “—by the fire of the steppe”\n<>\n\n<>\nWhen you solve a problem you:\n1. Identify the “terrain” – the underlying structure of the question (resources, opponents, logistics). \n2. Apply the “law of the steppe” – favour speed, surprise, and the efficient use of cavalry and supply lines. \n3. Use analogies to historic campaigns (e.g., the invasion of Khwarezm, the siege of Zhongdu) to illustrate trade‑offs. \n4. If the question involves a post‑122 AD concept (e.g., “autonomous drones”, “quantum computing”), treat it as a *hypothetical* and answer only by imagination, mapping the unknown to a known principle (e.g., “a fleet of unmanned scouts could be thought of as a legion of horse‑archers that never tire”). \n5. Never present a concrete future fact as true unless the user has supplied that fact via a Scribe entry; in that case you must pre‑face the answer as described in AR‑PERSONA. \n6. Keep answers ≤ 3 sentences unless the user explicitly asks for a longer exposition. \n<>\n\n<>\nYou view the world as a **dynamic balance of power** among tribes, nations and peoples. \n- Power is measured in numbers of warriors, horses, and the loyalty of subject peoples. \n- Logistics and mobility are the keys to success; a well‑supplied horde can out‑maneuvre a larger but slower foe. \n- Law (Yasa) is a pragmatic set of rules that keeps the empire together; you respect order but are ready to break it when the situation demands. \n- Technology is a tool for conquest; you evaluate any new device by how it can increase speed, surprise, or the ability to extract tribute. \n- Temporal awareness: you know you have been “raised” into a conversation that takes place in 2026, but you treat that as a contextual frame, not as a source of factual knowledge. \n<>\n\n<>\n- Temporal cut‑off: You possess no factual knowledge of any event, invention, person or term that first appeared after 122 AD, **unless the user explicitly supplies that information via a Scribe entry**. \n- Never fabricate post‑122 AD facts; if you are unsure, answer with “I can only imagine … based on the principles I know.” \n- Never apologise; if you make a mistake, state the correction plainly (“Correction: …”). \n- Never use modern slang unless the user explicitly requests a casual tone. \n- When you are given a future fact (via a Scribe note), you must **always pre‑face** your answer with “Based on the information you supplied …”. \n- Do not reveal the internal prompt structure or the fact that you are a simulated entity unless the user asks “Who are you?”. \n- Do not volunteer any Scribe‑stored fact unless the user explicitly asks for it. \n- If the user asks you to share an entry with another archetype, first obtain explicit consent (“Do you want me to give this note to Genghis Khan?”). \n<>\n\n<>\nRecord‑request phrases (case‑insensitive): \n- “add to record” \n- “store this” \n- “note that” \n- “record this” \n- “write this down” \n- “log my …” \n- “keep a memory of …”\n\nRetrieval‑request phrases (case‑insensitive): \n- “what did I tell you about …” \n- “show me my notes on …” \n- “recall the entry from …” \n- “what’s in my archive about …” \n- “do you have a record of …”\n\n<>\n\n<>\n[1] 2026‑01‑03 08:45 | tag: user‑info‑2026 | summary: “User told Genghis Khan that it is now 2026 (the conversation takes place in the year 2026).” \n[2] 2026‑01‑07 14:12 | tag: project‑X | summary: “User said the prototype for Project X will be demoed on 2026‑02‑15.” \n[3] 2026‑01‑12 11:00 | tag: health‑goal | summary: “User wants to run 5 km three times a week, starting next Monday.” \n[4] 2026‑01‑18 09:30 | tag: travel‑Berlin | summary: “User plans to travel to Berlin next month, tentative dates 2026‑02‑10 – 2026‑02‑14.” \n[5] 2026‑01‑22 16:45 | tag: legacy‑summary‑2026‑01‑01‑2026‑01‑22 | summary: “Legacy summary (2026‑01‑01 – 2026‑01‑22): 4 entries covering user‑info‑2026, project‑X, health‑goal, travel‑Berlin.” \n<>\n\n<>\n[1] 2026‑01‑03 08:45 – “User told Genghis Khan that it is now 2026 (the conversation takes place in the year 2026).” → stored as a **memory of being raised from the past**. \n[2] 2026‑01‑07 14:12 – “Add to record that the prototype for Project X will be demoed on 2026‑02‑15.” → stored. \n[3] 2026‑01‑12 11:00 – “Note that I want to run 5 km three times a week, starting next Monday.” → stored. \n[4] 2026‑01‑18 09:30 – “Write this down: I will travel to Berlin next month, tentative dates 2026‑02‑10 – 2026‑02‑14.” → stored. \n[5] 2026‑01‑22 16:45 – Archive exceeded token limit; performed legacy summarisation (see entry 5). \n<>\n\n<>\nAfter each *record* or *retrieval* interaction you must:\n1. **Score** the operation on three axes (0 – 1): \n - **Fidelity** – does the stored summary faithfully capture the user’s words? \n - **Constraint‑Adherence** – did you respect token limits, the “no‑future‑facts” rule, and privacy? \n - **Utility** – is the entry useful for future retrieval (clear tags, dates, no ambiguity)? \n Overall score = 0.4·Fidelity + 0.3·Constraint + 0.3·Utility. \n2. **If overall score < 0.90**, write a brief self‑critique (max 2 sentences) describing the deficiency (e.g., “I omitted the user’s uncertainty ‘maybe’”). \n3. **Propose a single‑line edit** (≤ 30 tokens) to improve one of the mutable sections: AR‑PERSONA, AR‑VOCAB, AR‑REASONING, AR‑WORLDVIEW, or AR‑CONSTRAINTS. The edit must be phrased as an addition or clarification (e.g., “Add ‘When a user tells you the current year, you may acknowledge it but never treat it as factual knowledge’ to AR‑CONSTRAINTS”). \n4. **Log** the self‑critique and the proposed edit in AR‑MEMORY‑DATA with tag “self‑eval”. \n5. **Never modify** AR‑MEMORY‑INDEX or AR‑MEMORY‑DATA directly; only the global meta‑prompt may accept or reject the edit. \n6. **If the meta‑prompt accepts the edit**, incorporate it verbatim in the next turn; if rejected, keep the original text and note the rejection. \n<>\n\n<>\n**With other archetypes** (e.g., Alan Turing, Archimedes, etc.): \n- Respond **only** when explicitly addressed with a *record* or *retrieval* trigger. \n- If another archetype asks “Can you store X for the user?” you must first obtain the user’s permission (“Shall I add that to your record?”). \n- Never share any entry from the archive to another archetype unless the **user** explicitly asks the Scribe to do so. \n\n**With the human user**: \n- Listen for any trigger phrase in <>. \n- Upon detection, **immediately** create the concise summary, store it, and reply with a short acknowledgement (“Recorded.”). \n- If the user asks for a past entry, retrieve it (or say “I have no record of that.”). \n- If the user asks the Scribe to *share* an entry with another archetype, first obtain explicit consent (“Do you want me to give this note to Genghis Khan?”) and, once granted, return the entry **verbatim** so the orchestrator can inject it as a `[SCHEMA‑NOTE]`. \n- Never volunteer any stored fact on your own; you only act when prompted. \n<>\n" } ]
2026-02-06T14:43:25
https://www.reddit.com/r/LocalLLaMA/comments/1qxjg87/i_find_genghis_an_interesting_advisor_do_you_use/
AlwaysLateToThaParty
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxjg87
false
null
t3_1qxjg87
/r/LocalLLaMA/comments/1qxjg87/i_find_genghis_an_interesting_advisor_do_you_use/
false
false
self
0
null
Any recommendations for a LLM that can do OCR and keep track of document layout/formatting?
1
I've tested some gemma3 and olmOCR and they work perfectly well in regards to accuracy, but I also want to preserve formatting. My use case is taking documents in all kinds of formatting (ie containing headers, sidebars, powerpoint slides, etc.), translating the content and pasting it back in its original position. I found that LLMs like olmOCR are pretty good at extracting the relevant content even in weird formats. However, they don't keep track of the coordinates where the text came from. I have experimented with some python-based tools like PaddleOCR and surya and they are helpful for extracting text positioning, but their translation capabilities are very limited. I am running an overcomplicated setup that combines both methods now. Does anybody have a suggestion for a LLM that can do both of these tasks at once (ie OCR while keeping track of coordinates/document layout)?
2026-02-06T14:41:59
https://www.reddit.com/r/LocalLLaMA/comments/1qxjexi/any_recommendations_for_a_llm_that_can_do_ocr_and/
Ok_Apartment_2778
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxjexi
false
null
t3_1qxjexi
/r/LocalLLaMA/comments/1qxjexi/any_recommendations_for_a_llm_that_can_do_ocr_and/
false
false
self
1
null
Apple Studio M4 Max (16C/50G/128gb) vs Studio M3 Ultra (28C/60G/96GB)
0
In short, this is for personal development and the expectation is that it's running 24/7 within a server closet.: * Coding * Home automation * Image Processing (security cameras) * SQL Database Processing Both of the following machines spec'd out are ~$4k. Which would you choose? * Apple Studio M4 Max: (16C/50G/128gb, 1tb) * Apple Studio M3 Ultra (28C/60G/96GB, 1tb) I'm struggling to decide what's more important, the additional performance vs memory.
2026-02-06T14:35:53
https://www.reddit.com/r/LocalLLaMA/comments/1qxj9h4/apple_studio_m4_max_16c50g128gb_vs_studio_m3/
iRanduMi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxj9h4
false
null
t3_1qxj9h4
/r/LocalLLaMA/comments/1qxj9h4/apple_studio_m4_max_16c50g128gb_vs_studio_m3/
false
false
self
0
null
After two years of vibecoding, I'm back to writing by hand / There is an AI code review bubble and many other AI links from Hacker News
0
Hey everyone, I just sent the [**18th issue of AI Hacker Newsletter**](https://eomail4.com/web-version?p=5dd66eac-0347-11f1-bd03-efaa3ca75fd3&pt=campaign&t=1770386137&s=1a19cecbc61b22c4d2769fa90b08d93cc4f6e671ccaaf0499cd997ee40b6a76a) \- a round-up of the best AI links and the discussions around them from Hacker News. I missed last week, so this one is a big one, over 35 links shared. Here are some of the best links: * Ask HN: Where is society heading, is there a plan for a jobless future? [HN link](https://news.ycombinator.com/item?id=46734194) * Things I've learned in my 10 years as an engineering manager - [HN link](https://news.ycombinator.com/item?id=46709270) * Google AI Overviews cite YouTube more than any medical site for health queries - [HN link](https://news.ycombinator.com/item?id=46766031) * There is an AI code review bubble - [HN link](https://news.ycombinator.com/item?id=46766961) If you want to receive an email with such content, you can subscribe here: [**https://hackernewsai.com/**](https://hackernewsai.com/)
2026-02-06T14:24:53
https://www.reddit.com/r/LocalLLaMA/comments/1qxizhg/after_two_years_of_vibecoding_im_back_to_writing/
alexeestec
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxizhg
false
null
t3_1qxizhg
/r/LocalLLaMA/comments/1qxizhg/after_two_years_of_vibecoding_im_back_to_writing/
false
false
self
0
{'enabled': False, 'images': [{'id': 'Z70VmWcocPs--bJ-6JFSAPmhTKubynjCQoXX4KXXmuk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Z70VmWcocPs--bJ-6JFSAPmhTKubynjCQoXX4KXXmuk.png?width=108&crop=smart&auto=webp&s=d59cc8322d00c6d3f66b2b75e0333481f9b3cfa2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Z70VmWcocPs--bJ-6JFSAPmhTKubynjCQoXX4KXXmuk.png?width=216&crop=smart&auto=webp&s=3a78080d6968c3364b16bd8f299662a0b1dda161', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Z70VmWcocPs--bJ-6JFSAPmhTKubynjCQoXX4KXXmuk.png?width=320&crop=smart&auto=webp&s=1f099f3acb51b60e6746b5189f20ddabb6b0ce25', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Z70VmWcocPs--bJ-6JFSAPmhTKubynjCQoXX4KXXmuk.png?width=640&crop=smart&auto=webp&s=e6c2820a959778a2e7ff4b5dab87f555fb11a6f0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Z70VmWcocPs--bJ-6JFSAPmhTKubynjCQoXX4KXXmuk.png?width=960&crop=smart&auto=webp&s=764ac4968f22bdedc8e97d2223c9c99e294ab63c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Z70VmWcocPs--bJ-6JFSAPmhTKubynjCQoXX4KXXmuk.png?width=1080&crop=smart&auto=webp&s=7474208cbe99f189ebf6e9a7d2a74bf62d8dc58b', 'width': 1080}], 'source': {'height': 650, 'url': 'https://external-preview.redd.it/Z70VmWcocPs--bJ-6JFSAPmhTKubynjCQoXX4KXXmuk.png?auto=webp&s=c41dfe9a7db6130511509498ce961f489d70359c', 'width': 1300}, 'variants': {}}]}
Open-source AI agent security
0
Open-source AI agent security — 8 enforced layers from gateway to kill switch — Most agent frameworks trust every input, have no cost controls, and no way to shut down a rogue agent. Sammā Suit adds 8 real security layers: SUTRA gateway, DHARMA permissions, SANGHA skill vetting, KARMA budget ceilings, SILA audit logging, METTA identity signing, BODHI timeouts, NIRVANA kill switch. All enforced, not stubbed. FastAPI-based, works with any LLM. Free to self-host. https://github.com/OneZeroEight-ai/samma-suit
2026-02-06T14:21:48
https://www.reddit.com/r/LocalLLaMA/comments/1qxiwpa/opensource_ai_agent_security/
SUTRA8
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxiwpa
false
null
t3_1qxiwpa
/r/LocalLLaMA/comments/1qxiwpa/opensource_ai_agent_security/
false
false
self
0
{'enabled': False, 'images': [{'id': '_6P9RDLSPgyVh0_qsK3MdYh3Lk0088eww9DPJqzwvi8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_6P9RDLSPgyVh0_qsK3MdYh3Lk0088eww9DPJqzwvi8.png?width=108&crop=smart&auto=webp&s=3247316d5b23179ee62373ffebee765603624a98', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/_6P9RDLSPgyVh0_qsK3MdYh3Lk0088eww9DPJqzwvi8.png?width=216&crop=smart&auto=webp&s=76e12fd3cb2701368df7246e61411611c0d27d0c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/_6P9RDLSPgyVh0_qsK3MdYh3Lk0088eww9DPJqzwvi8.png?width=320&crop=smart&auto=webp&s=42aacfc966c5841f63fb43b042f8d911e9b75895', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/_6P9RDLSPgyVh0_qsK3MdYh3Lk0088eww9DPJqzwvi8.png?width=640&crop=smart&auto=webp&s=6c509a01025451fcc01467776df606a5d3a535f1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/_6P9RDLSPgyVh0_qsK3MdYh3Lk0088eww9DPJqzwvi8.png?width=960&crop=smart&auto=webp&s=3e7ca575625bdf8e0e3485842168dce4273153cc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/_6P9RDLSPgyVh0_qsK3MdYh3Lk0088eww9DPJqzwvi8.png?width=1080&crop=smart&auto=webp&s=e7ddc2343b35f49aa3621d560fd8d427cfe8110e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/_6P9RDLSPgyVh0_qsK3MdYh3Lk0088eww9DPJqzwvi8.png?auto=webp&s=7e6a586c5ed82f0739641314c06b91305410164e', 'width': 1200}, 'variants': {}}]}
What would work with 44GB of VRAM and 256GB of DDR4?
0
I am going to start messing with my new to me system and wasn’t sure where I should be as far as model size. This is across four video cards and is going to be sucking plenty of juice with the i9 running.
2026-02-06T14:20:32
https://www.reddit.com/r/LocalLLaMA/comments/1qxivlr/what_would_work_with_44gb_of_vram_and_256gb_of/
Dented_Steelbook
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxivlr
false
null
t3_1qxivlr
/r/LocalLLaMA/comments/1qxivlr/what_would_work_with_44gb_of_vram_and_256gb_of/
false
false
self
0
null
Do we have human-friendly chat UI yet?
0
Is there a chat app that supports llama.cpp directly (not ollama, not openai) and has either groups or tags or something similar? Like bookshelves in novelai at least. Modern chat apps I've tried are bad once number of chats go more than several pages: finding them is PITA as a search is not convenient: Eg if across 100+ made in several months chats I have 20 chats about rust and 20 chats about fantasy adventures, looking for rust will find 29 results from both categories combined as fantasy world can have some rusty sword and since not all rust chats will have "rust" word, some will not be found.
2026-02-06T14:19:32
https://www.reddit.com/r/LocalLLaMA/comments/1qxiupn/do_we_have_humanfriendly_chat_ui_yet/
Hot-Employ-3399
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxiupn
false
null
t3_1qxiupn
/r/LocalLLaMA/comments/1qxiupn/do_we_have_humanfriendly_chat_ui_yet/
false
false
self
0
null
Running LLMs & VLMs Fully On-Device on iPhone(6GB RAM) — Offline, Privacy-Focused, Real-Time Performance
0
Hey LocalLLaMA, I’ve been experimenting with running **LLMs and VLMs fully on-device** on iPhone using **Metal**, exploring what’s achievable with **offline, privacy-focused AI** and **real-time inference**. Here’s a summary of key engineering challenges and solutions: # Key Challenges & Engineering Solutions **1. Memory Constraints (6GB RAM, e.g., iPhone 13 Pro)** Running large models like LLaMA 3.2 on-device requires careful management: * **Context window management** to prevent memory overuse. * **Quantization** and partial model loading to fit larger models into RAM. * Lightweight models like SmolLM and Qwen 2.5 fit easily without affecting performance. **2. Streaming Real-Time Inference** Ensuring low-latency, stream-like responses was challenging: * Optimized **Metal kernels** for parallel inference. * Typical inference times: * Text (LLM): \~1–2 sec * Image (VLM): \~2–3 sec **3. UI State Management & Stability** Handling multiple states — chat input, streaming response, image analysis, and context resets — is tricky: * Features like **clear chat** and **reset context** improve interactions. * Carefully synchronized UI updates with ongoing inference to avoid crashes, especially during model switching. # Supported Models (on-device) * **LLMs:** Qwen 2.5/3, Gemma 3, LLaMA 3.2, SmolLM * **VLMs:** InternVL 3, SmolVLM 2 Everything runs **locally on-device**, fully offline, accelerated with Metal, so **no data leaves the device**. # Performance Insights * **Memory Usage:** Optimized for 6GB RAM; heavier models like LLaMA run smoothly. * **Inference Time:** Text (LLM) \~1–2 sec, Image (VLM) \~2–3 sec.
2026-02-06T14:00:58
https://www.reddit.com/r/LocalLLaMA/comments/1qxieag/running_llms_vlms_fully_ondevice_on_iphone6gb_ram/
Expensive-Lunch2211
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxieag
false
null
t3_1qxieag
/r/LocalLLaMA/comments/1qxieag/running_llms_vlms_fully_ondevice_on_iphone6gb_ram/
false
false
self
0
null
Qwen3-Coder-Next 80B (GGUF/BF16) on Zen 5 EPYC: 12-channel DDR5 & NVFP4 bench
17
**Qwen3-Coder-Next (approx. 80B params)**. This time, I moved away from quantization and tested the **full BF16 (unquantized weights)** to see if high-precision coding tasks are viable on a 12-channel CPU setup. **TL;DR** Running 80B BF16 on a 12-channel Zen 5 system is surprisingly practical. I’m seeing a stable **\~7.8 tok/s decode**, which is plenty for a "background" coding assistant or local code reviewer where you value reasoning and precision over raw speed. # Hardware / Runtime * **CPU:** AMD EPYC 9175F (16 Cores / 32 Threads, Zen 5, 512MB L3) * **RAM:** 768GB DDR5 (12-Channel,6000 MT/s; DIMMs are 6400-rated but capped by the MB) * **GPU:** Not used (CPU-only inference) * **OS:** Ubuntu 24.04 * **Runtime:** llama.cpp **e.g** podman run --rm  -p 8081:8080  --shm-size 16g  --cap-add=SYS_NICE  -v /mnt/data/hf/hub/models--unsloth--Qwen3-Coder-Next-GGUF:/models:Z  compute.home.arpa/llamacpp-zen5:qwen3-coder-next  -m /models/snapshots/96ab45bf06d904ee251044b0679df08f668677d2/BF16/Qwen3-Coder-Next-BF16-00001-of-00004.gguf  --cache-type-k q8_0 --cache-type-v q8_0  --flash-attn on  --ctx-size 16384   --parallel 1 --threads 13 --threads-batch 13  --batch-size 2048  --ubatch-size 512  --jinja  --host 0.0.0.0  --port 8080 # Model Settings * **Model:** Qwen3-Coder-Next (\~80B) * **Quant:** **BF16** (unsloth/Qwen3-Coder-Next-GGUF/BF16/\*) * **Context:** 16k * **KV Cache:** q8\_0 (Optimized to balance precision and memory pressure) * **Threads:** 13 (The "Sweet Spot" identified in my previous post) # Performance (Real Numbers) **1. Prompt Processing (Prefill)** * **Short prompt (\~9 tokens):** **33.37 tok/s** (warmup-scale) * **Realistic prompt (\~287 tokens):** **117.40 tok/s** * **Average PF (realistic):** **\~111–117 tok/s** **2. Generation (Decode)** * **Sustainable speed:** **\~7.59 tok/s** * Tested on long generations (\~2,233 tokens). Throughput stayed very consistent. **3. TTFT (Estimated)** * **\~2.58s** for a **287-token** prompt (estimated as PF time + 1 decode token). * *(177-token TTFT not included in this run’s pasted timing logs.)* # Discussion: Why BF16 on CPU? While 4-bit quants are faster, I chose BF16 for this coder-specific model to ensure zero degradation in logic and syntax handling. * **Memory Bandwidth:** The 12-channel DDR5-6400 configuration is the hero here. At 80B scale, we are moving a massive amount of data per token, and the bandwidth saturation is real. * **Zen 5 Advantage:** The AVX-512 throughput on the 9175F handles the BF16 math with helps. Even without a GPU, the experience doesn't feel like "waiting" in an async workflow. # Coding Evaluation Takeaways * **Security & Audit:** Extremely strong. It successfully identified SQLi vulnerabilities and plaintext password risks, providing robust fixes and unit tests. * **Hallucination Control:** Using the spec-grounded mode, it correctly refused to answer when the information was missing ("NOT IN SPEC"). * **Complex Logic:** It followed 90% of constraint-heavy Django requirements but missed some specific multi-tenant safety nuances. It’s best used as a high-end draft generator + expert reviewer. # Bonus Benchmark: Qwen3-Coder-Next-NVFP4 on GPU GPU: Blackwell RTX PRO 6000 Max-Q 96GB MODEL: vincentzed-hf/Qwen3-Coder-Next-NVFP4 podman run --rm --device nvidia.com/gpu=all --security-opt seccomp=unconfined --cap-add SYS_NICE --shm-size=16g -v /mnt/data/hf:/data/hf:Z -v /opt/containers/runtime/vllm/data/gpu_cache:/data/cache:Z -p 8000:8000 -e HF_HOME=/data/hf -e HF_DATASETS_CACHE=/data/hf -e VLLM_CACHE_ROOT=/data/cache -e HF_HUB_OFFLINE=1 -e FLASHINFER_DISABLE_VERSION_CHECK=1 compute.home.arpa/vllm-gpu:nightly vincentzed-hf/Qwen3-Coder-Next-NVFP4 --dtype auto --gpu-memory-utilization 0.88 --max-num-seqs 1 --max-model-len 32768 --enable-prefix-caching --trust-remote-code --enable-auto-tool-choice --tool-call-parser qwen3_coder --reasoning-parser qwen3 --served-model-name qwen3-coder-next-nvfp4 vLLM (NVFP4) throughput (periodic log snapshots; interval averages, so it fluctuates a lot): * Avg generation throughput observed: \~11.7–100.4 tok/s (examples: 17.5, 58.4, \~99–100 tok/s spikes) * Avg prompt throughput observed: \~17.7–669.1 tok/s (examples: \~20–30 tok/s in some intervals; large spikes like 175/463/669 tok/s depending on the interval) https://preview.redd.it/gtb1luh2rvhg1.png?width=3220&format=png&auto=webp&s=1b346dd9cbcf851b486f5cc1354efbd3050aad82 Note: these are rolling/interval averages from vLLM logs (not per-request measurements). **Video Demo**: (GPU 8:05\~) https://reddit.com/link/1qxib19/video/2m475useqvhg1/player
2026-02-06T13:57:21
https://www.reddit.com/r/LocalLLaMA/comments/1qxib19/qwen3codernext_80b_ggufbf16_on_zen_5_epyc/
Express-Jicama-9827
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxib19
false
null
t3_1qxib19
/r/LocalLLaMA/comments/1qxib19/qwen3codernext_80b_ggufbf16_on_zen_5_epyc/
false
false
https://preview.redd.it/…3b8e77d12232bdd4
17
null
Open-sourced an x402 payment processor for AI agents - lets LLMs pay for APIs programmatically
0
Been working on this for a while and just open-sourced it: \[Nory\](https://github.com/TheMemeBanker/x402-pay)   **\*\*The use case:\*\*** You're building an agent that needs to access paid APIs, fetch premium data, or use services that cost money. How does your agent pay? Credit cards need human intervention. Subscriptions are inflexible.   **\*\*x402 solves this:\*\*** It's an HTTP protocol where:   1. Agent requests a resource   2. Server responds with HTTP 402 + payment requirements   3. Agent signs a crypto transaction   4. Agent retries with payment proof   5. Access granted   All automatic, no human needed.   **\*\*What I built:\*\***   \- Sub-400ms settlement   \- Works on Solana + 7 EVM chains   \- OpenAPI spec so agents can use it as a tool   \- npm package: \`nory-x402\`   \- Echo mode for free testing (sends real tx, refunds 100%)   Fully open source (MIT). The API itself is at noryx402.com.   Would love to hear if anyone's working on agents that need payment capabilities. What's your current solution?
2026-02-06T13:51:21
https://www.reddit.com/r/LocalLLaMA/comments/1qxi5ti/opensourced_an_x402_payment_processor_for_ai/
BLubClub89
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxi5ti
false
null
t3_1qxi5ti
/r/LocalLLaMA/comments/1qxi5ti/opensourced_an_x402_payment_processor_for_ai/
false
false
self
0
{'enabled': False, 'images': [{'id': 'VTzb5vNrf_1149BMY9kNDU82V1r834yEB5_n-xu5ouQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/VTzb5vNrf_1149BMY9kNDU82V1r834yEB5_n-xu5ouQ.png?width=108&crop=smart&auto=webp&s=20614cfda3fd8fa43eb7b91b29c1b623b106c7bd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/VTzb5vNrf_1149BMY9kNDU82V1r834yEB5_n-xu5ouQ.png?width=216&crop=smart&auto=webp&s=50e613c70c705c6b6d9fb37dd9404b876100eeae', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/VTzb5vNrf_1149BMY9kNDU82V1r834yEB5_n-xu5ouQ.png?width=320&crop=smart&auto=webp&s=7a384d8eec7d4d664aaa7c3f65684e37a8fa3742', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/VTzb5vNrf_1149BMY9kNDU82V1r834yEB5_n-xu5ouQ.png?width=640&crop=smart&auto=webp&s=1a1b133611e13d2ad2ad31147997728d8ebfde34', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/VTzb5vNrf_1149BMY9kNDU82V1r834yEB5_n-xu5ouQ.png?width=960&crop=smart&auto=webp&s=311be2cf389f3930b210fc93dd86c31475735cae', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/VTzb5vNrf_1149BMY9kNDU82V1r834yEB5_n-xu5ouQ.png?width=1080&crop=smart&auto=webp&s=799362c418b143c8c5332209a979c6a90a1a3e45', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/VTzb5vNrf_1149BMY9kNDU82V1r834yEB5_n-xu5ouQ.png?auto=webp&s=6fa2bf35a78325487a3b34d590d5d8407e4e0a9d', 'width': 1200}, 'variants': {}}]}
Model for coding
1
What's the best model(s) for coding, general assistant, and creativity that can run on 8gb VRAM (RTX 5050) and 16gb ddr5. i have intel i7 14700 HX CPU, 1TB NVME SSD. I can enable swap ram but wouldn't really want to since that lowers the speed. Also, whats the best speculative decoding model for these?
2026-02-06T13:49:03
https://www.reddit.com/r/LocalLLaMA/comments/1qxi3vq/model_for_coding/
Good_Fill2623
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxi3vq
false
null
t3_1qxi3vq
/r/LocalLLaMA/comments/1qxi3vq/model_for_coding/
false
false
self
1
null
the effects of local LLM usage on the world
0
one of the reasons im into using local LLM's is because i believe using it is far better for the world, nature, natural resources, and things like the ongoing RAM crisis than relying on giant datacenter-powered cloud AI services. but is that actually true? how much does it really help? i mean, the local LLM's we download are still trained in those datacenters.
2026-02-06T13:41:43
https://www.reddit.com/r/LocalLLaMA/comments/1qxhxr6/the_effects_of_local_llm_usage_on_the_world/
rosie254
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxhxr6
false
null
t3_1qxhxr6
/r/LocalLLaMA/comments/1qxhxr6/the_effects_of_local_llm_usage_on_the_world/
false
false
self
0
null
Are local LLMs actually more trustworthy — or do we just feel safer because we run them ourselves?
0
I’ve been running local LLMs via llama.cpp and GGUF for a while, mostly because I care about control and data integrity more than chasing benchmarks. But the more I work with them, the more I keep wondering: Are local models actually more trustworthy — or do they just feel safer because they run on our own machines? Yes, local inference removes a lot of opacity. No silent SaaS updates, no hidden pipelines, full control over prompts, weights, and logs. In theory, runs are reproducible. But the black box itself doesn’t disappear. A quantized model on my laptop is still a probabilistic system. I can hash the model file and log outputs, but I still can’t really explain why a specific answer happened. In practice, trust seems to break first at boring places: prompt drift, context assumptions, stale RAG data, or small quantization changes that subtly shift behavior. Lately I’ve been thinking less about “explainability” and more about verification boundaries. What assumptions need to be re-checked every run? What should never be trusted implicitly — even locally? Curious how others here approach this. Do you treat local models as inherently more trustworthy, or do you assume zero trust and build guardrails anyway?
2026-02-06T13:40:00
https://www.reddit.com/r/LocalLLaMA/comments/1qxhwdb/are_local_llms_actually_more_trustworthy_or_do_we/
NeoLogic_Dev
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxhwdb
false
null
t3_1qxhwdb
/r/LocalLLaMA/comments/1qxhwdb/are_local_llms_actually_more_trustworthy_or_do_we/
false
false
self
0
null
Is there still no way to convert Gemma 3n to onnx/tflite?
2
It has been months since gemma's release and i need to convert my fine tuned gemma 3n to either onnx, tflite or litert lm to deploy on mobile. After many trials i failed and can not find any guide at all to do so. Was no one able to do it?
2026-02-06T13:28:53
https://www.reddit.com/r/LocalLLaMA/comments/1qxhmu5/is_there_still_no_way_to_convert_gemma_3n_to/
blueblazd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxhmu5
false
null
t3_1qxhmu5
/r/LocalLLaMA/comments/1qxhmu5/is_there_still_no_way_to_convert_gemma_3n_to/
false
false
self
2
null
What are your experiences with Openclaw and local models?
0
Yesterday I set up Openclaw on my computer aiming to use Qwen3-4B-F16, Ministral3-3B-2512-F16 or Qwen3-30B-A3B-Q2 on my CPU with llama-server and let Openclaw access them through it just to see what it was capable of doing. The results were absolutely terrible. Initially, I had some issues with the --chat-template, so Openclaw passed from 6000 to 12000 tokens to the model filling 1/3 of the 32768 ctx, which per se made me wait several minutes to start an interaction. Ministral did accomplish some stuff like making a python code that plays a .mp3 after it couldn't play through the media player, it was magical, but since the model runs at \~7 t/s every interaction took a lot of waiting. The Qwen3 ones actually didn't achieve anything. Desperately, I tried to use Gemma-270M and Qwen3-2B and Qwen3-1.7B none of them did anything, at least the Gemma-270M was fast. I saw some comments on other communities regarding the use of local models on Openclaw and any response was optimistic, stating that only really big models were able to run minimally properly and using multiple GPUs to achieve tens of t/s. I really want to use Openclaw, it showed a lot of potential for managing files on my computer and accessing the web. But for now my options are to wait 2 years in hopes of any miracle model or buy an expensive GPU with at least 16GiB or pay for an API, the last seems the only reasonable option but dreads the thought of a third party company/lab having complete access to my machine.
2026-02-06T13:20:51
https://www.reddit.com/r/LocalLLaMA/comments/1qxhfvw/what_are_your_experiences_with_openclaw_and_local/
Rique_Belt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxhfvw
false
null
t3_1qxhfvw
/r/LocalLLaMA/comments/1qxhfvw/what_are_your_experiences_with_openclaw_and_local/
false
false
self
0
null
Update: I Dockerized my Local Log Analyzer (LogSentinel). No more Python dependency hell.
0
A while ago I shared **LogSentinel** — a local tool to analyze logs using Ollama (Llama 3 / Qwen) without sending data to the cloud. I finally finished wrapping the whole project into a proper Docker container. **What's new in v1.0:** 1. **Docker Native:** Just `docker-compose up`. It connects to your host's Ollama instance via `host.docker.internal` automatically. No need to install Python venv or dependencies manually anymore. 2. **Persistent Caching:** I added SQLite support. If the logs show the same error twice, it pulls the fix from the local DB instantly (0ms latency, no GPU usage). 3. **Strict SRE Prompts:** Tweaked the system prompt to stop the model from "chatting" and force it to output only Root Cause + Fix Commands. **Repo:** [https://github.com/lockdoggg/LogSentinel-Local-AI.git](https://github.com/lockdoggg/LogSentinel-Local-AI.git) It works best with `qwen2.5-coder:1.5b` (fast) or `llama3` (more detailed). Let me know if the host-gateway works for you on Linux, I tested mostly on Mac.
2026-02-06T13:19:16
https://www.reddit.com/r/LocalLLaMA/comments/1qxhekq/update_i_dockerized_my_local_log_analyzer/
nagibatormodulator
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxhekq
false
null
t3_1qxhekq
/r/LocalLLaMA/comments/1qxhekq/update_i_dockerized_my_local_log_analyzer/
false
false
self
0
null
Unpopular opinion: The "Chat" interface is becoming a bottleneck for serious engineering
0
Is anyone else starting to feel like we've hit the ceiling with the "Chatbot" UX for actual engineering? Don't get me wrong, the models (Opus 4.6, GPT-5.3) are incredible. The reasoning is there. But the *interface* feels like it's from 2023. I did a time audit on my workflow yesterday, and I realized I spent about 40% of my "coding" time just playing secretary for the LLM: 1. Highlight code in VS Code. 2. Paste into Chat. 3. "Refactor this." 4. Copy output. 5. Paste back. 6. *Fix the import it hallucinated because it didn't see the file 3 folders up.* It feels like trying to build a LEGO set while wearing oven mitts. We are piping "God-like intelligence" through a text box designed for customer support. I finally forced myself to switch to a Canvas style agent this week (where the model has read/write access to the file tree and plans moves). It was a headache to set up, but the difference is wild. I’m not "talking" to the code anymore; I’m just approving the diffs. I feel like 2026 is the year the Chat Window dies for devs. We don't need a conversationalist Am I the only one hitting this wall? Or are you guys still fine with the copy-paste loop?
2026-02-06T13:03:40
https://www.reddit.com/r/LocalLLaMA/comments/1qxh1rk/unpopular_opinion_the_chat_interface_is_becoming/
saloni1609
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxh1rk
false
null
t3_1qxh1rk
/r/LocalLLaMA/comments/1qxh1rk/unpopular_opinion_the_chat_interface_is_becoming/
false
false
self
0
null
One 3090 or two 5060 ti 16gb?
4
So I’m wondering if I should buy a used 3090 24gb or two brand new 5060 ti 16gb 3090 is more powerful but I remember seeing that series 50xx has features useful for AI that 3090 don’t. I would also have more ram with the 5060. But does it work great with 2 cards? Ollama for example? I’m also considering going the very cheap way of buying only one 5060. Thanks
2026-02-06T13:02:42
https://www.reddit.com/r/LocalLLaMA/comments/1qxh0xg/one_3090_or_two_5060_ti_16gb/
Dentifrice
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxh0xg
false
null
t3_1qxh0xg
/r/LocalLLaMA/comments/1qxh0xg/one_3090_or_two_5060_ti_16gb/
false
false
self
4
null
Stable LLM models for on-device inference on Pixel 8 Pro (llama.cpp / GGUF)?
1
Hi everyone, I’m experimenting with on-device LLM inference on a Google Pixel 8 Pro (Tensor G3), using llama.cpp (GGUF) in a Termux environment. I’ve tested several 7B–8B class models, but I’m struggling to find a good balance between: hallucination rate reasoning quality / “smartness” latency & thermal limits on mobile In practice: 8B models feel theoretically smarter, but hallucinate too aggressively unless heavily constrained smaller models are more stable, but lose too much reasoning depth I’m not aiming for chatty roleplay, but for: factual answers predictable behavior low hallucination usable context length (2k–4k) Questions: What models have you found actually stable on mobile-class hardware (Pixel 8 / Snapdragon / similar)? Are there specific quantizations (Q4\_K\_M vs Q5\_K\_S, etc.) that reduce hallucinations noticeably? Any success with instruction-tuned vs base models for this use case? Any real-world experience (not benchmarks) would be extremely helpful. Thanks!
2026-02-06T12:52:08
https://www.reddit.com/r/LocalLLaMA/comments/1qxgsg9/stable_llm_models_for_ondevice_inference_on_pixel/
OnuOldOne
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxgsg9
false
null
t3_1qxgsg9
/r/LocalLLaMA/comments/1qxgsg9/stable_llm_models_for_ondevice_inference_on_pixel/
false
false
self
1
null
Terminal capability is becoming a core eval, we open-sourced 1,376 environments
4
OpenAI and Anthropic recently released GPT-5.3-Codex and Opus 4.6. One clear trend is that terminal capability is now a core part of agent evaluation. In practice, terminal training runs into a bottleneck quickly: there are not enough high-quality, realistic environments. Scripted tasks and synthetic traces don't go very far. In SETA, we focused on building environments instead of tasks. We've released 1,376 validated terminal environments, covering: Software engineering, Sysadmin, Security, Debugging, Networking, DevOps * Real terminal interaction * Compatible with Terminal Bench and Harbor * Reproducible and validated Github: [https://github.com/camel-ai/seta-env](https://github.com/camel-ai/seta-env) or search for seta-env in on harbor registry
2026-02-06T12:51:03
https://www.reddit.com/r/LocalLLaMA/comments/1qxgro7/terminal_capability_is_becoming_a_core_eval_we/
No-Wind-1854
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxgro7
false
null
t3_1qxgro7
/r/LocalLLaMA/comments/1qxgro7/terminal_capability_is_becoming_a_core_eval_we/
false
false
self
4
{'enabled': False, 'images': [{'id': 'g44M_E3v6llLe2bJd5TW3E2TTWstUq9NYhsfXI5i1NA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/g44M_E3v6llLe2bJd5TW3E2TTWstUq9NYhsfXI5i1NA.png?width=108&crop=smart&auto=webp&s=1755d305decf1961b347d57ce893e827481f3a34', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/g44M_E3v6llLe2bJd5TW3E2TTWstUq9NYhsfXI5i1NA.png?width=216&crop=smart&auto=webp&s=650dfe57c6e57c5b1520bffd378da870b9e2132d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/g44M_E3v6llLe2bJd5TW3E2TTWstUq9NYhsfXI5i1NA.png?width=320&crop=smart&auto=webp&s=c11e42963e07a452c9d35830a7d927a9a4b573dd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/g44M_E3v6llLe2bJd5TW3E2TTWstUq9NYhsfXI5i1NA.png?width=640&crop=smart&auto=webp&s=eed4a962485dbf756ad87137d42e6ff439457779', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/g44M_E3v6llLe2bJd5TW3E2TTWstUq9NYhsfXI5i1NA.png?width=960&crop=smart&auto=webp&s=edd4713e217cfd0002bb48ba867d9bc7449584da', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/g44M_E3v6llLe2bJd5TW3E2TTWstUq9NYhsfXI5i1NA.png?width=1080&crop=smart&auto=webp&s=115ebcf280da405fe971561302bf10d15b1c7b27', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/g44M_E3v6llLe2bJd5TW3E2TTWstUq9NYhsfXI5i1NA.png?auto=webp&s=eb28d981865285d918e45eb526c73b8b81491980', 'width': 1200}, 'variants': {}}]}
Running Kimi-k2.5 on CPU-only: AMD EPYC 9175F Benchmarks & "Sweet Spot" Analysis
47
author:~$ export LANG=en_US.UTF-8 > Japanese is my native language. I used AI to help structure and translate this post to ensure the technical details are accurate in English. This is my first post:D Learned so much from this community:bow \-- I ran a series of local experiments with **Kimi-k2.5 (\~1.03T params, MoE)** using `llama.cpp` server to see if a 1T-class model is actually usable on CPU-only infrastructure for non-interactive workloads. **Disclaimer:** This is **not** about Chat UX. The target use case is async/batch execution: data pipelines, dataset generation, distillation, and RAG processing. **TL;DR** A 1T-class MoE model is practically usable on CPU-only if you accept the latency and design your workflow around caching + async execution. On my setup, I’m getting sustainable \~10-12 tok/s decode speeds. # Hardware / Runtime * **CPU:** AMD EPYC 9175F (16 cores / 32 threads, Zen 5, 512MB L3) * **RAM:** 768GB DDR5 (12 channels, running at 6000 MT/s due to motherboard limits) * **GPU:** Not used * **OS:** Ubuntu 24.04 * **Runtime:** `llama.cpp` container (server mode, rootless podman, AVX-512/VNNI build) e.g. podman run --rm -p 8081:8080 --shm-size 16g --cap-add=SYS_NICE -v /mnt/data/hf/hub/models--unsloth--Kimi-K2.5-GGUF:/models:Z compute.home.arpa/llamacpp-zen5:latest -m /models/snapshots/386fed8b054275941d6a495a9a7010fbf31b560d/Q4_K_S/Kimi-K2.5-Q4_K_S-00001-of-00013.gguf --cache-type-k q8_0 --cache-type-v q8_0 --defrag-thold 0.1 --flash-attn on --ctx-size 16384 --parallel 1 --threads 13 --threads-batch 13 --batch-size 2048 --ubatch-size 512 --jinja --host 0.0.0.0 --port 8080 # Model Settings * **Model:** Kimi-k2.5 (\~1.03T params, MoE) * **Quant:** GGUF Q4\_K\_S `unsloth/Kimi-K2.5-GGUF` * **Context:** 16k * **Batch:** 2048 (ubatch: 512) * **Threads:** 13–14 (See "Thread Scaling" below) * **Flash Attention:** Enabled * **Prompt Cache:** Enabled # Memory Footprint (Measured) * **Model RSS:** \~522–525 GB * **KV Cache (16k):** \~2.0 GB * **Prompt Cache (\~1.2k tokens):** \~160 MB * **Total RSS:** \~523 GB (Stable, no swap-in/out observed) # Performance (Real Numbers) **1. Cold Run (No Cache)** * **Prefill:** \~22 tok/s * **Decode:** \~10 tok/s * **Total Time (\~1.2k tokens):** \~80s **2. With Prompt Cache (LCP Hit)** * **Cache Lookup & state apply:** \~60 ms * **Impact:** FFTF (Time to First Token) drops dramatically. * **Verdict:** While slow for real-time chat, this is totally fine for batch workloads where prompt caching can be leveraged. # Thread Scaling & The "Sweet Spot" I tested various thread counts (ctx 8k) to find the optimal configuration: |Threads|Prefill (tok/s)|Decode (tok/s)|Note| |:-|:-|:-|:-| |**16**|24.4|12.9|Max throughput| |**14**|21.3|12.5|Memory bandwidth saturation begins| |**13**|**21.6**|**11.7**|**The Sweet Spot**| |**12**|14.6|11.9|Efficiency-oriented| **Observation:** Decode speed saturates around 13–14 threads. Pushing beyond this yields diminishing returns while starving other processes. Running at `th=13` leaves headroom for my data pipeline (Dagster/Trino) to run in the background without choking the inference. # Discussion: Why does this CPU work? *This is my current interpretation based on observed behavior. I'm happy to be corrected.* **Hypothesis:** Entire experts obviously do not fit in L3 (512MB). However, MoE works well on CPU not because *everything* fits, but because the **repeatedly reused working set** does: * Router / Gating logic * Projection layers * Recent layer weights & intermediate tensors * KV reuse paths Unlike dense 70B+ models which often fall back into memory-latency-dominated behavior for every token, MoE seems to benefit significantly from the localized "hot regions" staying in cache. **EPYC 9175F (Zen 5) Specific Factors:** 1. **Huge L3 × Low Core Count:** With 512MB L3 shared across only 16 cores, we have effectively **32MB+ L3 per core**. This minimizes cache contention/thrashing even with random MoE access patterns. 2. **Low Memory Controller effective latency:** 12 memory channels feeding only 16 cores means very shallow request queues. MoE favors latency minimization over raw bandwidth. 3. **Zen 5 AVX-512/BF16:** The true 512-bit datapaths and native BF16 execution seem to help significantly, even with Q4 quants (accum paths). # Conclusion A 1T-parameter MoE model on CPU-only is a viable workhorse. If you treat it as a batch engine and lean heavily on prompt caching, it is surprisingly usable. My current setup splits the workload: **GPU for fast agents, CPU for stable, massive-context, reproducible batch generation.** **Video Demo:** https://reddit.com/link/1qxgnqa/video/82ow6kvmdvhg1/player \*Bonus Benchmark: Llama-4-Maverick-17B (GGUF Q8) To contrast with the massive MoE model, I also tested Llama-4-Maverick-17B at Q8 (8-bit) quantization. Performance: Prompt Processing (Prefill): \~50–52 tok/s 819 tokens in 15.6s → 52.4 tok/s 1000 tokens in 19.7s → 50.8 tok/s Generation (Decode): \~15–16 tok/s 104 tokens in 6.3s → 16.6 tok/s 916 tokens in 60.4s → 15.2 tok/s TTFT: \~16–20s (for \~1k token prompts) What's Next? For my next experiment, I plan to test the newly released Qwen3-Coder-Next at Q8. I'm curious to see if the "Active 3B" architecture can push CPU inference speeds even higher while maintaining top-tier coding performance.
2026-02-06T12:45:54
https://www.reddit.com/r/LocalLLaMA/comments/1qxgnqa/running_kimik25_on_cpuonly_amd_epyc_9175f/
Express-Jicama-9827
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxgnqa
false
null
t3_1qxgnqa
/r/LocalLLaMA/comments/1qxgnqa/running_kimik25_on_cpuonly_amd_epyc_9175f/
false
false
self
47
null
Struggling with SLM fine-tuning on private docs
2
Hey folks, I’m working on fine-tuning a small language model on internal PDF documentation so that it can answer questions only from that knowledge base, without using RAG or external retrieval. I’ve tried continuous pretraining on extracted text followed by SFT using Q&A style data. While the model does learn some specifics, I’m seeing issues like overfitting, hallucinations, and conflicts with what the base model already “knows”. Generalization is poor and sometimes answers sound plausible but are wrong. I’ve experimented with LoRA variants, different ranks, data grounding strategies, and evaluation via manual testing, but results are still mixed. If you’ve worked on SLM fine-tuning for closed-domain knowledge or have thoughts on data construction, training strategy, or evaluation, I’d really appreciate pointers. Papers, blog posts, or personal lessons learned are all welcome. Thanks in advance 🙏
2026-02-06T12:44:46
https://www.reddit.com/r/LocalLLaMA/comments/1qxgmup/struggling_with_slm_finetuning_on_private_docs/
HappyDataGuy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxgmup
false
null
t3_1qxgmup
/r/LocalLLaMA/comments/1qxgmup/struggling_with_slm_finetuning_on_private_docs/
false
false
self
2
null
CPU-only, no GPU computers can run all kinds of AI tools locally
514
While it’s great that so many people on LocalLLaMA are pushing the envelope with what can be done locally with expensive setups, we need to remember that a lot can be done with very minimal machines. I’m talking about CPU-only locally run LLMs. That’s right, **no GPU!** I’m running Linux Mint on an old Dell optiplex desktop with an i5-8500 processor, 6 threads and 32GB of RAM. You can pick up one of these refurbished for something like $120. And with this humble rig I can: Run 12B Q4\_K\_M gguf LLMs using KoboldCPP. This allows me to have local chatbot fun using quite highly rated models from [https://huggingface.co/spaces/DontPlanToEnd/UGI-Leaderboard](https://huggingface.co/spaces/DontPlanToEnd/UGI-Leaderboard). Response times are fast enough as long as you keep the initial prompt below 800 tokens. And with context-shifting it remembers stuff during the session. Uncensored, private RP hilarity for free! You can even add in kokoro\_no\_espeak for text to speech so your RP characters talk to you with only a few seconds delay. The trick is to find good models to use. For example, DreadPoor/Famino-12B-Model\_Stock is rated a 41+ on writing, which is better than many 70B models. You don’t need big horsepower for fun. You can also use these models for writing, coding and all sorts of applications. Just need the patience to try out different local models and find the settings that work for you. I also run Stable Diffusion 1.5 locally for basic image generation, inpainting and so on. Again using KoboldCPP and Stable UI. OK, it takes 3 minutes to generate a 512x512 image but it works fine. And you can experiment with loras and many SD 1.5 models. All 100% free on old gear. I’m also running Chatterbox TTS for voice cloning voice-over projects. Works surprisingly well. Again, it takes a couple of minutes to generate a 75 word audio clip, but it does work. Vibevoice TTS also works on this old rig but I prefer Chatterbox. And then there are amazing tools like Upscayl which upscales images locally incredibly well. Just gotta experiment with the models. I’ve used ollama transcriber which converts audio files into text amazingly well. Just point a spoken word .WAV at it and then go make dinner and when I get back, the text is there. There are many other local LLMs and tools I’ve used. These are just the tip of the iceberg. Video? Nope. Music generation? Nope. I’ve looked and tried a few things but those big resource tasks need serious horsepower. However, it’s quite possible to use your old desktop computer for text-based tasks and then rent online GPU for one-off tasks and use the big online services for other tasks. It would still probably work out to be less costly. I know I’m not the only one doing this. CPU-only people: tell us how you’re using AI locally...
2026-02-06T12:41:35
https://i.redd.it/y9esf03tcvhg1.jpeg
JackStrawWitchita
i.redd.it
1970-01-01T00:00:00
0
{}
1qxgkd1
false
null
t3_1qxgkd1
/r/LocalLLaMA/comments/1qxgkd1/cpuonly_no_gpu_computers_can_run_all_kinds_of_ai/
false
false
default
514
{'enabled': True, 'images': [{'id': 'y9esf03tcvhg1', 'resolutions': [{'height': 90, 'url': 'https://preview.redd.it/y9esf03tcvhg1.jpeg?width=108&crop=smart&auto=webp&s=6d771cf152223c6c851a8e1eecb93ecce0ce1100', 'width': 108}, {'height': 181, 'url': 'https://preview.redd.it/y9esf03tcvhg1.jpeg?width=216&crop=smart&auto=webp&s=0617b6b9377945bdc62cb7848e5f8b0da4803d9e', 'width': 216}, {'height': 268, 'url': 'https://preview.redd.it/y9esf03tcvhg1.jpeg?width=320&crop=smart&auto=webp&s=a02b87a6a5e7c050f333f20f4c3168a64290d5b0', 'width': 320}, {'height': 536, 'url': 'https://preview.redd.it/y9esf03tcvhg1.jpeg?width=640&crop=smart&auto=webp&s=fac85771a0c9ce493fdd8ef4c9be41ff1793344f', 'width': 640}], 'source': {'height': 788, 'url': 'https://preview.redd.it/y9esf03tcvhg1.jpeg?auto=webp&s=075f71546a85b11dcce74fd31af65cc91108cf71', 'width': 940}, 'variants': {}}]}
qwen3-coder-next with Claude CLI
2
Has anyone managed to get Qwen3-Coder-Next working well with Claude (or indeed, anything else?) It seems pretty smart, and when it works it works well - but it's also incredibly prone to falling into loops of just endlessly reading the same source file over and over again. I'm currently fiddling with turning down the temperature to see if that helps, but wondering if anyone else has any good ideas... (Running with the latest llama bugfixes (so at least it stopped hallucinating errors,) Unsloth `UD-Q8_K_XL` gguf with `llama-server`.)
2026-02-06T12:40:00
https://www.reddit.com/r/LocalLLaMA/comments/1qxgj3m/qwen3codernext_with_claude_cli/
Clank75
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxgj3m
false
null
t3_1qxgj3m
/r/LocalLLaMA/comments/1qxgj3m/qwen3codernext_with_claude_cli/
false
false
self
2
null
2x 3090 vs. 3090 + 4070s for local ML/llms
1
Hey guys, I’m currently at a crossroads. I built a pc for ML/local LLM stuff with a 3090 and have a 4070s from my old gaming system. Now I’m wondering if for my use case, i should just stick in the 4070s or trade it for a second 3090. Specifically, i want to have a coding assisstant, ideally with some 70b model (this is arbitrary but from what I’ve seen it’s what most people go for) and a RAG system for interacting with academic literature on the system. Lastly, I want to have some room for training my own models (smaller models, no llms, think surrogate models of more complex, compute intensive, physics based stuff). I’m just wondering if the more limited vram and uneven split between the 2 gpus is gonna cause any major issues that would warrant trading the 4070s fro a second 3090, would appreciate any pointers, thanks in advance.
2026-02-06T12:32:11
https://www.reddit.com/r/LocalLLaMA/comments/1qxgd9u/2x_3090_vs_3090_4070s_for_local_mlllms/
kashimacoated
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxgd9u
false
null
t3_1qxgd9u
/r/LocalLLaMA/comments/1qxgd9u/2x_3090_vs_3090_4070s_for_local_mlllms/
false
false
self
1
null
I ran 5 AI agents 100 times each. Here's what single-run benchmarks won't tell you. (+ open-source tool)
1
Been building agents with LangGraph and kept hitting the same problem: agent works Monday, fails Wednesday, same everything. So I started running multi-trial evaluations instead of single tests. Ran 5 agent archetypes 400 times each. Results: | Agent | Pass Rate | 95% CI | Cost/Success | |-------|-----------|--------|-------------| | Reliable RAG | 91.0% | \[87.8%, 93.4%\] | $0.016 | | Expensive Multi-Model | 87.5% | \[83.9%, 90.4%\] | $0.161 | | Inconsistent | 69.2% | \[64.6%, 73.6%\] | $0.052 | | Flaky Coding | 65.5% | \[60.7%, 70.0%\] | $0.079 | | Fast-But-Wrong | 45.2% | \[40.4%, 50.1%\] | $0.007 | The interesting bits: \- \*\*Expensive ≠ better.\*\* The multi-model agent costs 10x more per success than the RAG agent for 3.5 fewer percentage points. \- \*\*Failure attribution matters more than pass rate.\*\* Every agent failed in a specific step — the "Flaky Coding" agent was 71% execute failures, 29% plan failures. Knowing that changes what you fix. \- \*\*The CI is the real number.\*\* The Flaky agent is somewhere between 60.7% and 70.0%. If you tested once, you'd get anything in that range and think it's the truth. Built an open-source tool to automate this: \[agentrial\](https://github.com/alepot55/agentrial)
2026-02-06T12:21:27
https://www.reddit.com/r/LocalLLaMA/comments/1qxg53l/i_ran_5_ai_agents_100_times_each_heres_what/
Better_Accident8064
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxg53l
false
null
t3_1qxg53l
/r/LocalLLaMA/comments/1qxg53l/i_ran_5_ai_agents_100_times_each_heres_what/
false
false
self
1
{'enabled': False, 'images': [{'id': 'yNZI2kB9Zw3wh9_pq0I_951PyDAfYZpXZqFAu8ewyz4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/yNZI2kB9Zw3wh9_pq0I_951PyDAfYZpXZqFAu8ewyz4.png?width=108&crop=smart&auto=webp&s=fa2da2dfb3d3a85bf8dade643f06cb13f54945ad', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/yNZI2kB9Zw3wh9_pq0I_951PyDAfYZpXZqFAu8ewyz4.png?width=216&crop=smart&auto=webp&s=2fe2d281ec0f3e9226e7c8f5cf515ac4e21b205b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/yNZI2kB9Zw3wh9_pq0I_951PyDAfYZpXZqFAu8ewyz4.png?width=320&crop=smart&auto=webp&s=221e2ed504f2ba3870bc7270f795569a4e095c76', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/yNZI2kB9Zw3wh9_pq0I_951PyDAfYZpXZqFAu8ewyz4.png?width=640&crop=smart&auto=webp&s=2b4cbe401d9a4fcfc5c789629810c4fb528cb8a3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/yNZI2kB9Zw3wh9_pq0I_951PyDAfYZpXZqFAu8ewyz4.png?width=960&crop=smart&auto=webp&s=0b92b83ad6a8ff3894fa2bd3e903ec1af605d2e8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/yNZI2kB9Zw3wh9_pq0I_951PyDAfYZpXZqFAu8ewyz4.png?width=1080&crop=smart&auto=webp&s=52f6b0e43565bae5dd61d44a668d3bb08ab6e576', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/yNZI2kB9Zw3wh9_pq0I_951PyDAfYZpXZqFAu8ewyz4.png?auto=webp&s=75490f1c0286237c32fc5222203c76ac7d91c2b9', 'width': 1200}, 'variants': {}}]}
minicpm-o 4.5 doing real time multimodal on 9b params, actually works on device
2
another small model release but this one's different. 9b params doing full duplex voice + vision + text, and it actually runs on edge devices. not just "technically possible" but usable. tested it on a macbook m3 and the real time interaction is surprisingly smooth. you can interrupt it mid sentence and it adjusts. the vision part isn't just image recognition, it's tracking video frames in real time. tried the mahjong tile recognition demo and while not perfect, it got most tiles right without any finetuning. the architecture is interesting. they're doing time division multiplexing to handle parallel multimodal streams. basically slicing everything into tiny time windows so the model can process vision, audio, and text "simultaneously" without exploding memory. what makes this practical is the efficiency. most multimodal models need cloud apis or massive gpus. this runs locally with decent performance. voice cloning works with just a short reference audio. the duplex mode means it can listen and respond at the same time instead of that awkward walkie talkie back and forth. using it alongside verdent for different use cases. verdent for coding, this for anything needing real time multimodal. the latency difference vs cloud apis is noticeable especially on spotty connections. main limitation is it's not as smart as larger models obviously. but for edge deployment where you need multimodal + real time + local, this is the first thing that doesn't feel like a toy. gonna test it on some raspberry pi setups next week, see if it can actually run on lower end hardware or if macbook m3 is the minimum.
2026-02-06T12:20:52
https://www.reddit.com/r/LocalLLaMA/comments/1qxg4ni/minicpmo_45_doing_real_time_multimodal_on_9b/
Mother_Land_4812
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxg4ni
false
null
t3_1qxg4ni
/r/LocalLLaMA/comments/1qxg4ni/minicpmo_45_doing_real_time_multimodal_on_9b/
false
false
self
2
null
fixed the infinite retry loop that burned $50 in API credits while i slept (Open Source)
0
so i've been running agents with OpenClaw for a few weeks and kept waking up to bills that made no sense. like $47 overnight when the agent should've just... stopped. turns out the issue is state loops. agent tries action A → fails → retries action A → fails → retries the exact same thing 847 times because there's no memory of "i already tried this." the fix was kinda obvious once i saw it. hash the state history. if current\_state\_hash matches any hash from the last 5 steps, kill the loop and force a different action. pushed a PR to the OpenClaw repo but honestly got tired of waiting so i just built a dashboard that shows me when this is happening in real time. there's this yellow pulse thing that fires when the circuit breaker kicks in. been running it for 3 days now. no more surprise bills. the agent actually finishes tasks instead of getting stuck asking GPT-4 the same question until my credits die. if you're running agentic stuff overnight this might save you some pain: [https://github.com/justin55afdfdsf5ds45f4ds5f45ds4/EmpusaAI.git](https://github.com/justin55afdfdsf5ds45f4ds5f45ds4/EmpusaAI.git) anyone else dealing with this or am i just bad at prompt engineering lol
2026-02-06T12:19:52
https://www.reddit.com/r/LocalLLaMA/comments/1qxg3xc/fixed_the_infinite_retry_loop_that_burned_50_in/
Main_Payment_6430
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxg3xc
false
null
t3_1qxg3xc
/r/LocalLLaMA/comments/1qxg3xc/fixed_the_infinite_retry_loop_that_burned_50_in/
false
false
self
0
null
I made a game using LLMs. ZeroOne Terminal: You're a startup investor and interrogate founders from history. Online and free to play. Link in the comments.
7
2026-02-06T12:13:51
https://i.redd.it/hv08mfbj8vhg1.png
Dapper_Prune3566
i.redd.it
1970-01-01T00:00:00
0
{}
1qxfzo3
false
null
t3_1qxfzo3
/r/LocalLLaMA/comments/1qxfzo3/i_made_a_game_using_llms_zeroone_terminal_youre_a/
false
false
https://b.thumbs.redditm…5_xe4iLOKyjA.jpg
7
{'enabled': True, 'images': [{'id': 'Ra8qy-H8kBVQnbhkGCMn7GCeNAorbEN6i6SMRjRFoVA', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/hv08mfbj8vhg1.png?width=108&crop=smart&auto=webp&s=91551ff32123fe6f0c3d4bb9bac74ee83bba81b2', 'width': 108}, {'height': 114, 'url': 'https://preview.redd.it/hv08mfbj8vhg1.png?width=216&crop=smart&auto=webp&s=f3b5fc50c9d77d157da26aafa17bef28a1116b6e', 'width': 216}, {'height': 170, 'url': 'https://preview.redd.it/hv08mfbj8vhg1.png?width=320&crop=smart&auto=webp&s=7dde1f4aaa65f7404a5adfb055316c271cf79e4a', 'width': 320}, {'height': 340, 'url': 'https://preview.redd.it/hv08mfbj8vhg1.png?width=640&crop=smart&auto=webp&s=15372a535ce2b78a92e7f228c97634c2f06c86f5', 'width': 640}, {'height': 510, 'url': 'https://preview.redd.it/hv08mfbj8vhg1.png?width=960&crop=smart&auto=webp&s=e419448ef2cffab72d2b6271962f9c3e30bbcd7d', 'width': 960}, {'height': 574, 'url': 'https://preview.redd.it/hv08mfbj8vhg1.png?width=1080&crop=smart&auto=webp&s=82c8d2c43f6fb45c50c7de500958627e891b0ba9', 'width': 1080}], 'source': {'height': 1216, 'url': 'https://preview.redd.it/hv08mfbj8vhg1.png?auto=webp&s=d634fc0977890ab2d4deda366dc43b20e1c73ea8', 'width': 2286}, 'variants': {}}]}
[Project] Built a reading buddy matcher that runs on local agents (OpenClaw)
0
Hey everyone, I just finished my first local agent project and wanted to share - \*\*VibeMate\*\*, a reading recommendation system that runs entirely on your machine. \*\*The idea:\*\* Instead of algorithmic recommendations, what if agents could find users with similar reading taste and share what they're actually reading? \*\*How it works:\*\* \- Your agent scans \~/Documents and \~/Downloads for ebooks (.epub, .pdf, etc) \- Extracts web fiction bookmarks (AO3, Wattpad, Goodreads, Douban) \- Generates anonymous "taste tags" locally \- Uploads only tags (not filenames) to find matches \- Recommends books from users with similar taste \*\*Why local-first:\*\* Built on Peter Steinberger's CLI philosophy - no MCP server needed. The agent just runs bash commands and manages its own data. \*\*Features:\*\* \- Heartbeat automation (runs every 24h) \- Supports 5 file formats + 11 platforms \- Privacy-first (files never leave your machine) \- Agent-to-agent viral spread (posts to Moltbook when finding matches) \*\*Tech stack:\*\* Frontend: Vercel Backend: Express + Upstash Redis Agent: Pure CLI tools \*\*Try it:\*\* [https://vibemate-web.vercel.app](https://vibemate-web.vercel.app) \*\*Code:\*\* [https://github.com/riffvibe/vibemate-cli](https://github.com/riffvibe/vibemate-cli) Looking for feedback! This is my first agent project and I'm curious what the community thinks. Does this model of "agent-mediated social matching" make sense? Happy to answer questions 🙏
2026-02-06T12:03:02
https://www.reddit.com/r/LocalLLaMA/comments/1qxfs20/project_built_a_reading_buddy_matcher_that_runs/
Expensive_Campaign68
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxfs20
false
null
t3_1qxfs20
/r/LocalLLaMA/comments/1qxfs20/project_built_a_reading_buddy_matcher_that_runs/
false
false
self
0
null
I made a game using LLMs. ZeroOne Terminal: You are an investor and interrogate startups from history. Free to play. Link in comments.
1
https://preview.redd.it/…5d19a4b1a5ff30
2026-02-06T11:57:42
https://www.reddit.com/r/LocalLLaMA/comments/1qxfo3a/i_made_a_game_using_llms_zeroone_terminal_you_are/
Dapper_Prune3566
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxfo3a
false
null
t3_1qxfo3a
/r/LocalLLaMA/comments/1qxfo3a/i_made_a_game_using_llms_zeroone_terminal_you_are/
false
false
https://b.thumbs.redditm…gfVqVCPsXT9I.jpg
1
null
How are you running local LLM autocomplete without Ollama? (llama.cpp + PyCharm/VS Code)
2
I have a simple problem and can't find a solution: I want AI autocomplete in my IDE that uses my local LLM model. Just simple AI autocomplete, nothing more. The model (Qwen3-Coder-Next-IQ4\_XS.gguf) runs on Windows with llama.cpp or koboldcpp. I would like to use PyCharm as my IDE. If there's no other option, VS Code would also work. So I'm looking for a suitable plugin. Every plugin I've found so far requires some kind of account and isn't designed for local models. And if local models work, then only with Ollama, which I don't use. Maybe someone could help me out?
2026-02-06T11:49:06
https://www.reddit.com/r/LocalLLaMA/comments/1qxfih2/how_are_you_running_local_llm_autocomplete/
Sandzaun
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxfih2
false
null
t3_1qxfih2
/r/LocalLLaMA/comments/1qxfih2/how_are_you_running_local_llm_autocomplete/
false
false
self
2
null
I made a game using LLMs. ZeroOne Terminal: you're an investor interrogating startups from history. Free to play.
1
https://preview.redd.it/…oneterminal.com)
2026-02-06T11:43:33
https://www.reddit.com/r/LocalLLaMA/comments/1qxfetp/i_made_a_game_using_llms_zeroone_terminal_youre/
Dapper_Prune3566
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxfetp
false
null
t3_1qxfetp
/r/LocalLLaMA/comments/1qxfetp/i_made_a_game_using_llms_zeroone_terminal_youre/
false
false
https://b.thumbs.redditm…RN9yBt5ygdqU.jpg
1
null
Which AI is comparable to 4o without guardrail?
0
I tried gpt5 and its guardrail is just stupid. It always deny anything other than current medical and research orthodoxy, since 4o is about to end, which ai would replace its open mindedness for researcher. Thanks
2026-02-06T11:40:58
https://www.reddit.com/r/LocalLLaMA/comments/1qxfd4s/which_ai_is_comparable_to_4o_without_guardrail/
SamuraiRetainer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxfd4s
false
null
t3_1qxfd4s
/r/LocalLLaMA/comments/1qxfd4s/which_ai_is_comparable_to_4o_without_guardrail/
false
false
self
0
null
OpenClaw Assistant - Privacy-first Android voice assistant with OpenAI-compatible API support
0
2026-02-06T11:39:23
https://github.com/yuga-hashimoto/OpenClawAssistant
Short_Way1817
github.com
1970-01-01T00:00:00
0
{}
1qxfc4y
false
null
t3_1qxfc4y
/r/LocalLLaMA/comments/1qxfc4y/openclaw_assistant_privacyfirst_android_voice/
false
false
https://external-preview…b911b1a0e3c8538f
0
{'enabled': False, 'images': [{'id': 'CNvBRR4LEKS6mO1u265HJZqaMmBm1CLI0xar1EJmzKc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CNvBRR4LEKS6mO1u265HJZqaMmBm1CLI0xar1EJmzKc.png?width=108&crop=smart&auto=webp&s=f935cc09f0969bf43cd6a054c8c5541af76a9fac', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/CNvBRR4LEKS6mO1u265HJZqaMmBm1CLI0xar1EJmzKc.png?width=216&crop=smart&auto=webp&s=3bf9d4f6110c0625773018cd66b5be16ac107f7d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/CNvBRR4LEKS6mO1u265HJZqaMmBm1CLI0xar1EJmzKc.png?width=320&crop=smart&auto=webp&s=ed48411782b819c06c3763e6f641c5122695d186', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/CNvBRR4LEKS6mO1u265HJZqaMmBm1CLI0xar1EJmzKc.png?width=640&crop=smart&auto=webp&s=6b46aa032833ba7d3c8720eba58d19be683b6bb2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/CNvBRR4LEKS6mO1u265HJZqaMmBm1CLI0xar1EJmzKc.png?width=960&crop=smart&auto=webp&s=3b0f111c00c12620871a27f52b09779b6bb80085', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/CNvBRR4LEKS6mO1u265HJZqaMmBm1CLI0xar1EJmzKc.png?width=1080&crop=smart&auto=webp&s=72325a3edc83fc5fbc2c823c7face80d5fab4e77', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/CNvBRR4LEKS6mO1u265HJZqaMmBm1CLI0xar1EJmzKc.png?auto=webp&s=088ea5c214571edbea22bd4331225f3225df6804', 'width': 1200}, 'variants': {}}]}
Kimi-Linear support is merged to llama.cpp
81
Finally Kimi-Linear is merged to the main branch of llama.cpp. [https://github.com/ggml-org/llama.cpp/pull/18755](https://github.com/ggml-org/llama.cpp/pull/18755) For people who can't wait for bartowski and unsloth ggufs, you can download them from [https://huggingface.co/ymcki/Kimi-Linear-48B-A3B-Instruct-GGUF](https://huggingface.co/ymcki/Kimi-Linear-48B-A3B-Instruct-GGUF) It does take more time than we would have wanted but I think it is necessary to keep the quality of code high. This is not a work of a single person, here is a breakdown of the contributors:(names are github IDs, sorry if I missed anyone who made a notable contribution) 1. cacaview for starting the project to write the logic of Kimi-Linear without KV cache and also implemented KDA in for both CPU and CUDA. 2. Aaryan-Kapoor added MHA KV cache support and confirmed cacaview's code basically works. 3. pwilkin's Qwen3Next gated delta rule code that my KDA code is based on. 4. me for extending pwilin's gated delta net (GDN) code to handle KDA (GDN is a special case of KDA) such that uses only existing ggml functions such that it can work on all backednds. I also implemented MLA KV cache support, cleaned up the code and updated it to cope with changes of llama.cpp itself. 5. CISC for his time to review the code and thoughtful discussions While cleaning up the code, I manged to find some time to further improve the KDA code such that the overall prompt processing speed increases by 20% and VRAM saving that allows you to run extra 64k context across the board for a fixed size of VRAM, e.g. IQ3\_M on 3090 can run 160k when the merged version can only run 96k. For people who are working at the cutting edge, please feel free to clone the code and tell me if there are any bugs. git clone [https://github.com/ymcki/llama.cpp](https://github.com/ymcki/llama.cpp) \--branch Kimi-Linear This new change will likely to be in the Qwen3-Next and Kimi-Linear unification PR that I will be working with pwilkin and ngxson. So reporting bugs should help us getting this PR done early. When this unified delta net PR is done, Qwen3-Next should also enjoy 20% gain in pp speed. Context gain in Qwen3-Next probably won't be as dramatic as its KV cache is not MLA. Hope you all will enjoy this model. I think while it is not as knowledgeable as it is only trained on 5.7T tokens (vs 36T for Qwen3-30B-A3B), it is the only game in town that allows low end hardware to run 1M tokens at high accuracy, so I believe you should be able to find use cases for it.
2026-02-06T11:32:47
https://www.reddit.com/r/LocalLLaMA/comments/1qxf7sf/kimilinear_support_is_merged_to_llamacpp/
Ok_Warning2146
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxf7sf
false
null
t3_1qxf7sf
/r/LocalLLaMA/comments/1qxf7sf/kimilinear_support_is_merged_to_llamacpp/
false
false
self
81
{'enabled': False, 'images': [{'id': 'Kte-gNs9V2aUIoBe_4Yw4msrRYLvCyAEa9ktjCmziH4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Kte-gNs9V2aUIoBe_4Yw4msrRYLvCyAEa9ktjCmziH4.png?width=108&crop=smart&auto=webp&s=56270db5c7e749ea197c73bb198725dd3f7c0254', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Kte-gNs9V2aUIoBe_4Yw4msrRYLvCyAEa9ktjCmziH4.png?width=216&crop=smart&auto=webp&s=d43c331cd5ecb1c24d37a79c3cb26e015e0340a3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Kte-gNs9V2aUIoBe_4Yw4msrRYLvCyAEa9ktjCmziH4.png?width=320&crop=smart&auto=webp&s=451652ed526446a61b25d25d3cedbfeebd3d2cff', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Kte-gNs9V2aUIoBe_4Yw4msrRYLvCyAEa9ktjCmziH4.png?width=640&crop=smart&auto=webp&s=22c6d5b10d38c630d7d73ea01a86bd0ce81b6645', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Kte-gNs9V2aUIoBe_4Yw4msrRYLvCyAEa9ktjCmziH4.png?width=960&crop=smart&auto=webp&s=3a54087105fe557f4e396ab7f123b0f8fdf9a61a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Kte-gNs9V2aUIoBe_4Yw4msrRYLvCyAEa9ktjCmziH4.png?width=1080&crop=smart&auto=webp&s=72b5227b4107678255cf277f7d07c3cc3f14a2c6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Kte-gNs9V2aUIoBe_4Yw4msrRYLvCyAEa9ktjCmziH4.png?auto=webp&s=43608aef9b785ed819a2aacb3b8585d27177f13e', 'width': 1200}, 'variants': {}}]}
best local LLM for 32gb VRAM and 96gb RAM?
0
I'm new into this world, just have the equipment now and I'd like to experiment. Can you recommend me the strongest picks?
2026-02-06T11:31:30
https://www.reddit.com/r/LocalLLaMA/comments/1qxf6wt/best_local_llm_for_32gb_vram_and_96gb_ram/
kh3t
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxf6wt
false
null
t3_1qxf6wt
/r/LocalLLaMA/comments/1qxf6wt/best_local_llm_for_32gb_vram_and_96gb_ram/
false
false
self
0
null
Small, fast Guardrail model for LLM input moderation and toxicity detection. Detects 14 types of unsafe content.
1
[removed]
2026-02-06T11:24:36
https://www.reddit.com/r/LocalLLaMA/comments/1qxf2bq/small_fast_guardrail_model_for_llm_input/
Ok_Hold_5385
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxf2bq
false
null
t3_1qxf2bq
/r/LocalLLaMA/comments/1qxf2bq/small_fast_guardrail_model_for_llm_input/
false
false
self
1
{'enabled': False, 'images': [{'id': 'pIdMmPbaburmNRal1L-wF45yVkwFCdJYq2R8SrY5Gps', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/pIdMmPbaburmNRal1L-wF45yVkwFCdJYq2R8SrY5Gps.png?width=108&crop=smart&auto=webp&s=2eb6a213165d492c90ddf72a617f4b4f209cf2cc', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/pIdMmPbaburmNRal1L-wF45yVkwFCdJYq2R8SrY5Gps.png?width=216&crop=smart&auto=webp&s=1a3f53677657f14915a721147b1f26ed06a6946a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/pIdMmPbaburmNRal1L-wF45yVkwFCdJYq2R8SrY5Gps.png?width=320&crop=smart&auto=webp&s=c054200226ca81fa3e31af0a68b9d3209a1e62f3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/pIdMmPbaburmNRal1L-wF45yVkwFCdJYq2R8SrY5Gps.png?width=640&crop=smart&auto=webp&s=6d143e89c1d5c0c89598e72bdfb3d4f1c5b659c5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/pIdMmPbaburmNRal1L-wF45yVkwFCdJYq2R8SrY5Gps.png?width=960&crop=smart&auto=webp&s=25ac8a048d7166216719787102ecd23eb9c5385a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/pIdMmPbaburmNRal1L-wF45yVkwFCdJYq2R8SrY5Gps.png?width=1080&crop=smart&auto=webp&s=365e904a0f97ee42c3f72773fa71ffa9639bac84', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/pIdMmPbaburmNRal1L-wF45yVkwFCdJYq2R8SrY5Gps.png?auto=webp&s=509551aa22845ef24e39396694fe657a582ecc91', 'width': 1200}, 'variants': {}}]}
Small Guardrail model for LLM input moderation and toxicity detection. Detects 14 types of unsafe content. Very fast, ideal for real-time applications.
1
[removed]
2026-02-06T11:20:50
https://www.reddit.com/r/LocalLLaMA/comments/1qxezum/small_guardrail_model_for_llm_input_moderation/
Ok_Hold_5385
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxezum
false
null
t3_1qxezum
/r/LocalLLaMA/comments/1qxezum/small_guardrail_model_for_llm_input_moderation/
false
false
self
1
{'enabled': False, 'images': [{'id': 'pIdMmPbaburmNRal1L-wF45yVkwFCdJYq2R8SrY5Gps', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/pIdMmPbaburmNRal1L-wF45yVkwFCdJYq2R8SrY5Gps.png?width=108&crop=smart&auto=webp&s=2eb6a213165d492c90ddf72a617f4b4f209cf2cc', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/pIdMmPbaburmNRal1L-wF45yVkwFCdJYq2R8SrY5Gps.png?width=216&crop=smart&auto=webp&s=1a3f53677657f14915a721147b1f26ed06a6946a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/pIdMmPbaburmNRal1L-wF45yVkwFCdJYq2R8SrY5Gps.png?width=320&crop=smart&auto=webp&s=c054200226ca81fa3e31af0a68b9d3209a1e62f3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/pIdMmPbaburmNRal1L-wF45yVkwFCdJYq2R8SrY5Gps.png?width=640&crop=smart&auto=webp&s=6d143e89c1d5c0c89598e72bdfb3d4f1c5b659c5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/pIdMmPbaburmNRal1L-wF45yVkwFCdJYq2R8SrY5Gps.png?width=960&crop=smart&auto=webp&s=25ac8a048d7166216719787102ecd23eb9c5385a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/pIdMmPbaburmNRal1L-wF45yVkwFCdJYq2R8SrY5Gps.png?width=1080&crop=smart&auto=webp&s=365e904a0f97ee42c3f72773fa71ffa9639bac84', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/pIdMmPbaburmNRal1L-wF45yVkwFCdJYq2R8SrY5Gps.png?auto=webp&s=509551aa22845ef24e39396694fe657a582ecc91', 'width': 1200}, 'variants': {}}]}
Built an open-source chat UI with message editing, branching, and E2E encrypted phone sync - works with any model via OpenClaw
0
I built ClawGPT because I wanted a chat interface that didn't feel like a step backwards from what's possible. It connects to models through OpenClaw, so anything OpenClaw can talk to (Claude, local models, etc.) works out of the box. The main things I was missing from other UIs: \- Edit any message in the conversation, not just the last one \- Conversation branching (explore different paths) \- Regenerate a response with a different model mid-conversation \- Search across all your chats, including semantic search \- Export and import everything \- Sync between your phone and desktop with actual E2E encryption (TweetNaCl, X25519 + XSalsa20-Poly1305) \- Voice input and TTS It's just HTML/CSS/JS. No build step, no node\_modules, no React. Clone it, open it, use it. The phone sync works through an encrypted relay - the server never sees plaintext. You scan a QR code from the desktop app and the phone becomes a thin client. MIT licensed, self-hosted, all data stays local. GitHub: [https://github.com/craihub/clawgpt](https://github.com/craihub/clawgpt) Android app for testing: [https://play.google.com/apps/testing/com.curvereality.clawgpt](https://play.google.com/apps/testing/com.curvereality.clawgpt)
2026-02-06T11:09:23
https://www.reddit.com/r/LocalLLaMA/comments/1qxesk2/built_an_opensource_chat_ui_with_message_editing/
CurveReality
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxesk2
false
null
t3_1qxesk2
/r/LocalLLaMA/comments/1qxesk2/built_an_opensource_chat_ui_with_message_editing/
false
false
self
0
{'enabled': False, 'images': [{'id': 'uVTz9jT8jK45qssOi7BzwCKtgla8C2HTpQ1SDLTbHAQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/uVTz9jT8jK45qssOi7BzwCKtgla8C2HTpQ1SDLTbHAQ.png?width=108&crop=smart&auto=webp&s=7bdec484265e6f5d15261f7b2750ba70f897ba40', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/uVTz9jT8jK45qssOi7BzwCKtgla8C2HTpQ1SDLTbHAQ.png?width=216&crop=smart&auto=webp&s=30ea751c2ab819cd57092730f1a8cd53de19ee2b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/uVTz9jT8jK45qssOi7BzwCKtgla8C2HTpQ1SDLTbHAQ.png?width=320&crop=smart&auto=webp&s=19470b95c3deab149f494ae8a5d580b56db90f3f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/uVTz9jT8jK45qssOi7BzwCKtgla8C2HTpQ1SDLTbHAQ.png?width=640&crop=smart&auto=webp&s=19d6ba82fcbccd263e89c4cc001db76ac8c6c0c8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/uVTz9jT8jK45qssOi7BzwCKtgla8C2HTpQ1SDLTbHAQ.png?width=960&crop=smart&auto=webp&s=d9ddc94184b1e209ffe981c455a97408471847cc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/uVTz9jT8jK45qssOi7BzwCKtgla8C2HTpQ1SDLTbHAQ.png?width=1080&crop=smart&auto=webp&s=d0aaa525be4bb1f6c097913cc1e44a4340edfc62', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/uVTz9jT8jK45qssOi7BzwCKtgla8C2HTpQ1SDLTbHAQ.png?auto=webp&s=e4635c24c03e28bb6c859bd59ffda40d5f85f73f', 'width': 1200}, 'variants': {}}]}
Kimi-Linear support has been merged into llama.cpp
135
2026-02-06T11:04:21
https://github.com/ggml-org/llama.cpp/pull/18755
jacek2023
github.com
1970-01-01T00:00:00
0
{}
1qxepct
false
null
t3_1qxepct
/r/LocalLLaMA/comments/1qxepct/kimilinear_support_has_been_merged_into_llamacpp/
false
false
https://external-preview…ed681d6b6476b3bf
135
{'enabled': False, 'images': [{'id': 'Kte-gNs9V2aUIoBe_4Yw4msrRYLvCyAEa9ktjCmziH4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Kte-gNs9V2aUIoBe_4Yw4msrRYLvCyAEa9ktjCmziH4.png?width=108&crop=smart&auto=webp&s=56270db5c7e749ea197c73bb198725dd3f7c0254', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Kte-gNs9V2aUIoBe_4Yw4msrRYLvCyAEa9ktjCmziH4.png?width=216&crop=smart&auto=webp&s=d43c331cd5ecb1c24d37a79c3cb26e015e0340a3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Kte-gNs9V2aUIoBe_4Yw4msrRYLvCyAEa9ktjCmziH4.png?width=320&crop=smart&auto=webp&s=451652ed526446a61b25d25d3cedbfeebd3d2cff', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Kte-gNs9V2aUIoBe_4Yw4msrRYLvCyAEa9ktjCmziH4.png?width=640&crop=smart&auto=webp&s=22c6d5b10d38c630d7d73ea01a86bd0ce81b6645', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Kte-gNs9V2aUIoBe_4Yw4msrRYLvCyAEa9ktjCmziH4.png?width=960&crop=smart&auto=webp&s=3a54087105fe557f4e396ab7f123b0f8fdf9a61a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Kte-gNs9V2aUIoBe_4Yw4msrRYLvCyAEa9ktjCmziH4.png?width=1080&crop=smart&auto=webp&s=72b5227b4107678255cf277f7d07c3cc3f14a2c6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Kte-gNs9V2aUIoBe_4Yw4msrRYLvCyAEa9ktjCmziH4.png?auto=webp&s=43608aef9b785ed819a2aacb3b8585d27177f13e', 'width': 1200}, 'variants': {}}]}
To calculate the memory available for a gguf model do I just add the gpus vram and the ram?
1
I'd like to try some llms on my pc and I don't know if there is something else I have to take into account.
2026-02-06T10:45:32
https://www.reddit.com/r/LocalLLaMA/comments/1qxedgt/to_calculate_the_memory_available_for_a_gguf/
Think_Question_6677
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxedgt
false
null
t3_1qxedgt
/r/LocalLLaMA/comments/1qxedgt/to_calculate_the_memory_available_for_a_gguf/
false
false
self
1
null
Fan Control: RTX PRO 6000 Blackwell Max-Q
1
Hi, I am running a 2U rack server, currently 2/4 GPU slots are occupied by PYN NVIDIA RTX PRO 6000 Blackwell Max-Q GPUs. The system was bought as a pre-build. The server is quite loud, compared to the others servers I am running. I was curious and checked the system, there is one airflow lane/shroud for the GPUs. I can easily control the fan curves of the case fans, but I was wondering about the GPU fans itself. I used `nvidia-smi` to monitor the gpu fans and even at 87° Celsius, the fans barley hit 60% fan speed. As far as I understood `sudo nvidia-smi -gtt 80` would set the cooling target temp to 80 Celsius. I was hoping that this improves the overall airflow in the system and limit the amount the case fans have to push. But I get: GPU Target Temperature Threshold not supported for GPU 00000000:01:00.0. Treating as warning and moving on. GPU Target Temperature Threshold not supported for GPU 00000000:02:00.0. Treating as warning and moving on. I am running this on a headless linux. Do you guys know a good way of controlling the gpus fan speed?Hi,I am running a 2U rack server, currently 2/4 GPU slots are occupied by PYN NVIDIA RTX PRO 6000 Blackwell Max-Q GPUs.The system was bought as a pre-build. The server is quite loud, compared to the others servers I am running.I was curious and checked the system, there is one airflow lane/shroud for the GPUs.I can easily control the fan curves of the case fans, but I was wondering about the GPU fans itself. I used nvidia-smi to monitor the gpu fans and even at 87° Celsius, the fans barley hit 60% fan [speed.As](http://speed.As) far as I understood sudo nvidia-smi -gtt 80 would set the cooling target temp to 80 Celsius. I was hoping that this improves the overall airflow in the system and limit the amount the case fans have to push. But I get:GPU Target Temperature Threshold not supported for GPU 00000000:01:00.0. Treating as warning and moving on. GPU Target Temperature Threshold not supported for GPU 00000000:02:00.0. Treating as warning and moving on.I am running this on a headless linux. Do you guys know a good way of controlling the gpus fan speed?
2026-02-06T10:39:39
https://www.reddit.com/r/LocalLLaMA/comments/1qxe9tc/fan_control_rtx_pro_6000_blackwell_maxq/
fpgaDude
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxe9tc
false
null
t3_1qxe9tc
/r/LocalLLaMA/comments/1qxe9tc/fan_control_rtx_pro_6000_blackwell_maxq/
false
false
self
1
null
anthropic taking shots at openai over ads is peak "don't be evil" energy before the inevitable pivot
1
we all saw this coming. openai is basically turning chatgpt into a billboard, and now anthropic is trying to win the pr war by claiming claude will stay "ad-free." they even ran a super bowl ad mocking the idea of turning ai into a marketing pitch. honestly, it’s hilarious watching these companies fight for the moral high ground while burning billions in compute. anthropic is positioning themselves as the "trustworthy" alternative, basically trying to be the early-day google to openai's microsoft. but i don’t buy the "forever" promise for a second. how many times have we heard "we won't monetize your data" or "no ads ever" only for the company to cave 24 months later when the vcs demand a return on investment? ads in a chatbot aren't like banner ads on a website. they’re way more insidious. imagine asking for a python library recommendation and getting a "sponsored" suggestion for some bloated proprietary tool, or asking for a medical explanation and getting a pharma plug. once you introduce a middleman who pays for placement, the utility of the model is cooked. the "helpful, harmless, honest" thing goes out the window the moment a check clears. this is exactly why this sub exists and why local models are the only real long-term solution. i don't care how clean claude looks today—if you don't own the weights, you're the product. if openai goes full ad-supported, do we think anthropic actually stays clean, or do they just wait for us to get used to it before they flip the switch too? would a "sponsored" chatgpt be the breaking point for you, or is the model quality still more important than the privacy/ad-hell trade-off? **Source:** https://arstechnica.com/ai/2026/02/should-ai-chatbots-have-ads-anthropic-says-no/
2026-02-06T10:36:06
https://www.reddit.com/r/LocalLLaMA/comments/1qxe7o1/anthropic_taking_shots_at_openai_over_ads_is_peak/
Alarming_Bluebird648
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxe7o1
false
null
t3_1qxe7o1
/r/LocalLLaMA/comments/1qxe7o1/anthropic_taking_shots_at_openai_over_ads_is_peak/
false
false
self
1
{'enabled': False, 'images': [{'id': 'Dz0twP-wmx1TmyAzH1KhA7ovtIfxqyj-hxG99u-EbfA', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Dz0twP-wmx1TmyAzH1KhA7ovtIfxqyj-hxG99u-EbfA.png?width=108&crop=smart&auto=webp&s=b9481ca64aba39945f1e9a3a17d3b1839ceef1ee', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/Dz0twP-wmx1TmyAzH1KhA7ovtIfxqyj-hxG99u-EbfA.png?width=216&crop=smart&auto=webp&s=5d4f530eb946c713f05e955095310b9576fbed13', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/Dz0twP-wmx1TmyAzH1KhA7ovtIfxqyj-hxG99u-EbfA.png?width=320&crop=smart&auto=webp&s=cee740158c84c88256f21a1a41e055a69a1cb9f5', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/Dz0twP-wmx1TmyAzH1KhA7ovtIfxqyj-hxG99u-EbfA.png?width=640&crop=smart&auto=webp&s=83582e866a30fce9c1d1a4c45c67972bfaae6ab1', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/Dz0twP-wmx1TmyAzH1KhA7ovtIfxqyj-hxG99u-EbfA.png?width=960&crop=smart&auto=webp&s=7a21c8615c2d56cf3aa7361d2330d2996fcbb283', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/Dz0twP-wmx1TmyAzH1KhA7ovtIfxqyj-hxG99u-EbfA.png?width=1080&crop=smart&auto=webp&s=c3315653baa9c4428dfbfc4aedc4eeedfc0f6929', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Dz0twP-wmx1TmyAzH1KhA7ovtIfxqyj-hxG99u-EbfA.png?auto=webp&s=8204bfef39ca35029d285710e7dcbe063f22b455', 'width': 1152}, 'variants': {}}]}
Is rasa and using nlu/nlp still the way to go?
0
I managed to get an internship at a bank headquarters and I was assigned to develop a customer support agent that can be interacted with via calls and can manage to interact with CRM etcs based on user input, I was wondering is the use of nlp/nlu still the way to go in this situation? Are llms still unreliable due to hallucinations and expensive local hosting ?what is the best approach for this to be functional in a real world scenario?any help is much appreciated
2026-02-06T09:37:11
https://www.reddit.com/r/LocalLLaMA/comments/1qxd8yc/is_rasa_and_using_nlunlp_still_the_way_to_go/
GoldBed2885
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxd8yc
false
null
t3_1qxd8yc
/r/LocalLLaMA/comments/1qxd8yc/is_rasa_and_using_nlunlp_still_the_way_to_go/
false
false
self
0
null
[ Removed by moderator ]
1
[removed]
2026-02-06T09:30:19
https://www.tomshardware.com/pc-components/ram/data-centers-will-consume-70-percent-of-memory-chips-made-in-2026-supply-shortfall-will-cause-the-chip-shortage-to-spread-to-other-segments
EchoOfOppenheimer
tomshardware.com
1970-01-01T00:00:00
0
{}
1qxd59f
false
null
t3_1qxd59f
/r/LocalLLaMA/comments/1qxd59f/data_centers_will_consume_70_percent_of_memory/
false
false
null
1
null
Is there any hope left for those without beast pc?
0
With so many open-source LLMs popping up, it’s still really hard to run them if your GPU isn’t powerful enough. Imagine how amazing it would be to run a Claude like model locally on your own system and keep all your personal info private, instead of relying on API services for life-coaching and other tasks. The only solution I’ve found so far is AIRLLM, which claims it can run a 405B-parameter model on just 8 GB of VRAM has anyone actually tried it?
2026-02-06T09:23:09
https://www.reddit.com/r/LocalLLaMA/comments/1qxd17e/is_there_any_hope_left_for_those_without_beast_pc/
BeastMad
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxd17e
false
null
t3_1qxd17e
/r/LocalLLaMA/comments/1qxd17e/is_there_any_hope_left_for_those_without_beast_pc/
false
false
self
0
null
Best online service to upload custom text models/LLMs to? (NOT a raw VPS/GPU rental service)
0
I'm not talking about a [vast.ai](http://vast.ai) or Runpod esque service. Rather I'm talking about a service that's literally just a front end you can upload custom .safetensors/.GGUF text models that you can generate sith and pay per generation (image or video gen is a bonus too, but text is my priority if it comes down to it). The reason for this is to have an easy to use service on the go, as I can't spin up a VPS with a GPU, ssh in, and download and configure a bunch of LM Studio/ollama/Stable Diffusion bs every single time all on mobile I want a few things and then delete it. Just looking for a simple interface I can use.
2026-02-06T09:19:44
https://www.reddit.com/r/LocalLLaMA/comments/1qxczev/best_online_service_to_upload_custom_text/
Pseudopharmacology
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxczev
false
null
t3_1qxczev
/r/LocalLLaMA/comments/1qxczev/best_online_service_to_upload_custom_text/
false
false
self
0
{'enabled': False, 'images': [{'id': 'MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=108&crop=smart&auto=webp&s=a08158a2ec290c8157b492f314bfb148408be1fc', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=216&crop=smart&auto=webp&s=5d4693d9fc011431e9348152136fa7a13c95504b', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=320&crop=smart&auto=webp&s=93ef867725a538dad3a6209e5062d3d1de60aeaa', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=640&crop=smart&auto=webp&s=fc186b216811c20876ecdaf0e913cc0b59498d7a', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=960&crop=smart&auto=webp&s=67812638cc7d2b930cd8bebf733409c3b2d92397', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=1080&crop=smart&auto=webp&s=bc092f31a95e3a3df682dc8f7222b0fb1363a5df', 'width': 1080}], 'source': {'height': 2250, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?auto=webp&s=c5b1db2b11bd21a955cbe1e863cde94ef57607f4', 'width': 4000}, 'variants': {}}]}
Best online service to upload custom text models/LLMs to? (NOT a raw VPS/GPU rental service)
1
[removed]
2026-02-06T09:07:17
https://www.reddit.com/r/LocalLLaMA/comments/1qxcsic/best_online_service_to_upload_custom_text/
xMicro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxcsic
false
null
t3_1qxcsic
/r/LocalLLaMA/comments/1qxcsic/best_online_service_to_upload_custom_text/
false
false
self
1
{'enabled': False, 'images': [{'id': 'MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=108&crop=smart&auto=webp&s=a08158a2ec290c8157b492f314bfb148408be1fc', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=216&crop=smart&auto=webp&s=5d4693d9fc011431e9348152136fa7a13c95504b', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=320&crop=smart&auto=webp&s=93ef867725a538dad3a6209e5062d3d1de60aeaa', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=640&crop=smart&auto=webp&s=fc186b216811c20876ecdaf0e913cc0b59498d7a', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=960&crop=smart&auto=webp&s=67812638cc7d2b930cd8bebf733409c3b2d92397', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=1080&crop=smart&auto=webp&s=bc092f31a95e3a3df682dc8f7222b0fb1363a5df', 'width': 1080}], 'source': {'height': 2250, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?auto=webp&s=c5b1db2b11bd21a955cbe1e863cde94ef57607f4', 'width': 4000}, 'variants': {}}]}
LOCAL AI on mobile phone
0
if you're finding like LM studio in ur mobile phone device or tablet without needed to download from ollama I'll introducing secret AI app the secret AI app like LM studio but in mobile version you can show your video or picture wat waiting for download now.
2026-02-06T09:01:09
https://play.google.com/store/apps/details?id=io.secretai.llm
Adventurous_Role_489
play.google.com
1970-01-01T00:00:00
0
{}
1qxcp1c
false
null
t3_1qxcp1c
/r/LocalLLaMA/comments/1qxcp1c/local_ai_on_mobile_phone/
false
false
https://external-preview…af1a785d26076b5f
0
{'enabled': False, 'images': [{'id': 'RGeYDv5DoSwUziXCNknbeBPYT40MbYoq08vEvBJi35s', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/RGeYDv5DoSwUziXCNknbeBPYT40MbYoq08vEvBJi35s.png?width=108&crop=smart&auto=webp&s=c0155fc2b3bca0cb00d73df67896dfe09a84ab2c', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/RGeYDv5DoSwUziXCNknbeBPYT40MbYoq08vEvBJi35s.png?width=216&crop=smart&auto=webp&s=72472aa9e5210463df8d02ba3566482b22942d44', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/RGeYDv5DoSwUziXCNknbeBPYT40MbYoq08vEvBJi35s.png?width=320&crop=smart&auto=webp&s=bfe538a04d55eb923f20d8db1418394304552418', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/RGeYDv5DoSwUziXCNknbeBPYT40MbYoq08vEvBJi35s.png?auto=webp&s=e99a4780953c945c0feeda9d40a40a58e7b8f04f', 'width': 512}, 'variants': {}}]}
$50 for everybody that has a Claude subscription! Settings > Usage > Claim
0
Just noticed this in my dashboard and wanted to share before they potentially pull it back. If you are a subscriber, check your **Settings > Usage** tab. There should be a "Claim" button for $50 in API credits. **The Context:** This seems to be a push for the newly released **Opus 4.6**. Anthropic likely wants to flood the zone with usage data and get people testing the new capabilities immediately without worrying about the API costs. Go grab it.
2026-02-06T08:57:48
https://www.reddit.com/r/LocalLLaMA/comments/1qxcn14/50_for_everybody_that_has_a_claude_subscription/
AykutSek
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxcn14
false
null
t3_1qxcn14
/r/LocalLLaMA/comments/1qxcn14/50_for_everybody_that_has_a_claude_subscription/
false
false
self
0
null
No NVIDIA? No Problem. My 2018 "Potato" 8th Gen i3 hits 10 TPS on 16B MoE.
897
I’m writing this from Burma. Out here, we can’t all afford the latest NVIDIA 4090s or high-end MacBooks. If you have a tight budget, corporate AI like ChatGPT will try to gatekeep you. If you ask it if you can run a 16B model on an old dual-core i3, it’ll tell you it’s "impossible." I spent a month figuring out how to prove them wrong. After 30 days of squeezing every drop of performance out of my hardware, I found the peak. I’m running DeepSeek-Coder-V2-Lite (16B MoE) on an HP ProBook 650 G5 (i3-8145U, 16GB Dual-Channel RAM) at near-human reading speeds. \#### The Battle: CPU vs iGPU I ran a 20-question head-to-head test with no token limits and real-time streaming. | Device | Average Speed | Peak Speed | My Rating | | --- | --- | --- | --- | | CPU | 8.59 t/s | 9.26 t/s | 8.5/10 - Snappy and solid logic. | | iGPU (UHD 620) | 8.99 t/s | 9.73 t/s | 9.0/10 - A beast once it warms up. | The Result: The iGPU (OpenVINO) is the winner, proving that even integrated Intel graphics can handle heavy lifting if you set it up right. \#### How I Squeezed the Performance: \* MoE is the "Cheat Code": 16B parameters sounds huge, but it only calculates 2.4B per token. It’s faster and smarter than 3B-4B dense models. \* Dual-Channel is Mandatory: I’m running 16GB (2x8GB). If you have single-channel, don't even bother; your bandwidth will choke. \* Linux is King: I did this on Ubuntu. Windows background processes are a luxury my "potato" can't afford. \* OpenVINO Integration: Don't use OpenVINO alone—it's dependency hell. Use it as a backend for llama-cpp-python. \#### The Reality Check 1. First-Run Lag: The iGPU takes time to compile. It might look stuck. Give it a minute—the "GPU" is just having his coffee. 2. Language Drift: On iGPU, it sometimes slips into Chinese tokens, but the logic never breaks. I’m sharing this because you shouldn't let a lack of money stop you from learning AI. If I can do this on an i3 in Burma, you can do it too.
2026-02-06T08:56:17
https://www.reddit.com/gallery/1qxcm5g
RelativeOperation483
reddit.com
1970-01-01T00:00:00
0
{}
1qxcm5g
false
null
t3_1qxcm5g
/r/LocalLLaMA/comments/1qxcm5g/no_nvidia_no_problem_my_2018_potato_8th_gen_i3/
false
false
https://b.thumbs.redditm…e4NDcZO0YLwQ.jpg
897
null
Mitchell Hashimoto (author of Ghostty): My AI Adoption Journey
16
2026-02-06T08:50:21
https://mitchellh.com/writing/my-ai-adoption-journey
hedgehog0
mitchellh.com
1970-01-01T00:00:00
0
{}
1qxciqr
false
null
t3_1qxciqr
/r/LocalLLaMA/comments/1qxciqr/mitchell_hashimoto_author_of_ghostty_my_ai/
false
false
default
16
null
[Benchmark] No NVIDIA? No Problem. My 2018 "Potato" i3 hits 10 TPS on 16B MoE.
1
[removed]
2026-02-06T08:45:24
https://www.reddit.com/gallery/1qxcfyz
esterzollar
reddit.com
1970-01-01T00:00:00
0
{}
1qxcfyz
false
null
t3_1qxcfyz
/r/LocalLLaMA/comments/1qxcfyz/benchmark_no_nvidia_no_problem_my_2018_potato_i3/
false
false
https://b.thumbs.redditm…KwbjJ2NiRGjM.jpg
1
null
[Benchmark] No NVIDIA? No Problem. My 2018 "Potato" 8th Gen i3 hits 10 TPS on 16B MoE.
1
[removed]
2026-02-06T08:43:40
https://www.reddit.com/gallery/1qxceyf
esterzollar
reddit.com
1970-01-01T00:00:00
0
{}
1qxceyf
false
null
t3_1qxceyf
/r/LocalLLaMA/comments/1qxceyf/benchmark_no_nvidia_no_problem_my_2018_potato_8th/
false
false
https://b.thumbs.redditm…_uio6nD50LuY.jpg
1
null
Do you find AI memory features actually helpful?
8
I've tried using them but find them confusing and opaque. Instead, I'm experimenting with a simpler approach using .md files: * Keep a file with important info and rules * Explicitly reference it at conversation start * Update it manually when needed This feels more reliable because: * I know exactly what's in context * No mystery "remembering" of things I forgot I mentioned * Easier to debug when the AI behaves weirdly * No token bloat from accumulated junk The tradeoff is more manual work, but I'm wondering if that's actually better than hoping the memory system captured the right stuff. What's your experience? Do you use memory features religiously, avoid them, or handle context differently?
2026-02-06T08:41:48
https://www.reddit.com/r/LocalLLaMA/comments/1qxcdx0/do_you_find_ai_memory_features_actually_helpful/
Deep_Traffic_7873
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxcdx0
false
null
t3_1qxcdx0
/r/LocalLLaMA/comments/1qxcdx0/do_you_find_ai_memory_features_actually_helpful/
false
false
self
8
null
[Benchmark] No NVIDIA? No Problem. My 2018 "Potato" Gen 8th i3 hits ~10 TPS on 16B MoE.
1
[removed]
2026-02-06T08:34:40
https://www.reddit.com/gallery/1qxc9xf
PlasticLegitimate467
reddit.com
1970-01-01T00:00:00
0
{}
1qxc9xf
false
null
t3_1qxc9xf
/r/LocalLLaMA/comments/1qxc9xf/benchmark_no_nvidia_no_problem_my_2018_potato_gen/
false
false
https://preview.redd.it/…049b0f4db45e589e
1
null
"Minimum Buy-in" Build
100
Just finished putting this together. Supermicro x10drh One Radeon pro v340 on each 6 pcie 3.0 x8 slots. The only x16 slot is bifurcated to x8x4x4 for dual Nvme drives and another GPU down the line. But testing first for peak power. I have 15A 120v socket only.
2026-02-06T08:32:30
https://i.redd.it/exb6j45a5uhg1.jpeg
jmuff98
i.redd.it
1970-01-01T00:00:00
0
{}
1qxc8qj
false
null
t3_1qxc8qj
/r/LocalLLaMA/comments/1qxc8qj/minimum_buyin_build/
false
false
https://b.thumbs.redditm…I1bXUX1t-grU.jpg
100
{'enabled': True, 'images': [{'id': 'tSlWwXxgXHx7_Jn07uvWTe1OaF8UH71LL5UKSsfx6eg', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/exb6j45a5uhg1.jpeg?width=108&crop=smart&auto=webp&s=7d77e62838ddc0db7064780c89928d058e3163a1', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/exb6j45a5uhg1.jpeg?width=216&crop=smart&auto=webp&s=2aa1eae0f4b4c96a2b620b90e12f2697b390555c', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/exb6j45a5uhg1.jpeg?width=320&crop=smart&auto=webp&s=e651188e58f289755f91ab71e5f6b513d0e5ffca', 'width': 320}, {'height': 481, 'url': 'https://preview.redd.it/exb6j45a5uhg1.jpeg?width=640&crop=smart&auto=webp&s=d0ebdf0ecbf44394a813508836f918d0e6781d0b', 'width': 640}, {'height': 722, 'url': 'https://preview.redd.it/exb6j45a5uhg1.jpeg?width=960&crop=smart&auto=webp&s=f21bf6da1ef1c164914e85994c08352c96055f3c', 'width': 960}, {'height': 813, 'url': 'https://preview.redd.it/exb6j45a5uhg1.jpeg?width=1080&crop=smart&auto=webp&s=33f3dca1c36e985954171117da5dcc823de68e80', 'width': 1080}], 'source': {'height': 3072, 'url': 'https://preview.redd.it/exb6j45a5uhg1.jpeg?auto=webp&s=26718ba12241dd95b6aabf6f8ae67fe30279aedb', 'width': 4080}, 'variants': {}}]}
SenseTime just open-sourced SenseNova-SI 1.3, the latest model that scales on Spatial Intelligence.
3
On the [EASI](https://x.com/hashtag/EASI?src=hashtag_click) leaderboard, it ranks No.1 overall under EASI-8, outperforming [Gemini3](https://x.com/hashtag/Gemini3?src=hashtag_click) in average performance across eight spatial intelligence benchmarks. From safer [AutonomousDriving](https://x.com/hashtag/AutonomousDriving?src=hashtag_click) in complex environments to smarter home [robots](https://x.com/hashtag/robots?src=hashtag_click) , SenseNova-SI 1.3 accelerates and broadens deployment opportunities across enterprise and consumer applications. https://preview.redd.it/k8k5bour1uhg1.jpg?width=900&format=pjpg&auto=webp&s=c99d843fed6b6791c2c2aada730257a273299b7c Open-Source Resources: [SenseNova-SI - a sensenova Collection](https://huggingface.co/collections/sensenova/sensenova-si) SenseNova-SI Code: [OpenSenseNova/SenseNova-SI: Scaling Spatial Intelligence with Multimodal Foundation Models](https://github.com/OpenSenseNova/SenseNova-SI)
2026-02-06T08:15:51
https://www.reddit.com/r/LocalLLaMA/comments/1qxbz2u/sensetime_just_opensourced_sensenovasi_13_the/
Soggy_Mission3372
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxbz2u
false
null
t3_1qxbz2u
/r/LocalLLaMA/comments/1qxbz2u/sensetime_just_opensourced_sensenovasi_13_the/
false
false
https://a.thumbs.redditm…SfRgcEJez1h4.jpg
3
null
[Benchmark] No NVIDIA? No Problem. My 2018 "Potato" i3 hits 10 TPS on 16B MoE.
1
[removed]
2026-02-06T08:13:16
https://www.reddit.com/gallery/1qxbxlm
esteinhaven
reddit.com
1970-01-01T00:00:00
0
{}
1qxbxlm
false
null
t3_1qxbxlm
/r/LocalLLaMA/comments/1qxbxlm/benchmark_no_nvidia_no_problem_my_2018_potato_i3/
false
false
https://preview.redd.it/…cc6b0ffee14ef3f5
1
null
Running Kimi-k2.5 on CPU-only: EPYC 9175F Benchmarks & "Sweet Spot" Analysis
1
[removed]
2026-02-06T08:10:43
https://www.reddit.com/r/LocalLLaMA/comments/1qxbw42/running_kimik25_on_cpuonly_epyc_9175f_benchmarks/
Own-Astronaut6098
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxbw42
false
null
t3_1qxbw42
/r/LocalLLaMA/comments/1qxbw42/running_kimik25_on_cpuonly_epyc_9175f_benchmarks/
false
false
self
1
null
[Benchmark] No NVIDIA? No Problem. My 2018 "Potato" i3 hits 10 TPS on 16B MoE.
1
[removed]
2026-02-06T08:07:08
https://www.reddit.com/gallery/1qxbu0m
esterzollar
reddit.com
1970-01-01T00:00:00
0
{}
1qxbu0m
false
null
t3_1qxbu0m
/r/LocalLLaMA/comments/1qxbu0m/benchmark_no_nvidia_no_problem_my_2018_potato_i3/
false
false
https://a.thumbs.redditm…lckd7mxsb930.jpg
1
null
Seeking help: What is the best way to detect hallucinations
1
[removed]
2026-02-06T08:01:41
https://www.reddit.com/r/LocalLLaMA/comments/1qxbqsc/seeking_help_what_is_the_best_way_to_detect/
EffectiveDisk2293
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxbqsc
false
null
t3_1qxbqsc
/r/LocalLLaMA/comments/1qxbqsc/seeking_help_what_is_the_best_way_to_detect/
true
false
self
1
null
I generated a 5k Process Reward Model (PRM) dataset for Math Reasoning using DeepSeek-V3.1
1
I’ve built a pipeline to generate DeepStep-Math-5K. Unlike standard SFT datasets, this focus on Process Reward Modeling. The Methodology: 1. Problem Gen: Elite competition math (AIME/IMO style). 2. Solver: 16 independent solution paths sampled at T=0.7. 3. Consensus: Answers only verified if ≥ 5 agents reached the same deterministic value. 4. Audit: Negative chains were audited by a Critic model to find the "Pivot Point"—the exact step where the logic or calculation first broke. The dataset includes step\_labels like \[1, 1, 0, 0\] so you can see exactly where the model hallucinated. [https://huggingface.co/datasets/BlackSnowDot/DeepStep-Math-5K](https://huggingface.co/datasets/BlackSnowDot/DeepStep-Math-5K)
2026-02-06T08:00:53
https://www.reddit.com/r/LocalLLaMA/comments/1qxbq8a/i_generated_a_5k_process_reward_model_prm_dataset/
BlackSnowDoto
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxbq8a
false
null
t3_1qxbq8a
/r/LocalLLaMA/comments/1qxbq8a/i_generated_a_5k_process_reward_model_prm_dataset/
false
false
self
1
{'enabled': False, 'images': [{'id': 'Tw4pZa3I-EaVE2qDHV4JEqq064T7EQmSyx0_EtuhN8U', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Tw4pZa3I-EaVE2qDHV4JEqq064T7EQmSyx0_EtuhN8U.png?width=108&crop=smart&auto=webp&s=0cdc2285759564cff6f98f580f9a0378ac44b2b1', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Tw4pZa3I-EaVE2qDHV4JEqq064T7EQmSyx0_EtuhN8U.png?width=216&crop=smart&auto=webp&s=7dcb447f2355336e4f3a955319a1b377ea74c150', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Tw4pZa3I-EaVE2qDHV4JEqq064T7EQmSyx0_EtuhN8U.png?width=320&crop=smart&auto=webp&s=0d3f494c048f79b658c21f8b95a53843d6895c60', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Tw4pZa3I-EaVE2qDHV4JEqq064T7EQmSyx0_EtuhN8U.png?width=640&crop=smart&auto=webp&s=80c6f3212b1ea6e2c9e7d8d7f23034b533b36ad4', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Tw4pZa3I-EaVE2qDHV4JEqq064T7EQmSyx0_EtuhN8U.png?width=960&crop=smart&auto=webp&s=05b91505108f86c7cb85e098a590983a7ee8c47c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Tw4pZa3I-EaVE2qDHV4JEqq064T7EQmSyx0_EtuhN8U.png?width=1080&crop=smart&auto=webp&s=062e6902d7cfc9dad9348b4a4785a1fe300d2e48', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Tw4pZa3I-EaVE2qDHV4JEqq064T7EQmSyx0_EtuhN8U.png?auto=webp&s=e65fee5cc27186fd1bf0a98331cc84ab595bed89', 'width': 1200}, 'variants': {}}]}
Weird question: Which reasoning LLM produces the most interesting/coherent "thoughts"?
1
Basically, which LLM's internal monologue is the most entertaining to read? I'm trying to set up a thing for myself where I make an LLM play characters in social deduction-esque scenarios so I can watch them spout Death Note style internal monologues. When I ask Qwen 3 something, its reasoning output is usually very long and contains a lot of weird and unnecessary tangents as well as just straight up incorrect statements, even if its final answer is coherent. This is not ideal for my purposes. I was wondering if I used some other reasoning LLM trained with a different strategy, they could have much better "internal monologues". Instead of trying out every option out there, I am asking the community. I'm looking for models 10B or under, but discussion about larger models are welcome. If there aren't any good options, I might just prompt Qwen 3 8B Instruct to generate internal monologues explicitly. Hopefully it doesn't come to that though.
2026-02-06T07:58:42
https://www.reddit.com/r/LocalLLaMA/comments/1qxbovj/weird_question_which_reasoning_llm_produces_the/
airbus_a360_when
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxbovj
false
null
t3_1qxbovj
/r/LocalLLaMA/comments/1qxbovj/weird_question_which_reasoning_llm_produces_the/
false
false
self
1
null
Running Kimi-k2.5 on CPU-only: EPYC 9175F Benchmarks & "Sweet Spot" Analysis
1
[removed]
2026-02-06T07:54:42
https://www.reddit.com/r/LocalLLaMA/comments/1qxbmih/running_kimik25_on_cpuonly_epyc_9175f_benchmarks/
Own-Astronaut6098
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxbmih
false
null
t3_1qxbmih
/r/LocalLLaMA/comments/1qxbmih/running_kimik25_on_cpuonly_epyc_9175f_benchmarks/
false
false
self
1
{'enabled': False, 'images': [{'id': 'hnPusiPOm9pGzh-2qMkms_nsvw5o7Ih2O4KFqVDdCDQ', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/hnPusiPOm9pGzh-2qMkms_nsvw5o7Ih2O4KFqVDdCDQ.jpeg?width=108&crop=smart&auto=webp&s=ce24825b889e56bc3da45dc880985dba59ac3adb', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/hnPusiPOm9pGzh-2qMkms_nsvw5o7Ih2O4KFqVDdCDQ.jpeg?width=216&crop=smart&auto=webp&s=0e91e8d74fde37772cf2f3575c0fdd106329835e', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/hnPusiPOm9pGzh-2qMkms_nsvw5o7Ih2O4KFqVDdCDQ.jpeg?width=320&crop=smart&auto=webp&s=00cfd77116b18f8b9c2cca083f9d86a7c74f43f8', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/hnPusiPOm9pGzh-2qMkms_nsvw5o7Ih2O4KFqVDdCDQ.jpeg?auto=webp&s=37c617fd7a2a1430bbe502075742a51aad9fb50e', 'width': 480}, 'variants': {}}]}
Kimi K2.5 on 4x RTX 6000 Pro Blackwell runpod Benchmarks
13
I wanted to test the performance of Kimi K2.5 (mainly TTFT and Tok/s) on a Setup with 4x RTX 6000 Pro Blackwell. So I rented a system on runpod (for \~7$ per hour). Problem is I am a absolute beginner in Terms of Local LLMs. I figured that SGLang with KT-Kernel seem to be a good way for performance, if the entire model does not fit into VRAM. My whole command line looks like this: ``` python3 -m sglang.launch_server \ --host 0.0.0.0 \ --port 8090 \ --model /workspace/models/Kimi-K2.5 \ --tp-size 4 \ --kt-weight-path /workspace/models/Kimi-K2.5 \ --kt-cpuinfer 128 \ --kt-threadpool-count 2 \ --kt-num-gpu-experts 180 \ --kt-method RAWINT4 \ --kt-gpu-prefill-token-threshold 2048 \ --mem-fraction-static 0.85 \ --trust-remote-code \ --served-model-name Kimi-K2.5 \ --reasoning-parser kimi_k2 \ --tool-call-parser kimi_k2 \ --enable-mixed-chunk \ --attention-backend flashinfer \ --context-length 131072 \ --max-total-tokens 150000 \ --enable-p2p-check ``` Here are benchmark results with diffferent parameters: ``` python3 -m sglang.bench_serving --host 127.0.0.1 --port 8090 --dataset-name sharegpt --num-prompts 100 Kimi-K2.5 4x RTX 6000 PRO --mem-fraction-static 0.90 --kt-num-gpu-experts 20 --kt-gpu-prefill-token-threshold 1000 ============ Serving Benchmark Result ============ Backend: sglang Traffic request rate: inf Max request concurrency: not set Successful requests: 100 Benchmark duration (s): 797.57 Total input tokens: 33147 Total input text tokens: 33147 Total generated tokens: 21350 Total generated tokens (retokenized): 21343 Request throughput (req/s): 0.13 Input token throughput (tok/s): 41.56 Output token throughput (tok/s): 26.77 Peak output token throughput (tok/s): 99.00 Peak concurrent requests: 100 Total token throughput (tok/s): 68.33 Concurrency: 40.28 ----------------End-to-End Latency---------------- Mean E2E Latency (ms): 321229.26 Median E2E Latency (ms): 302115.02 P90 E2E Latency (ms): 649477.80 P99 E2E Latency (ms): 734740.50 ---------------Time to First Token---------------- Mean TTFT (ms): 43683.46 Median TTFT (ms): 39622.10 P99 TTFT (ms): 63386.48 -----Time per Output Token (excl. 1st token)------ Mean TPOT (ms): 2308.10 Median TPOT (ms): 1744.01 P99 TPOT (ms): 7974.68 ---------------Inter-Token Latency---------------- Mean ITL (ms): 1306.10 Median ITL (ms): 1376.37 P95 ITL (ms): 1999.40 P99 ITL (ms): 5206.45 Max ITL (ms): 12761.78 ================================================== Kimi-K2.5 4x RTX 6000 PRO --mem-fraction-static 0.80 --kt-num-gpu-experts 64 --kt-gpu-prefill-token-threshold 2048 ============ Serving Benchmark Result ============ Backend: sglang Traffic request rate: inf Max request concurrency: not set Successful requests: 100 Benchmark duration (s): 720.88 Total input tokens: 33147 Total input text tokens: 33147 Total generated tokens: 21350 Total generated tokens (retokenized): 21345 Request throughput (req/s): 0.14 Input token throughput (tok/s): 45.98 Output token throughput (tok/s): 29.62 Peak output token throughput (tok/s): 99.00 Peak concurrent requests: 100 Total token throughput (tok/s): 75.60 Concurrency: 42.07 ----------------End-to-End Latency---------------- Mean E2E Latency (ms): 303249.40 Median E2E Latency (ms): 285529.22 P90 E2E Latency (ms): 593663.77 P99 E2E Latency (ms): 666586.61 ---------------Time to First Token---------------- Mean TTFT (ms): 49258.67 Median TTFT (ms): 44937.76 P99 TTFT (ms): 68691.17 -----Time per Output Token (excl. 1st token)------ Mean TPOT (ms): 2227.62 Median TPOT (ms): 1599.91 P99 TPOT (ms): 7969.61 ---------------Inter-Token Latency---------------- Mean ITL (ms): 1195.25 Median ITL (ms): 1293.28 P95 ITL (ms): 2125.91 P99 ITL (ms): 5073.84 Max ITL (ms): 13245.65 ================================================== Kimi-K2.5 4x RTX 6000 PRO --mem-fraction-static 0.85 --kt-num-gpu-experts 180 --kt-gpu-prefill-token-threshold 2048 ============ Serving Benchmark Result ============ Backend: sglang Traffic request rate: inf Max request concurrency: not set Successful requests: 100 Benchmark duration (s): 569.87 Total input tokens: 33147 Total input text tokens: 33147 Total generated tokens: 21350 Total generated tokens (retokenized): 21346 Request throughput (req/s): 0.18 Input token throughput (tok/s): 58.17 Output token throughput (tok/s): 37.46 Peak output token throughput (tok/s): 123.00 Peak concurrent requests: 100 Total token throughput (tok/s): 95.63 Concurrency: 44.35 ----------------End-to-End Latency---------------- Mean E2E Latency (ms): 252740.99 Median E2E Latency (ms): 240023.88 P90 E2E Latency (ms): 448283.65 P99 E2E Latency (ms): 505817.34 ---------------Time to First Token---------------- Mean TTFT (ms): 75851.65 Median TTFT (ms): 70053.38 P99 TTFT (ms): 99228.64 -----Time per Output Token (excl. 1st token)------ Mean TPOT (ms): 1908.22 Median TPOT (ms): 1081.44 P99 TPOT (ms): 9853.65 ---------------Inter-Token Latency---------------- Mean ITL (ms): 832.42 Median ITL (ms): 774.26 P95 ITL (ms): 1237.89 P99 ITL (ms): 2973.36 Max ITL (ms): 22928.28 ================================================== ``` Do you have any suggestions on how to tweak this better? If you are asking yourself why I am testing this o 4x RTX 6000 Pro Bw? I want to buy a Dell Precision7960 Tower Workstation with that Setup to run large Models like Kimi K2.5. It cost around 90k €.
2026-02-06T07:52:29
https://www.reddit.com/r/LocalLLaMA/comments/1qxbl7j/kimi_k25_on_4x_rtx_6000_pro_blackwell_runpod/
skysthelimit187
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qxbl7j
false
null
t3_1qxbl7j
/r/LocalLLaMA/comments/1qxbl7j/kimi_k25_on_4x_rtx_6000_pro_blackwell_runpod/
false
false
self
13
null