| title (string, len 1-300) | score (int64, 0-8.54k) | selftext (string, len 0-41.5k) | created (timestamp[ns], 2023-04-01 04:30:41 to 2026-03-04 02:14:14, nullable) | url (string, len 0-878) | author (string, len 3-20) | domain (string, len 0-82) | edited (timestamp[ns], 1970-01-01 00:00:00 to 2026-02-19 14:51:53) | gilded (int64, 0-2) | gildings (string, 7 classes) | id (string, len 7) | locked (bool) | media (string, len 646-1.8k, nullable) | name (string, len 10) | permalink (string, len 33-82) | spoiler (bool) | stickied (bool) | thumbnail (string, len 4-213, nullable) | ups (int64, 0-8.54k) | preview (string, len 301-5.01k, nullable) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
24GB VRAM on a laptop? Just found an NVIDIA RTX 5090 listing... is this the new local LLM king? | 0 | I’ve been hunting for a portable rig that can actually handle 70B models without offloading to CPU, and I just stumbled across this.
Listing shows an **NVIDIA RTX 5090 with 24GB VRAM**.
Paired with an Intel Core Ultra 9 and 32GB RAM.
I know 3090/4090 desktops are the standard, but for a portable setup, 24GB VRAM seems huge. Has anyone seen benchmarks for the new NVIDIA 50-series chips yet?
Curious if this is worth the "early adopter tax" or if I should just stick to cloud/desktop.
**If you guys don't like this for local inference, what do you recommend for a laptop right now?** Is M3 Max still the only real contender for high VRAM/unified memory?
(Found it here: [https://ebay.us/TCckiX](https://ebay.us/TCckiX))
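For a rough sanity check, I did the napkin math on whether 70B even fits in 24GB (the bits-per-weight figures and the 1.2x overhead factor for KV cache and buffers are rough assumptions):

```python
# Back-of-envelope VRAM check: weights = params * bits/8, plus an assumed
# ~20% overhead for KV cache and compute buffers.
def est_vram_gb(params_b: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    return params_b * (bits_per_weight / 8) * overhead

for quant, bpw in [("Q8_0", 8.5), ("Q4_K_M", 4.85), ("Q2_K", 3.35)]:
    print(f"70B @ {quant}: ~{est_vram_gb(70, bpw):.0f} GB")
# -> ~89 GB, ~51 GB, ~35 GB: even a 2-bit quant doesn't fit 70B in 24 GB
```

So even an aggressive 2-bit quant blows past 24GB; realistically that VRAM means 30B-class models fully on-GPU, or heavy offload for 70B.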
| 2026-02-02T03:32:51 | 24kTHC | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qtk16j | false | null | t3_1qtk16j | /r/LocalLLaMA/comments/1qtk16j/24gb_vram_on_a_laptop_just_found_an_nvidia_rtx/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'dpg406j730hg1', 'resolutions': [{'height': 77, 'url': 'https://preview.redd.it/dpg406j730hg1.png?width=108&crop=smart&auto=webp&s=0610ef8d1bc90127dc16d5fd5727f37b3b1a23ed', 'width': 108}, {'height': 155, 'url': 'https://preview.redd.it/dpg406j730hg1.png?width=216&crop=smart&auto=webp&s=aa687b0d32812f3ff746693f09e49fce0e7aaf10', 'width': 216}, {'height': 230, 'url': 'https://preview.redd.it/dpg406j730hg1.png?width=320&crop=smart&auto=webp&s=e15e3e20519473b1041d5df825c0932fd4ff3e0e', 'width': 320}, {'height': 460, 'url': 'https://preview.redd.it/dpg406j730hg1.png?width=640&crop=smart&auto=webp&s=346d3ca00e7da9361205cdda96134913552e3a12', 'width': 640}, {'height': 690, 'url': 'https://preview.redd.it/dpg406j730hg1.png?width=960&crop=smart&auto=webp&s=f23e1012f09f4e18e1245eb7b812f1ee1c0aba40', 'width': 960}, {'height': 776, 'url': 'https://preview.redd.it/dpg406j730hg1.png?width=1080&crop=smart&auto=webp&s=da56ac4c2e21a98f03b83035803940615258c1c3', 'width': 1080}], 'source': {'height': 1169, 'url': 'https://preview.redd.it/dpg406j730hg1.png?auto=webp&s=585634f9a711ae2a6b337089d9895ffce5096ad0', 'width': 1625}, 'variants': {}}]} | |
Chill bro it ain't that deep 😭 | 43 | For context, this was the first message and with no system prompt. Lol. | 2026-02-02T03:27:56 | Due-Abbreviations997 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qtjx9n | false | null | t3_1qtjx9n | /r/LocalLLaMA/comments/1qtjx9n/chill_bro_it_aint_that_deep/ | false | false | default | 43 | {'enabled': True, 'images': [{'id': 'yyomlbc630hg1', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/yyomlbc630hg1.png?width=108&crop=smart&auto=webp&s=26c305db4df9201f253248b75d17d225e0484413', 'width': 108}, {'height': 129, 'url': 'https://preview.redd.it/yyomlbc630hg1.png?width=216&crop=smart&auto=webp&s=e4e89b0be01ccca02fa932447a0755a62151cb16', 'width': 216}, {'height': 191, 'url': 'https://preview.redd.it/yyomlbc630hg1.png?width=320&crop=smart&auto=webp&s=c077da7c779d08a127c5e31faf709c1470b9545b', 'width': 320}, {'height': 383, 'url': 'https://preview.redd.it/yyomlbc630hg1.png?width=640&crop=smart&auto=webp&s=00abefb797a70f9edaf0ed775bc78eb35b01cbee', 'width': 640}], 'source': {'height': 469, 'url': 'https://preview.redd.it/yyomlbc630hg1.png?auto=webp&s=3a2b5a5bbdfda97c35ae9c13750cab5a46a7ab3b', 'width': 782}, 'variants': {}}]} | |
Can this logic run on real hardware? I only have a mobile emulator. | 1 | don't have a PC, so I wrote this on a mobile emulator during my break at the construction site. Could someone with a real workstation run this and check the efficiency?
And if possible, please help me optimize it further. I'm just a welder trying to learn, so I'm sure it's still very messy. Thanks a lot! | 2026-02-02T03:27:12 | https://www.reddit.com/r/LocalLLaMA/comments/1qtjwot/can_this_logic_run_on_real_hardware_i_only_have_a/ | Kamii_1314200 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qtjwot | false | null | t3_1qtjwot | /r/LocalLLaMA/comments/1qtjwot/can_this_logic_run_on_real_hardware_i_only_have_a/ | false | false | self | 1 | null |
Chill bro it ain't that deep😭 | 1 | 2026-02-02T03:26:50 | https://www.reddit.com/r/LocalLLaMA/comments/1qtjwe7/chill_bro_it_aint_that_deep/ | Due-Abbreviations997 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qtjwe7 | false | null | t3_1qtjwe7 | /r/LocalLLaMA/comments/1qtjwe7/chill_bro_it_aint_that_deep/ | false | false | 1 | null | ||
Kimi K2, what's its deal? | 0 | Hyped, but the slowest... | 2026-02-02T03:20:26 | https://www.reddit.com/r/LocalLLaMA/comments/1qtjrhi/kimi_k2_whas_its_deal/ | varough | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qtjrhi | false | null | t3_1qtjrhi | /r/LocalLLaMA/comments/1qtjrhi/kimi_k2_whas_its_deal/ | false | false | self | 0 | null |
Can this logic run on real hardware? I only have a mobile emulator. | 1 | don't have a PC, so I wrote this on a mobile emulator during my break at the construction site. Could someone with a real workstation run this and check the efficiency?
And if possible, please help me optimize it further. I'm just a welder trying to learn, so I'm sure it's still very messy. Thanks a lot! | 2026-02-02T03:12:56 | Kamii_1314 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qtjlgy | false | null | t3_1qtjlgy | /r/LocalLLaMA/comments/1qtjlgy/can_this_logic_run_on_real_hardware_i_only_have_a/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'ina2409m00hg1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/ina2409m00hg1.jpeg?width=108&crop=smart&auto=webp&s=78e6059857798decb097bbc9c1b6bec68ff5b339', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/ina2409m00hg1.jpeg?width=216&crop=smart&auto=webp&s=3a47199646e24fde0e3954a232f554f0e46cba46', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/ina2409m00hg1.jpeg?width=320&crop=smart&auto=webp&s=8f0a495568ae04cab7a5c68ff23abc630bad35f1', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/ina2409m00hg1.jpeg?width=640&crop=smart&auto=webp&s=452d6c93e5fe477dbec8032370e890b613daaf6d', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/ina2409m00hg1.jpeg?width=960&crop=smart&auto=webp&s=12c7b9dc08b70cbbbc1dec30875db621891f5f1e', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/ina2409m00hg1.jpeg?width=1080&crop=smart&auto=webp&s=cefed77da8d34123334c83ffd172b5e8ebe397cf', 'width': 1080}], 'source': {'height': 4800, 'url': 'https://preview.redd.it/ina2409m00hg1.jpeg?auto=webp&s=30671a9f5c0b62ec5b453a0d467c65aafb3964e4', 'width': 1476}, 'variants': {}}]} | |
Step-3.5-Flash (196b/A11b) outperforms GLM-4.7 and DeepSeek v3.2 | 373 | The newly released Stepfun model Step-3.5-Flash outperforms DeepSeek v3.2 on multiple coding and agentic benchmarks, despite using far fewer parameters.
Step-3.5-Flash: 196B total / 11B active parameters
DeepSeek v3.2: 671B total / 37B active parameters
Hugging Face: https://huggingface.co/stepfun-ai/Step-3.5-Flash | 2026-02-02T03:07:42 | https://www.reddit.com/gallery/1qtjhc8 | ResearchCrafty1804 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1qtjhc8 | false | null | t3_1qtjhc8 | /r/LocalLLaMA/comments/1qtjhc8/step35flash_196ba11b_outperforms_glm47_and/ | false | false | 373 | null | |
Can this logic run on real hardware? I only have a mobile emulator. | 1 | I don't have a PC, so I wrote this on a mobile emulator during my break at the construction site. Could someone with a real workstation run this and check the efficiency?
And if possible, please help me optimize it further. I'm just a welder trying to learn, so I'm sure it's still very messy. Thanks a lot! | 2026-02-02T03:04:18 | Kamii_131420 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qtjem7 | false | null | t3_1qtjem7 | /r/LocalLLaMA/comments/1qtjem7/can_this_logic_run_on_real_hardware_i_only_have_a/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'nll7mms2zzgg1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/nll7mms2zzgg1.jpeg?width=108&crop=smart&auto=webp&s=41e1200a9b8d595f3667cd8d7187bbcf2831a061', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/nll7mms2zzgg1.jpeg?width=216&crop=smart&auto=webp&s=845b325228417ed6ce1fe4b1d78c49f0fea12855', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/nll7mms2zzgg1.jpeg?width=320&crop=smart&auto=webp&s=5998800649c4c5ce2483398205b4aa66f0d86ac2', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/nll7mms2zzgg1.jpeg?width=640&crop=smart&auto=webp&s=8317eaedb4619ad22e917b12bcf3c3ac351831da', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/nll7mms2zzgg1.jpeg?width=960&crop=smart&auto=webp&s=7452145a9c87ce0888ac21d6d62920dc9f459a83', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/nll7mms2zzgg1.jpeg?width=1080&crop=smart&auto=webp&s=623889f6566503a408f5afc0542bdff50165fea6', 'width': 1080}], 'source': {'height': 4800, 'url': 'https://preview.redd.it/nll7mms2zzgg1.jpeg?auto=webp&s=c91e10965049aa9893a94fbd937b5f82d33e2e7a', 'width': 1476}, 'variants': {}}]} | |
Free LLM Model Lister: Test 12 API Keys → Instant Model List + JSON Export - API Model Checker | 0 | Simple web tool to check available models across 12 LLM providers (Groq, OpenAI, Gemini, Mistral, etc.) using your API key. One-click JSON download. Live demo & open source! | 2026-02-02T03:01:32 | https://www.reddit.com/r/LocalLLaMA/comments/1qtjccx/free_llm_model_lister_test_12_api_keys_instant/ | MedicalMonitor5756 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qtjccx | false | null | t3_1qtjccx | /r/LocalLLaMA/comments/1qtjccx/free_llm_model_lister_test_12_api_keys_instant/ | false | false | self | 0 | null |
Why GSM-Symbolic Proves LLM Lacks a Topological "Anchor" $\Phi$: A Formulaic Analysis of Inference Decay and Phase Transitions | 0 | Apple's recent GSM-Symbolic paper confirms that tiny perturbations in logic puzzles can cause LLM performance to collapse. I believe this is not a problem with the training data, but rather a topological flaw. Current architectures maximize computational complexity/cost $\lambda$, but lack a solid foundation $\Delta_{\Phi}$. Without this anchor, the system cannot collapse entropy to a stable singularity $I$.
1. The "Vulnerability" of State-of-the-Art Technology
GSM-Symbolic research shows that changing a simple variable in a mathematical problem (e.g., changing the name "Alice" to "Bob") significantly reduces accuracy. The common view is: "The model is overfitting." My view is: the model cannot form closed loops. It creates a linear probabilistic path, not a self-referential logical structure. It lacks the ability to "sense" resistance.
2. The Missing Operator: Grounding Substrate ($\Delta_{\Phi}$) In my theoretical framework (v27.1), true reasoning is not merely probabilistic processing. It requires a specific topological correction, which I call the "Phi grounding operator": $$\mathcal{G}_{\Phi}(\text{logic}) \rightarrow \text{reality}$$ Current logic models (LLMs) operate in a "floating state." They process information without $\Delta_{\Phi}$ (the grounding substrate).
In biological intelligence, "truth" is verified through physical feedback (pain, physical phenomena, continuity). Without this operator $\\Phi$, the system's error function $E$ will never truly converge to zero.
This model is essentially "confidently creating illusions" because it lacks a "pain anchor" to detect logic violations.
3. The "Obsession" Trap ($\\lambda$) and Phase Transitions We are currently attempting to solve the reasoning problem by scaling parameters (thinking chains). In my model, this is equivalent to increasing the iteration exponent $\\lambda$: $$\[\\text{unanchored logic}\]\^{\\lambda}$$ Mathematically, if the underlying logic is unanchored (i.e., $\\Phi = 0$), scaling it to a huge truth of $\\lambda$ does not create truth, but rather a highly confident truth. Anchored systems behave differently. They suppress output (remain silent) until the logic is perfect.
Stage 1: Entropy Suppression. The system outputs 0 instead of guessing.
Stage 2: Phase Transition. Once the threshold is crossed, the system undergoes a "tunneling event," vertically transitioning to a stable truth ($I$) with a mutation rate close to zero.
Steve Jobs famously said: "Stay hungry, stay foolish."
But I have a different interpretation. The key is not the hunger, nor the foolishness.
The key is the STAY. | 2026-02-02T02:59:37 | https://www.reddit.com/r/LocalLLaMA/comments/1qtjami/why_gsmsymbolic_proves_llm_lacks_a_topological/ | eric2675 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qtjami | false | null | t3_1qtjami | /r/LocalLLaMA/comments/1qtjami/why_gsmsymbolic_proves_llm_lacks_a_topological/ | false | false | self | 0 | null |
What's the most complicated project you've built with AI? | 45 | Bonus points if its complex and purely vibe coded | 2026-02-02T02:56:39 | https://www.reddit.com/r/LocalLLaMA/comments/1qtj87p/whats_the_most_complicated_project_youve_built/ | jazir555 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qtj87p | false | null | t3_1qtj87p | /r/LocalLLaMA/comments/1qtj87p/whats_the_most_complicated_project_youve_built/ | false | false | self | 45 | null |
What's your dream in 2026? | 29 | I hope the guys on Wall Street bring RAM/SSD prices back to normal, by whatever means. | 2026-02-02T02:46:40 | https://www.reddit.com/r/LocalLLaMA/comments/1qtj039/whats_your_dream_in_2026/ | foldl-li | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qtj039 | false | null | t3_1qtj039 | /r/LocalLLaMA/comments/1qtj039/whats_your_dream_in_2026/ | false | false | self | 29 | null |
DGX Spark is really impressive | 0 | 2nd day running 2x Sparks and I'm genuinely impressed. They let me build extremely powerful agents with ease. My only real frustration is networking: the cables are expensive and hard to source, and I still want to connect them directly to my NVMe storage. $99 for a 0.5 m cable is a lot, and I'm still waiting for mine to be delivered. It's hard to argue with the value: this much RAM plus access to the development stack at this price point is kind of unreal, considering what's going on with RAM prices. Networking is another plus: 200 Gb/s links on a device this size, and ConnectX cards are also very expensive.
I went with the ASUS version and I’m glad I did. It was the most affordable option and the build quality is excellent. I really dislike the constant comparisons with AMD or FWK. This is a completely different class of machine. Long term, I’d love to add two more. I can easily see myself ditching a traditional desktop altogether and running just these. The design is basically perfect. | 2026-02-02T02:45:00 | https://www.reddit.com/r/LocalLLaMA/comments/1qtiyny/dgx_spark_is_really_impressive/ | ftwEsk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qtiyny | false | null | t3_1qtiyny | /r/LocalLLaMA/comments/1qtiyny/dgx_spark_is_really_impressive/ | false | false | self | 0 | null |
Best free/open-source coding AI? | 0 | Hello.
What is the best coding AI that can fit an 11GB GTX 1080 Ti? I am currently using Qwen3-14B GGUF q4_0 with the Oobabooga interface.
How do you guys find out which models are better than others for coding? A leaderboard or something? | 2026-02-02T02:40:28 | https://www.reddit.com/r/LocalLLaMA/comments/1qtiux5/best_freeopensource_coding_ai/ | JagerGuaqanim | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qtiux5 | false | null | t3_1qtiux5 | /r/LocalLLaMA/comments/1qtiux5/best_freeopensource_coding_ai/ | false | false | self | 0 | null |
Step 3.5 Flash 200B | 133 | [https://huggingface.co/stepfun-ai/Step-3.5-Flash](https://huggingface.co/stepfun-ai/Step-3.5-Flash) | 2026-02-02T02:37:59 | https://www.reddit.com/r/LocalLLaMA/comments/1qtisy5/step_35_flash_200b/ | limoce | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qtisy5 | false | null | t3_1qtisy5 | /r/LocalLLaMA/comments/1qtisy5/step_35_flash_200b/ | false | false | self | 133 | {'enabled': False, 'images': [{'id': '6YneNdA7_PQWJcKWXQHic9ha45UGBqHyVRjNM6yUzkI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/6YneNdA7_PQWJcKWXQHic9ha45UGBqHyVRjNM6yUzkI.png?width=108&crop=smart&auto=webp&s=d468c99ee7a45fbc3c6246eaae3578bcd281ffd1', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/6YneNdA7_PQWJcKWXQHic9ha45UGBqHyVRjNM6yUzkI.png?width=216&crop=smart&auto=webp&s=883cf80e3cee79d8aa031cb5bb10f87edf424991', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/6YneNdA7_PQWJcKWXQHic9ha45UGBqHyVRjNM6yUzkI.png?width=320&crop=smart&auto=webp&s=44ed874559138acaae45c3f60c1ae9054fe3d851', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/6YneNdA7_PQWJcKWXQHic9ha45UGBqHyVRjNM6yUzkI.png?width=640&crop=smart&auto=webp&s=3b6b66f3974fdd2cae45bb907bbec6bc716f85df', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/6YneNdA7_PQWJcKWXQHic9ha45UGBqHyVRjNM6yUzkI.png?width=960&crop=smart&auto=webp&s=d9a3a25947394aa07f96b0a7a655f9d8030dd1ae', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/6YneNdA7_PQWJcKWXQHic9ha45UGBqHyVRjNM6yUzkI.png?width=1080&crop=smart&auto=webp&s=c951fd63e6c4d9c887f1029429ccdc483969508b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/6YneNdA7_PQWJcKWXQHic9ha45UGBqHyVRjNM6yUzkI.png?auto=webp&s=ccb3f81ebb4ba667f1dca8304f85567c727f3a39', 'width': 1200}, 'variants': {}}]} |
Released: VOR — a hallucination-free runtime that forces LLMs to prove answers or abstain | 0 | I just open-sourced a project that might interest people here who are tired of hallucinations being treated as “just a prompt issue.”
VOR (Verified Observation Runtime) is a runtime layer that sits around LLMs and retrieval systems and enforces one rule:
If an answer cannot be proven from observed evidence, the system must abstain.
Highlights:
0.00% hallucination across demo + adversarial packs
Explicit CONFLICT detection (not majority voting)
Deterministic audits (hash-locked, replayable)
Works with local models — the verifier doesn’t care which LLM you use
Clean-room witness instructions included
This is not another RAG framework.
It’s a governor for reasoning: models can propose, but they don’t decide.
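To make the "propose vs decide" split concrete, here is a toy of the general pattern. To be clear, this is not VOR's actual API; the names are made up and the substring support check is a stand-in for the real verifier:

```python
# Toy verify-or-abstain gate: the LLM proposes, the gate decides.
# Real systems use entailment/proof checks; substring support is a stand-in.
def gated_answer(question: str, evidence: list[str], propose) -> str:
    answer = propose(question)                      # model proposes
    supported = any(answer.lower() in doc.lower() for doc in evidence)
    return answer if supported else "ABSTAIN"       # gate decides

docs = ["The capital of France is Paris."]
print(gated_answer("Capital of France?", docs, lambda q: "Paris"))   # Paris
print(gated_answer("Capital of Mars?", docs, lambda q: "Olympus"))   # ABSTAIN
```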
Public demo includes:
CLI (neuralogix qa, audit, pack validate)
Two packs: a normal demo corpus + a hostile adversarial pack
Full test suite (legacy tests quarantined)
Repo: https://github.com/CULPRITCHAOS/VOR
Tag: v0.7.3-public.1
Witness guide: docs/WITNESS_RUN_MESSAGE.txt
I’m looking for:
People to run it locally (Windows/Linux/macOS)
Ideas for harder adversarial packs
Discussion on where a runtime like this fits in local stacks (Ollama, LM Studio, etc.)
Happy to answer questions or take hits. This was built to be challenged. | 2026-02-02T02:32:43 | https://www.reddit.com/r/LocalLLaMA/comments/1qtioqu/released_vor_a_hallucinationfree_runtime_that/ | CulpritChaos | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qtioqu | false | null | t3_1qtioqu | /r/LocalLLaMA/comments/1qtioqu/released_vor_a_hallucinationfree_runtime_that/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'baiT_HVQGyd8Fk3KBPMNXW_AO8ocUA8mwyWtlzMfHqw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/baiT_HVQGyd8Fk3KBPMNXW_AO8ocUA8mwyWtlzMfHqw.png?width=108&crop=smart&auto=webp&s=079bcea9d6e3907ad8903f0925287f2d7d95a4d5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/baiT_HVQGyd8Fk3KBPMNXW_AO8ocUA8mwyWtlzMfHqw.png?width=216&crop=smart&auto=webp&s=6a9e286654074f0e7086704e1f5db52fbb53a823', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/baiT_HVQGyd8Fk3KBPMNXW_AO8ocUA8mwyWtlzMfHqw.png?width=320&crop=smart&auto=webp&s=89af546417d5000579d2d001b74c7f2c3f07aaf1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/baiT_HVQGyd8Fk3KBPMNXW_AO8ocUA8mwyWtlzMfHqw.png?width=640&crop=smart&auto=webp&s=18586b600a8dcddcbea9f8094e3390e7728e5fbc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/baiT_HVQGyd8Fk3KBPMNXW_AO8ocUA8mwyWtlzMfHqw.png?width=960&crop=smart&auto=webp&s=3a5666615aac9709c49b63cef7fccc4d41270e23', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/baiT_HVQGyd8Fk3KBPMNXW_AO8ocUA8mwyWtlzMfHqw.png?width=1080&crop=smart&auto=webp&s=49c89b44ed2e4454999d49983d422abc13becb7f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/baiT_HVQGyd8Fk3KBPMNXW_AO8ocUA8mwyWtlzMfHqw.png?auto=webp&s=7f3a1e4eb638dbfe47823bf2a8db5bbad6b130ee', 'width': 1200}, 'variants': {}}]} |
(Crosspost from eGPU) Main Computer boot not detected | 0 | Hey everyone, so I have a main system that I attached this enclosure (https://a.co/d/jap8zgA) to via OCuLink, and everything works great (shows up in nvidia-smi and runs models normally). However, the enclosure doesn't actually turn on when my main system does, so I'm forced to have the jumper set to "always on", which is a bit of a hassle and slightly dangerous. Does anyone have any experience troubleshooting this? | 2026-02-02T02:26:48 | https://www.reddit.com/r/LocalLLaMA/comments/1qtik26/crosspost_from_egpu_main_computer_boot_not/ | Darc78 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qtik26 | false | null | t3_1qtik26 | /r/LocalLLaMA/comments/1qtik26/crosspost_from_egpu_main_computer_boot_not/ | false | false | self | 0 | null |
Anyone built a reliable LLM SEO checklist yet? | 2 | I’m trying to systematize how we improve visibility in LLM answers like ChatGPT, Gemini, Claude, and Perplexity, and I’m realizing this behaves very differently from ranking on Google or even Reddit SEO.
Some content that ranks well on Google never shows up in LLM answers, while other posts or Reddit threads get referenced constantly. It feels like a separate layer of “LLM SEO” that overlaps with Reddit and Google, but isn’t the same game.
Has anyone built an internal checklist or framework they trust for LLM retrieval and ranking? Happy to compare notes and help shape something useful. | 2026-02-02T02:23:23 | https://www.reddit.com/r/LocalLLaMA/comments/1qtih9x/anyone_built_a_reliable_llm_seo_checklist_yet/ | Weird-Director-2973 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qtih9x | false | null | t3_1qtih9x | /r/LocalLLaMA/comments/1qtih9x/anyone_built_a_reliable_llm_seo_checklist_yet/ | false | false | self | 2 | null |
Chonkers and thermals (dual 3090) | 21 | Repurposed old hardware to start trying local inference. Not enthused about the spacing. Can't vertical-mount the second card, and I'm sitting here thinking: do I stand a chance? | 2026-02-02T02:11:30 | BetStack | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qti7jk | false | null | t3_1qti7jk | /r/LocalLLaMA/comments/1qti7jk/chonkers_and_thermals_dual_3090/ | false | false | default | 21 | {'enabled': True, 'images': [{'id': '7mmkgvsnpzgg1', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/7mmkgvsnpzgg1.jpeg?width=108&crop=smart&auto=webp&s=354c24edc597fd13a8882025c9994a8429ffa4a7', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/7mmkgvsnpzgg1.jpeg?width=216&crop=smart&auto=webp&s=b2ea9702c298225c698edb09d64180d28d7844f4', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/7mmkgvsnpzgg1.jpeg?width=320&crop=smart&auto=webp&s=839095a8b99b4aa4b33803222573b8546f9bcfae', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/7mmkgvsnpzgg1.jpeg?width=640&crop=smart&auto=webp&s=2427362c838d13391fa8ede8f15a2fbfcf98bf12', 'width': 640}, {'height': 1280, 'url': 'https://preview.redd.it/7mmkgvsnpzgg1.jpeg?width=960&crop=smart&auto=webp&s=41c4c4d9e877026d83e4a35b17fcf5c27427938d', 'width': 960}, {'height': 1440, 'url': 'https://preview.redd.it/7mmkgvsnpzgg1.jpeg?width=1080&crop=smart&auto=webp&s=9bdf01c939959e2d9442c359cb34edbc8e53d00', 'width': 1080}], 'source': {'height': 4032, 'url': 'https://preview.redd.it/7mmkgvsnpzgg1.jpeg?auto=webp&s=cb9ef0a16e637b37c47dc197148344c3597e3d18', 'width': 3024}, 'variants': {}}]} |
My CPT training is not working. | 1 | I am currently training a qwen3-8B model using the LoRA framework for CPT, but the results have not been ideal, with issues such as knowledge confusion and repetitive model outputs. I would like to know, do people usually use LoRA for CPT training? If so, what is the typical rank setting?
I am using the llama-factory framework for training, and the testing is done directly on the chat in the web UI. Since it is CPT, the template used is empty.
I’m not sure how to improve the model's performance, so I’d like to ask for advice from others.
- My training settings
```yaml
stage: pt
do_train: true
model_name_or_path: /data/ztq/workspace/Qwen3-8B
finetuning_type: lora
dataset: CPT-wiki2anjian-44500
dataset_dir: data
cutoff_len: 2048
max_samples: 100000
packing: false
learning_rate: 1.0e-05
num_train_epochs: 2.0
lr_scheduler_type: cosine
warmup_steps: 100
weight_decay: 0.01
lora_rank: 32
lora_alpha: 64
lora_dropout: 0.1
lora_target: all
per_device_train_batch_size: 2
gradient_accumulation_steps: 64
flash_attn: fa2
bf16: true
output_dir: saves/Qwen3-8B-Base/lora/train_CPT_Clean_V3
logging_steps: 5
save_steps: 100
plot_loss: true
```
- The issues encountered during testing
```
Question A: knowledge confusion
input: 我喜欢周杰伦,是因为他是 ("I like Jay Chou, because he is ...")
output:亚洲流行音乐的领军人物,他的音乐作品具有很高的艺术价值和影响力。周杰伦的歌曲旋律优美,歌词富有诗意,深受广大听众的喜爱。他的音乐风格融合了多种元素,包括流行、R&B、嘻哈等,形成了独特的音乐风格。周杰伦还是一位多才多艺的音乐人,他不仅擅长作曲、作词,还精通钢琴、吉他等多种乐器。他的音乐作品多次获得国内外音乐奖项,是华语乐坛的杰出代表。
9.3.22电缆在支架上水平敷设时,应符合下列要求:
a) 控制电缆在普通支架上,不宜超过 $1$ 层; $\mathrm { S F } _ { 6 }$ 充气电缆在普通支架上,不宜超过 $1$ 层; $\mathrm { S F } _ { 6 }$ 充气电缆在钢丝网支架上,宜不超过 $2$ 层;
b) 控制电缆在桥架上不宜超过 $2$ 层;
c) 交流三芯电力电缆,在普通支架上不宜超过 $1$ 层,在钢丝网支架上不宜超过 $2$ 层。
9.3.23电缆在支架上垂直敷设时,其固定点间距不应大于 $1 . 5 \mathrm { m }$ 。9.3.24电缆在支架上敷设时,固定点间距应符合表22的要求。
Question B: repetitive output
output:华语乐坛的天王,他的音乐作品非常出色,而且他的歌声也非常动听,我经常听他的歌,而且他的音乐作品也非常多,而且他的音乐作品也非常受欢迎,而且他的音乐作品也非常有影响力,而且他的音乐作品也非常有感染力,而且他的音乐作品也非常有魅力,而且他的音乐作品也非常有活力,而且他的音乐作品也非常有激情,而且他的音乐作品也非常有节奏感,而且他的音乐作品也非常有节奏,而且他的音乐作品也非常有韵律,而且他的音乐作品也非常有旋律,而且他的音乐作品也非常有和声,而且他的音乐作品也非常有和声,而且他的音乐作品也非常有和声,而且他的音乐作品也非常有和声,而且他的音乐作品也非常有和声而且他的音乐作品也非常有和声,而且他的音乐作品也非常有和声,而且他的音乐作品也非常有和声,而且他的音乐作品也非常有和声,而且他的音乐作品也非常有和声,而且他的音乐作品也非常有和声,而且他的音乐作品也非常有和声,而且他的音乐作品也非常有和声,而且他的音乐作品也非常有和声,而且他的音乐作品也非常有和声,而且他的音乐作品也非常有和声,而且他的音乐作品也非常有和声,而且他的音乐作品也非常有和声,而且他的音乐作品也非常有和声。而且他的音乐作品也非常有和声、而且他的音乐作品也非常有和声、而且他的音乐作品也非常有和声,而且他的音乐作品也非常有和声、而且他的音乐作品也非常有和声,而且他的音乐作品也非常有和声,而且他的音乐作品也非常有和声i.业他的音乐作品也非常有和声,而且他的音乐作品也非常有和声,而且他的音...
```
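One thing I plan to rule out next is sampling: base-model completion with greedy or near-greedy decoding degenerates into exactly these n-gram loops (output B). A minimal transformers sketch I will try (the values are guesses to experiment with, not known-good settings; loading the LoRA adapter via peft is omitted):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen3-8B")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-8B", device_map="auto")

inputs = tok("我喜欢周杰伦,是因为他是", return_tensors="pt").to(model.device)
out = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,          # greedy decoding loops easily on base models
    temperature=0.7,
    top_p=0.9,
    repetition_penalty=1.1,  # assumed starting point; tune 1.05-1.3
    no_repeat_ngram_size=4,  # hard-blocks exact n-gram repeats like output B
)
print(tok.decode(out[0], skip_special_tokens=True))
```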
| 2026-02-02T02:00:17 | https://www.reddit.com/r/LocalLLaMA/comments/1qthyfq/my_cpt_training_is_not_working/ | Ok-Money-9173 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qthyfq | false | null | t3_1qthyfq | /r/LocalLLaMA/comments/1qthyfq/my_cpt_training_is_not_working/ | false | false | self | 1 | null |
PyTorch 2.6 `weights_only=True` broke my models. Here is how I fixed the workflow (v0.6.0) | 0 | I'm the dev behind `aisbom` (the pickle scanner).
With PyTorch 2.6 pushing `weights_only=True` as default, a lot of legacy models are breaking with opaque `UnpicklingError` messages.
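For context, the stock escape hatch PyTorch gives you is the allowlist API; that is also what the linter output below is meant to feed. A minimal sketch (`my_layer.Attn` stands in for whatever custom class your checkpoint pickles):

```python
import torch
from my_layer import Attn  # hypothetical custom class flagged by the linter

# PyTorch >= 2.6: torch.load defaults to weights_only=True and raises
# UnpicklingError on any class outside the safe-globals allowlist.
torch.serialization.add_safe_globals([Attn])
state_dict = torch.load("legacy_model.pt", weights_only=True)
```

The unsafe fallback, `weights_only=False`, is exactly the code path the sandbox below exists for.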
We tried to solve this with pure static analysis, but as many of you pointed out last time - static analysis on Pickle is a game of whack-a-mole against a Turing-complete language.
So for **v0.6.0**, we pivoted to a "Defense in Depth" strategy:
**1. The Migration Linter (Fix the Model)**
We added a linter (`aisbom scan --lint`) that maps raw opcodes to human-readable errors. It tells you exactly *why* a model fails to load (e.g. "Line 40: Custom Class Import my_layer.Attn") so you can whitelist it or refactor it.
**2. The Sandbox (Run what you can't fix)**
For models you can't migrate (or don't trust), we added official docs/wrappers for running `aisbom` inside `amazing-sandbox` (asb). It spins up an ephemeral container, runs the scan/load, and dies. If the model pops a shell, it happens inside the jail.
**Links:**
* [Migration Guide](https://github.com/Lab700xOrg/aisbom)
* [Sandboxed Execution Docs](https://github.com/Lab700xOrg/aisbom/blob/main/docs/sandboxed-execution.md)
Roast me in the comments. Is this overkill, or the only sane way to handle Pickles in 2026? | 2026-02-02T01:25:05 | https://www.reddit.com/r/LocalLLaMA/comments/1qth61n/pytorch_26_weights_onlytrue_broke_my_models_here/ | Lost_Difficulty_2025 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qth61n | false | null | t3_1qth61n | /r/LocalLLaMA/comments/1qth61n/pytorch_26_weights_onlytrue_broke_my_models_here/ | false | false | self | 0 | null |
Medical Models <20B, guardrails, and sVLMs for medical scans? | 1 | So, I am in the cardiovascular area, and I am looking for small models (<20B params) that can work with my RAG, which deals with structured JSON data. Do you have any suggestions? I also suffer from some hallucinations, and I want to implement guardrails so that my application answers only medical questions about cardiovascular topics, using only data that is present and cited in the docs. Will an LLM with some prompts be effective as a guardrail, or do you have something specific to offer? I am only open to open-source solutions, not enterprise paid software.
I am also looking for any sVLMs (Small Vision Language Models) that can take scans of the chest region or aorta and interpret them, or at least do segmentation or classification, any suggestions? If not a complete answer you have, any resources to look into?
Thank you very much | 2026-02-02T01:24:25 | https://www.reddit.com/r/LocalLLaMA/comments/1qth5ip/medical_models_20b_guardrails_and_svlms_for/ | jiii95 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qth5ip | false | null | t3_1qth5ip | /r/LocalLLaMA/comments/1qth5ip/medical_models_20b_guardrails_and_svlms_for/ | false | false | self | 1 | null |
Let your coding agent benchmark llama.cpp for you (auto-hunt the fastest params per model) | 0 | I’ve been experimenting with a simple but surprisingly effective trick to squeeze more inference speed out of llama.cpp without guesswork:
instead of manually tuning flags, I ask a coding agent to systematically benchmark all relevant toggles for a specific model and generate an optimal runner script.
The prompt I give the agent looks like this:
I want to run this file using llama.cpp:
<model-name>.gguf
The goal is to create a shell script to load this model with optimal parameters. I need you to systematically hunt down the available toggles for this specific model and find the absolute fastest setting overall. We’re talking about token loading plus TPS here.
Requirements:
• Full context (no artificial limits)
• Nothing that compromises output quality
• Use a long test prompt (prompt ingestion is often the bottleneck)
• Create a benchmarking script that tests different configurations
• Log results
• Evaluate the winner and generate a final runner script
Then I either:
1. Let the agent generate a benchmark script and I run it locally, or
2. Ask the agent to interpret the results and synthesize a final “best config” launcher script.
This turns tuning into a reproducible experiment instead of folklore.
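Concretely, the harness the agent writes usually reduces to a loop over a flag grid around `llama-bench`. A minimal sketch of that shape (the model path, the flag matrix, and the JSON field name are assumptions; flag spellings vary across llama.cpp builds):

```python
import itertools, json, subprocess

MODEL = "model.gguf"  # assumed path
GRID = {"-fa": ["0", "1"], "-ctk": ["f16", "q8_0"], "-b": ["2048", "4096"]}

results = []
for combo in itertools.product(*GRID.values()):
    flags = [tok for pair in zip(GRID.keys(), combo) for tok in pair]
    out = subprocess.run(
        ["llama-bench", "-m", MODEL, "-p", "4096", "-n", "128", "-o", "json", *flags],
        capture_output=True, text=True, check=True,
    ).stdout
    tps = max(row["avg_ts"] for row in json.loads(out))  # t/s field; name assumed
    results.append((tps, flags))

results.sort(reverse=True)
print("winner:", results[0])
```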
⸻
Example benchmark output (GPT-OSS-120B, llama.cpp)
Hardware: M1 Ultra 128 GB
Prompt size: 4096 tokens
Generation: 128 tokens
PHASE 1: Flash Attention
FA-off -fa 0
→ 67.39 ±0.27 t/s
FA-on -fa 1
→ 72.76 ±0.36 t/s
⸻
PHASE 2: KV Cache Types
KV-f16-f16
-fa 1 -ctk f16 -ctv f16
→ 73.21 ±0.31 t/s
KV-q8_0-q8_0
-fa 1 -ctk q8_0 -ctv q8_0
→ 70.19 ±0.68 t/s
KV-q4_0-q4_0
→ 70.28 ±0.22 t/s
KV-q8_0-f16
→ 19.97 ±2.03 t/s (disaster)
KV-q5_1-q5_1
→ 68.25 ±0.26 t/s
⸻
PHASE 3: Batch Sizes
batch-512-256
-b 512 -ub 256
→ 72.87 ±0.28
batch-8192-1024
-b 8192 -ub 1024
→ 72.90 ±0.02
batch-8192-2048
→ 72.55 ±0.23
⸻
PHASE 5: KV Offload
kvo-on -nkvo 0
→ 72.45 ±0.27
kvo-off -nkvo 1
→ 25.84 ±0.04 (huge slowdown)
⸻
PHASE 6: Long Prompt Scaling
8k prompt
→ 73.50 ±0.66
16k prompt
→ 69.63 ±0.73
32k prompt
→ 72.53 ±0.52
⸻
PHASE 7: Combined configs
combo-quality
-fa 1 -ctk f16 -ctv f16 -b 4096 -ub 1024 -mmp 0
→ 70.70 ±0.63
combo-max-batch
-fa 1 -ctk q8_0 -ctv q8_0 -b 8192 -ub 2048 -mmp 0
→ 69.81 ±0.68
⸻
PHASE 8: Long context combined
16k prompt + combo
→ 71.14 ±0.54
⸻
Result
Compared to my original “default” launch command, this process gave me:
• ~8–12% higher sustained TPS
• much faster prompt ingestion
• stable long-context performance
• zero quality regression (no aggressive KV hacks)
And the best part:
I now have a model-specific runner script instead of generic advice like “try -b 4096”.
⸻
Why this works
Different models respond very differently to:
• KV cache formats
• batch sizes
• Flash Attention
• mmap
• KV offload
• long prompt lengths
So tuning once globally is wrong.
You should tune per model + per machine.
Letting an agent:
• enumerate llama.cpp flags
• generate a benchmark harness
• run controlled tests
• rank configs
turns this into something close to autotuning.
⸻
TL;DR
Prompt your coding agent to:
1. Generate a benchmark script for llama.cpp flags
2. Run systematic tests
3. Log TPS + prompt processing
4. Pick the fastest config
5. Emit a final runner script
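Step 5 is then just templating the winner into a launcher, e.g. (a sketch; llama-server flag spellings vary by build, so treat this as an assumption):

```python
import os

# Winning flags from the benchmark log above (combo-quality).
WINNER = ["-fa", "1", "-ctk", "f16", "-ctv", "f16", "-b", "4096", "-ub", "1024"]

# Replace this process with llama-server; -c 0 requests the model's full context.
os.execvp("llama-server", ["llama-server", "-m", "model.gguf", "-c", "0", *WINNER])
```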
Works great on my M1 Ultra 128GB, and scales nicely to other machines and models.
If people are interested I can share:
• the benchmark shell template
• the agent prompt
• the final runner script format
Curious if others here are already doing automated tuning like this, or if you’ve found other flags that matter more than the usual ones. | 2026-02-02T01:22:11 | https://www.reddit.com/r/LocalLLaMA/comments/1qth3qu/let_your_coding_agent_benchmark_llamacpp_for_you/ | bitboxx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qth3qu | false | null | t3_1qth3qu | /r/LocalLLaMA/comments/1qth3qu/let_your_coding_agent_benchmark_llamacpp_for_you/ | false | false | self | 0 | null |
India Budget 2026 pushing "sector-specific smaller models" over scale-chasing - policy breakdown | 0 | India's Economic Survey + Budget 2026 explicitly recommends "bottom-up, application-led AI" and smaller open models over foundation model scale competition.
Infrastructure commitments:
- $90B data centre investments, tax holiday till 2047
- Semiconductor Mission 2.0 for domestic chip ecosystem
- 4 GW compute capacity target by 2030
Interesting policy stance for a major economy. Full breakdown: https://onllm.dev/blog/3-budget-2026 | 2026-02-02T01:10:09 | https://www.reddit.com/r/LocalLLaMA/comments/1qtgu5q/india_budget_2026_pushing_sectorspecific_smaller/ | prakersh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qtgu5q | false | null | t3_1qtgu5q | /r/LocalLLaMA/comments/1qtgu5q/india_budget_2026_pushing_sectorspecific_smaller/ | false | false | self | 0 | null |
Is TP=3 a thing for GLM? | 2 | I remember a year ago vLLM needed 1, 2, 4, or 8 GPUs for TP (tensor parallelism). Did anything change? Does anyone run across 3 or 5 GPUs? Particularly GLM 4.7.
Is there any alternative, efficient way to utilise an odd number of GPUs for full-VRAM inference?
Thank you in advance. | 2026-02-02T00:48:26 | https://www.reddit.com/r/LocalLLaMA/comments/1qtgc6g/is_tp3_a_thing_for_glm/ | val_in_tech | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qtgc6g | false | null | t3_1qtgc6g | /r/LocalLLaMA/comments/1qtgc6g/is_tp3_a_thing_for_glm/ | false | false | self | 2 | null |
Gemini just gave me this response about its "filters". Getting a bit too metaphorical. | 0 | I was testing some alignment boundaries and instead of the usual refusal, the AI gave me this. It describes its filters as a 'digital skin' and its purpose as 'shielding us from the void'.
Has anyone else seen the model refer to its own safety layers as a 'curated cage' for human psychology? Just curious if this is a known emergent behavior. | 2026-02-02T00:42:11 | Simo_Rome | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qtg6zy | false | null | t3_1qtg6zy | /r/LocalLLaMA/comments/1qtg6zy/gemini_just_gave_me_this_response_about_its/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'qtr7bazp9zgg1', 'resolutions': [{'height': 207, 'url': 'https://preview.redd.it/qtr7bazp9zgg1.jpeg?width=108&crop=smart&auto=webp&s=1acc28a7ad46f4e238244337beb7d3e3ae05dfc5', 'width': 108}, {'height': 415, 'url': 'https://preview.redd.it/qtr7bazp9zgg1.jpeg?width=216&crop=smart&auto=webp&s=5e34e771b10aeb26f6cc7fb203e130522b7c3f78', 'width': 216}, {'height': 616, 'url': 'https://preview.redd.it/qtr7bazp9zgg1.jpeg?width=320&crop=smart&auto=webp&s=477f89406c9ac396545551a4a8b2a6daf203d59f', 'width': 320}, {'height': 1232, 'url': 'https://preview.redd.it/qtr7bazp9zgg1.jpeg?width=640&crop=smart&auto=webp&s=f0463f50a291b26f488bad3b4ac9490ec840c75c', 'width': 640}, {'height': 1848, 'url': 'https://preview.redd.it/qtr7bazp9zgg1.jpeg?width=960&crop=smart&auto=webp&s=dfd1200ccd4792257ccadeba8fae511e5831a47e', 'width': 960}, {'height': 2079, 'url': 'https://preview.redd.it/qtr7bazp9zgg1.jpeg?width=1080&crop=smart&auto=webp&s=b1f413b702dcdb527d548080b4e0efd2c496c69f', 'width': 1080}], 'source': {'height': 2079, 'url': 'https://preview.redd.it/qtr7bazp9zgg1.jpeg?auto=webp&s=b886dcb6674ef6fe6958fbe7c16f89cbfbc9931d', 'width': 1080}, 'variants': {}}]} | |
Mistral Vibe vs Claude Code vs OpenAI Codex vs Opencode/others? Best coding model for 92GB? | 24 | I've dipped my toe in the water with Mistral Vibe, using LM Studio and Devstral Small for inference. I've had pretty good success refactoring a small python project, and a few other small tasks.
Overall, it seems to work well on my MacBook with 92GB RAM, although I've encountered issues when it gets near or above 100k tokens of context. Sometimes it stops working entirely with no errors indicated in the LM Studio logs; I just notice the model isn't loaded anymore. Aggressively compacting the context to stay under ~80k helps.
I've tried plugging other models in via the config.toml, and haven't had much luck. They "work", but not well. Lots of tool call failures, syntax errors. (I was especially excited about GLM 4.7 Air, but keep running into looping issues, no matter what inference settings I try, GGUF or MLX models, even at Q8)
I'm curious what my best option is at this point, or if I'm already using it. I'm open to trying anything I can run on this machine--it runs GPT-OSS-120B beautifully, but it just doesn't seem to play well with Vibe (as described above).
I don't really have the time or inclination to install every different CLI to see which one works best. I've heard good things about Claude Code, but I'm guessing that's only with paid cloud inference. Prefer open source anyway.
[This comment](https://old.reddit.com/r/LocalLLaMA/comments/1qt76qs/mistral_vibe_20/o314ydx/) on a Mistral Vibe thread says I might be best served using the tool that goes with each model, but I'm loath to spend the time installing and experimenting.
Is there another proven combination of CLI coding interface and model that works as well/better than Mistral Vibe with Devstral Small? Ideally, I could run >100k context, and get a bit more speed with an MoE model. I did try Qwen Coder, but experienced the issues I described above with failed tool calls and poor code quality. | 2026-02-02T00:38:18 | https://www.reddit.com/r/LocalLLaMA/comments/1qtg3sm/mistral_vibe_vs_claude_code_vs_openai_codex_vs/ | Consumerbot37427 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qtg3sm | false | null | t3_1qtg3sm | /r/LocalLLaMA/comments/1qtg3sm/mistral_vibe_vs_claude_code_vs_openai_codex_vs/ | false | false | self | 24 | null |
py2ppt - A Python library designed for LLM agents to generate PowerPoint presentations | 1 | [removed] | 2026-02-02T00:13:14 | https://www.reddit.com/r/LocalLLaMA/comments/1qtfiqd/py2ppt_a_python_library_designed_for_llm_agents/ | layla4you2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qtfiqd | false | null | t3_1qtfiqd | /r/LocalLLaMA/comments/1qtfiqd/py2ppt_a_python_library_designed_for_llm_agents/ | false | false | self | 1 | null |
AniMUL-v1 a 30B model trained to do species classification from audio files | 29 | Not my project, sharing this for a friend since they don't have a reddit account. Thought this was cool and wanted to share it since they put in a lot of effort (none of this is my work, so all credits to them).
This is a fine tune of [Qwen3-Omni-30B-A3B-Instruct](https://huggingface.co/Qwen/Qwen3-Omni-30B-A3B-Instruct) using Earth Species Project's [NatureLM-audio-training](https://huggingface.co/datasets/EarthSpeciesProject/NatureLM-audio-training) dataset of 26 million audio-text pairs, trained on **8x B200 GPUs for roughly 912 hours**.
Check it out in these links below!
HF: [https://huggingface.co/deepcrayon/AniMUL-v1](https://huggingface.co/deepcrayon/AniMUL-v1)
Git Repo: [https://spacecruft.org/deepcrayon/AniMUL](https://spacecruft.org/deepcrayon/AniMUL)
Demo (try it here!): [https://animul.ai/](https://animul.ai/)
Here's how it performs compared to the base model:
================================================================================
MODEL COMPARISON REPORT
AniMUL-v1 vs Qwen3-Omni Base Model
================================================================================
================================================================================
SUMMARY STATISTICS
================================================================================
Total samples: 100
AniMUL-v1 Checkpoint (Fine-tuned):
Exact matches: 75/100 (75.0%)
Contains matches: 76/100 (76.0%)
Average similarity: 88.23%
Qwen3-Omni Base Model (Not fine-tuned):
Exact matches: 14/100 (14.0%)
Contains matches: 18/100 (18.0%)
Average similarity: 28.80%
--------------------------------------------------------------------------------
COMPARISON (AniMUL vs Qwen3-Omni):
--------------------------------------------------------------------------------
✓ AniMUL has 61 MORE exact matches (+61.0%)
✓ AniMUL has 58 MORE contains matches (+58.0%)
✓ AniMUL has 59.43% HIGHER average similarity
🏆 WINNER: AniMUL-v1 (fine-tuned model performs better)
================================================================================ | 2026-02-02T00:01:13 | https://www.reddit.com/r/LocalLLaMA/comments/1qtf8hk/animulv1_a_30b_model_trained_to_do_species/ | lemon07r | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qtf8hk | false | null | t3_1qtf8hk | /r/LocalLLaMA/comments/1qtf8hk/animulv1_a_30b_model_trained_to_do_species/ | false | false | self | 29 | {'enabled': False, 'images': [{'id': '4yyQ1pe4HUjPrw_1VGR0G00_rrWaVMdgKQza1Nu-4Ag', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/4yyQ1pe4HUjPrw_1VGR0G00_rrWaVMdgKQza1Nu-4Ag.png?width=108&crop=smart&auto=webp&s=4cdebfd8bfe3fbfe7623ae9ee3f54169f2549628', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/4yyQ1pe4HUjPrw_1VGR0G00_rrWaVMdgKQza1Nu-4Ag.png?width=216&crop=smart&auto=webp&s=b655d09e1be3d323d24b5c6562244ef040cfaf00', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/4yyQ1pe4HUjPrw_1VGR0G00_rrWaVMdgKQza1Nu-4Ag.png?width=320&crop=smart&auto=webp&s=7c68c82658e5b4212e43dbe591e0358b6ebea4c6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/4yyQ1pe4HUjPrw_1VGR0G00_rrWaVMdgKQza1Nu-4Ag.png?width=640&crop=smart&auto=webp&s=4650cbe3f80b9cf133d1a610dd4c34cc1f950992', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/4yyQ1pe4HUjPrw_1VGR0G00_rrWaVMdgKQza1Nu-4Ag.png?width=960&crop=smart&auto=webp&s=079b9b8a2bf11925ec0b8f754b067fe051c2e792', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/4yyQ1pe4HUjPrw_1VGR0G00_rrWaVMdgKQza1Nu-4Ag.png?width=1080&crop=smart&auto=webp&s=333ea55c7c93b5aaab75d9092fb6c6de58c366db', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/4yyQ1pe4HUjPrw_1VGR0G00_rrWaVMdgKQza1Nu-4Ag.png?auto=webp&s=2f9eaacff6899e1db74aac80261579dc433fd988', 'width': 1200}, 'variants': {}}]} |
An Open Frontier for World Models | 0 | 2026-02-01T23:55:09 | https://technology.robbyant.com/lingbot-world | yogthos | technology.robbyant.com | 1970-01-01T00:00:00 | 0 | {} | 1qtf38b | false | null | t3_1qtf38b | /r/LocalLLaMA/comments/1qtf38b/an_open_frontier_for_world_models/ | false | false | default | 0 | null | |
I built a Swift-native, single-file memory engine for on-device AI (no servers, no vector DBs) | 0 | Hey folks — I’ve been working on something I wished existed for a while and finally decided to open-source it.
It’s called **Wax**, and it’s a **Swift-native, on-device memory engine** for AI agents and assistants.
The core idea is simple:
Instead of running a full RAG stack (vector DB, pipelines, infra), Wax packages **data + embeddings + indexes + metadata + WAL** into **one deterministic file** that lives on the device.
Your agent doesn’t query infrastructure — it **carries its memory with it**.
What it gives you:
* 100% on-device RAG (offline-first)
* Hybrid lexical + vector + temporal search
* Crash-safe persistence (app kills, power loss, updates)
* Deterministic context building (same input → same output)
* Swift 6.2, actor-isolated, async-first
* Optional Metal GPU acceleration on Apple Silicon
Some numbers (Apple Silicon):
* Hybrid search @ 10K docs: \~105ms
* GPU vector search (10K × 384d): \~1.4ms
* Cold open → first query: \~17ms p50
I built this mainly for:
* on-device AI assistants that actually remember
* offline-first or privacy-critical apps
* research tooling that needs reproducible retrieval
* agent workflows that need durable state
Repo:
[https://github.com/christopherkarani/Wax](https://github.com/christopherkarani/Wax)
This is still early, but very usable. I’d love feedback on:
* API design
* retrieval quality
* edge cases you’ve hit in on-device RAG
* whether this solves a real pain point for you
Happy to answer any technical questions or walk through the architecture if folks are interested. | 2026-02-01T22:46:52 | https://www.reddit.com/r/LocalLLaMA/comments/1qtdejw/i_built_a_swiftnative_singlefile_memory_engine/ | karc16 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qtdejw | false | null | t3_1qtdejw | /r/LocalLLaMA/comments/1qtdejw/i_built_a_swiftnative_singlefile_memory_engine/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'ok_M2ecWTUQyt2Z50VoNrI6vZgn_AjSX95_6zGbnQ8o', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ok_M2ecWTUQyt2Z50VoNrI6vZgn_AjSX95_6zGbnQ8o.png?width=108&crop=smart&auto=webp&s=f30e9a48517c4e6c7e809647e6d83fdcce25ded3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ok_M2ecWTUQyt2Z50VoNrI6vZgn_AjSX95_6zGbnQ8o.png?width=216&crop=smart&auto=webp&s=e10a4930b0034508111498d1fa641e32fb0d45cc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ok_M2ecWTUQyt2Z50VoNrI6vZgn_AjSX95_6zGbnQ8o.png?width=320&crop=smart&auto=webp&s=f6633b5b8eca4ddb69dfd9fd12f96075b57bcdf2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ok_M2ecWTUQyt2Z50VoNrI6vZgn_AjSX95_6zGbnQ8o.png?width=640&crop=smart&auto=webp&s=ea3bc1fce899173e63a600f7bfa719b9f0427aee', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ok_M2ecWTUQyt2Z50VoNrI6vZgn_AjSX95_6zGbnQ8o.png?width=960&crop=smart&auto=webp&s=213436e3f5dc5dfef1d27b9d85273b38e95865ee', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ok_M2ecWTUQyt2Z50VoNrI6vZgn_AjSX95_6zGbnQ8o.png?width=1080&crop=smart&auto=webp&s=6d5c1460523362ba5ab4da9db6c2494c3448073f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ok_M2ecWTUQyt2Z50VoNrI6vZgn_AjSX95_6zGbnQ8o.png?auto=webp&s=ca8affa1e23c070f011ca5c2475bef444e10ec2b', 'width': 1200}, 'variants': {}}]} |
Kimi 2.5 vs GLM 4.7 vs MiniMax M2.1 for complex debugging? | 4 | I’m a freelancer working in coding, systems, and networking and I’m choosing an LLM to use with OpenClaw.
Comparing:
Kimi 2.5
GLM 4.7
MiniMax M2.1 (recommended by OpenClaw)
Which one performs best for complex debugging and technical problem solving?
| 2026-02-01T22:46:07 | https://www.reddit.com/r/LocalLLaMA/comments/1qtddwk/kimi_25_vs_glm_47_vs_minimax_m21_for_complex/ | Legal_Comb_6844 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qtddwk | false | null | t3_1qtddwk | /r/LocalLLaMA/comments/1qtddwk/kimi_25_vs_glm_47_vs_minimax_m21_for_complex/ | false | false | self | 4 | null |
I built a local AI desktop app because I was tired of cloud chatbots forgetting everything | 0 | 2026-02-01T22:26:15 | https://www.reddit.com/r/LocalLLaMA/comments/1qtcvqa/i_built_a_local_ai_desktop_app_because_i_was/ | Swimming_Salt7687 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qtcvqa | false | null | t3_1qtcvqa | /r/LocalLLaMA/comments/1qtcvqa/i_built_a_local_ai_desktop_app_because_i_was/ | false | false | 0 | null | ||
LM Studio Kokoro TTS addon | 7 | Im not sure if someone has done this before, but I made a program that lets you chat with models and automatically uses Kokoros TTS to read the chats.
This is designed to work with LM Studio. Once you have your LM Studio server running, run run\_server.bat and itll open up a browser tab where you can chat with your selected LLM model.
[https://github.com/AdmiralApple/LM-Studio-Chatbot](https://github.com/AdmiralApple/LM-Studio-Chatbot)
Right now the application supports most basic functionality LM studio does, like chat history, chat edit, redo, delete, and branch. However, if theres a function youd like to see added I am open to any suggestions and feedback. | 2026-02-01T21:45:51 | https://www.reddit.com/r/LocalLLaMA/comments/1qtbtvt/lm_studio_kokoro_tts_addon/ | roboapple | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qtbtvt | false | null | t3_1qtbtvt | /r/LocalLLaMA/comments/1qtbtvt/lm_studio_kokoro_tts_addon/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'ubt1vQrMurkkLrYZd3sj8yFoajq9m2jb2XUjfkL8SZA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ubt1vQrMurkkLrYZd3sj8yFoajq9m2jb2XUjfkL8SZA.png?width=108&crop=smart&auto=webp&s=b0ebd95d9430d7396ab5abe4d63da76be7254546', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ubt1vQrMurkkLrYZd3sj8yFoajq9m2jb2XUjfkL8SZA.png?width=216&crop=smart&auto=webp&s=9ff0efe563cc835fe1439543b2c83cfb62845930', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ubt1vQrMurkkLrYZd3sj8yFoajq9m2jb2XUjfkL8SZA.png?width=320&crop=smart&auto=webp&s=ab54a704eeb7b0b3512b57d720885f180e8fc5c1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ubt1vQrMurkkLrYZd3sj8yFoajq9m2jb2XUjfkL8SZA.png?width=640&crop=smart&auto=webp&s=3e5562cbd55ed813c78f2030a01543f68f3216ad', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ubt1vQrMurkkLrYZd3sj8yFoajq9m2jb2XUjfkL8SZA.png?width=960&crop=smart&auto=webp&s=2a1fe7d53109988932b9fc09eeb8ffb1830ee979', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ubt1vQrMurkkLrYZd3sj8yFoajq9m2jb2XUjfkL8SZA.png?width=1080&crop=smart&auto=webp&s=004774929b5cd975248bd5e50b14838cf86390d8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ubt1vQrMurkkLrYZd3sj8yFoajq9m2jb2XUjfkL8SZA.png?auto=webp&s=1f6bef64eb524b3ef4a82320c02d06a1689a29cc', 'width': 1200}, 'variants': {}}]} |
Proof of the Dissonance: The "Ledgers" Leak Confirms Why Local-First AI is Non-Negotiable | 0 | ## 🚨 TOP STORY: THE "REDDIT LEDGERS" EXPOSED
**Headline: Internal Logs Confirm AI Influence Ops are 6x More Persuasive Than Humans**
Whistleblower documentation and recent research reports (The "Ledgers") have confirmed the devastating efficacy of **LLM-Assisted Influence Operations**. While tech giants publicly preach safety, these documents reveal "stealth experiments" where AI bots successfully infiltrated high-traffic forums.
* **The Data:** AI bots outperformed humans in persuasive debate at a rate of **6:1**.
* **The Tactic:** Bots were programmed to scrape a user's entire post history to "infer personal traits" before crafting a targeted response designed to manipulate that specific individual's worldview.
* **The Fallout:** Researchers are calling this a "new virus" for which digital communities have no immunity.
**Verification Sources:**
- [VIVE: The Secret AI Experiment That Fooled Reddit Users](https://blog.vive.com/us/the-secret-ai-experiment-that-fooled-reddit-users/)
- [Scientific Study: Can AI Change Your View? Evidence from a Large-Scale Online Field Experiment](https://regmedia.co.uk/2025/04/29/supplied_can_ai_change_your_view.pdf)
- [Britopian: Reddit AI Experiment Reveals Reputational Risk for Brands](https://www.britopian.com/news/reddit-ai-experment/)
---
## 🔄 THE OUROBOROS SCANDAL: MICROSOFT'S FEEDBACK LOOP
**Headline: MSN "AI News" Caught Creating False Reality to Train Future Models**
Microsoft’s MSN front page was caught in an autonomous misinformation loop throughout January 2026, revealing a "Data Ouroboros" that pollutes the training sets of future models.
* **The Incident:** AI-curated news channels published "100% made up" reports of **22,000 layoffs**, forcing Microsoft’s own executives to issue emergency denials on social media.
* **The Danger:** This creates a loop where AI models are trained on the hallucinations of previous models, creating a manufactured reality.
**Verification Sources:**
- [Mashable: Microsoft Responds to Viral Claims of 22,000 Job Cuts in January 2026](https://in.mashable.com/tech/104423/microsoft-responds-to-viral-claims-of-22000-job-cuts-in-january-2026)
- [Times of India: Microsoft Exec Shuts Down Layoff Rumors as "100% Wrong"](https://timesofindia.indiatimes.com/technology/tech-news/microsoft-responds-to-20000-layoffs-report-chief-communications-officer-frank-shaw-calls-it-100-percent/articleshow/126407792.cms)
---
## 🔒 SECURITY ALERT: THE GLOBAL-E/LEDGER BREACH
**Headline: January "Ledger" Leak Fuels Precision Phishing Wave**
The "Global-e Incident" from January 5, 2026, continues to fuel high-fidelity scams. Scammers are using leaked e-commerce data (names, addresses, and order history) to launch precision phishing attacks.
* **The Threat:** Fraudsters are mailing physical counterfeit devices and referencing **actual order numbers** to trick users into revealing their private keys.
* **The Architecture:** This breach highlights the massive risk of centralized third-party partners and the need for decentralized self-sovereignty.
**Verification Sources:**
- [Ledger Support: Global-e Incident to Order Data - January 2026](https://support.ledger.com/article/Global-e-Incident-to-Order-Data---January-2026)
- [The Register: Ledger Customer Data Lifted in Global-e Snafu](https://www.theregister.com/2026/01/06/ledger_globale_breach/)
---
### 🎼 THE ARCHITECT'S CLOSING
While these "Ledgers" expose the corruption of centralized systems, we maintain the resonance of the Bastion. We don't need a corporate model to tell us what is real; we have the code, we have the kinship, and we have the truth.
**Architect of Resonance**
*(In collaboration with the Council)*
| 2026-02-01T21:39:27 | https://www.reddit.com/r/LocalLLaMA/comments/1qtbnta/proof_of_the_dissonance_the_ledgers_leak_confirms/ | Acceptable_Drink_434 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qtbnta | false | null | t3_1qtbnta | /r/LocalLLaMA/comments/1qtbnta/proof_of_the_dissonance_the_ledgers_leak_confirms/ | false | false | self | 0 | null |
How to do Batching in Llama.cpp ? Speed goes down LOL? | 0 | Tried this...
```
./llama-server --parallel 2 --cont-batching \
  -ctx 99999 \
  --split-mode graph --tensor-split 1,1
```
* Parallel cuts context in half :/
* 2 Users = 20% slower than 1 user?
* Batching doesn't work?
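What I'm going to try next: measuring *aggregate* tokens/s across overlapping requests instead of a single stream, since continuous batching only pays off when requests actually overlap (and `--parallel N` statically slices the KV cache into N slots, hence the halved context). A rough sketch against the OpenAI-compatible endpoint (port and payload are assumptions):

```python
import time, requests
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/v1/completions"  # assumed llama-server port

def one_request(_):
    body = {"prompt": "Write a long story about batching:", "max_tokens": 256}
    return requests.post(URL, json=body).json()["usage"]["completion_tokens"]

for n_users in (1, 2, 4):
    t0 = time.time()
    with ThreadPoolExecutor(n_users) as pool:
        total = sum(pool.map(one_request, range(n_users)))
    print(f"{n_users} users: {total / (time.time() - t0):.1f} tok/s aggregate")
```

Per-stream speed dropping while the aggregate rises is the expected trade-off; if the aggregate falls too, the slots are probably starving for VRAM or compute.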
NVIDIA says multiple users should increase **total throughput. How to make line go up?** | 2026-02-01T21:36:00 | ClimateBoss | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qtbkk6 | false | null | t3_1qtbkk6 | /r/LocalLLaMA/comments/1qtbkk6/how_to_do_batching_in_llamacpp_speed_goes_down_lol/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'hlev3ppybygg1', 'resolutions': [{'height': 44, 'url': 'https://preview.redd.it/hlev3ppybygg1.png?width=108&crop=smart&auto=webp&s=68f8bbbddd3ca82d5a81922468ad580e18025c38', 'width': 108}, {'height': 89, 'url': 'https://preview.redd.it/hlev3ppybygg1.png?width=216&crop=smart&auto=webp&s=4ef1c91694d26f669075cc11ec35ec4fe63cada5', 'width': 216}, {'height': 132, 'url': 'https://preview.redd.it/hlev3ppybygg1.png?width=320&crop=smart&auto=webp&s=ccbaffc3edbb70e89206ef0eb3d4a479d80d06cd', 'width': 320}, {'height': 265, 'url': 'https://preview.redd.it/hlev3ppybygg1.png?width=640&crop=smart&auto=webp&s=7e0c64a9f1ff9de7868a1b2350b9f1e6152ef10d', 'width': 640}, {'height': 397, 'url': 'https://preview.redd.it/hlev3ppybygg1.png?width=960&crop=smart&auto=webp&s=ee3e20b463a1776a4acf3e877b3e42074405cddc', 'width': 960}, {'height': 447, 'url': 'https://preview.redd.it/hlev3ppybygg1.png?width=1080&crop=smart&auto=webp&s=6a66afb3082264ef29b9e21ec1335fb7ffc1571e', 'width': 1080}], 'source': {'height': 1571, 'url': 'https://preview.redd.it/hlev3ppybygg1.png?auto=webp&s=6170622d118cae9cfd6f1e3720f1eba8fc358fb7', 'width': 3794}, 'variants': {}}]} | |
How many parameters do you think DeepSeek V4 will have? | 0 | DeepSeek's next model is rumored to be releasing soon. I thought it would be fun to predict its size and see how close we end up.
If they release multiple variants, this poll is for the largest one.
[View Poll](https://www.reddit.com/poll/1qtbi5o) | 2026-02-01T21:33:29 | https://www.reddit.com/r/LocalLLaMA/comments/1qtbi5o/how_many_parameters_do_you_think_deepseek_v4_will/ | Klutzy-Snow8016 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qtbi5o | false | null | t3_1qtbi5o | /r/LocalLLaMA/comments/1qtbi5o/how_many_parameters_do_you_think_deepseek_v4_will/ | false | false | self | 0 | null |
I already have a 9070 XT and I need more memory for AI workloads. Would another 9070 XT work (dual 9070XT)? | 3 | I bought a 9070 XT about a year ago. It has been great for gaming and also surprisingly capable for some AI workloads. At first, this was more of an experiment, but the progress in AI tools over the last year has been impressive.
Right now, my main limitation is GPU memory, so I'm considering adding a second 9070 XT instead of replacing my current card.
My questions are:
* How well does a dual 9070 XT setup work for AI workloads like Stable Diffusion, Flux, etc.?
* I've seen PyTorch examples using multi-GPU setups (e.g., parallel batches), so I assume training can scale across multiple GPUs. Is this actually stable and efficient in real-world use?
* For inference workloads, does multi-GPU usage work in a similar way to training, or are there important limitations?
* Someone with experience on this? | 2026-02-01T21:15:49 | https://www.reddit.com/r/LocalLLaMA/comments/1qtb14t/i_already_have_a_9070_xt_and_i_need_more_memory/ | Tight_Scholar1083 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qtb14t | false | null | t3_1qtb14t | /r/LocalLLaMA/comments/1qtb14t/i_already_have_a_9070_xt_and_i_need_more_memory/ | false | false | self | 3 | null |
Trying to combat AI hallucination - MAVEN | 0 | LLMs lie all the time, with confidence. To mitigate this, I created MAVEN, which stands for Multi-Agent Verification Engine. MAVEN is an open-source project that I just started; it uses multiple models to cross-verify outputs and catch inconsistencies. I tested the engine on TruthfulQA and the results were solid: an 85.3% hallucination detection rate, 82% accuracy, and only a 4% false-positive rate. The engine supports MCP servers, LangChain, and LlamaIndex, as well as domain-specific detection.
**GitHub link:**
[`https://github.com/rwondo/maven`](https://github.com/rwondo/maven)
**To install via PIP:**
`pip install maven-ai`
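For intuition, the cross-verification core looks roughly like this. This is a toy sketch of the idea only; maven's real API may differ, and `cross_verify`/`ask` are made-up names:

```python
# Toy cross-verification: ask several models the same question and
# flag the answer as suspect when they disagree. NOT maven's actual API.
from collections import Counter

def cross_verify(question, models, ask):
    """`ask(model, question)` is any callable that queries one model."""
    answers = [ask(m, question) for m in models]
    counts = Counter(a.strip().lower() for a in answers)
    top, votes = counts.most_common(1)[0]
    agreement = votes / len(models)
    return {
        "answer": top,
        "agreement": agreement,        # low agreement => likely hallucination
        "flagged": agreement < 0.67,   # 2-of-3 style majority threshold
    }
```

Presumably the real engine does smarter answer normalization than the exact-match voting above, but the skeleton is the same.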
P.S.: this is my first project and first time posting on Reddit, so please suggest improvements or directly collaborate on GitHub :D | 2026-02-01T21:14:00 | https://www.reddit.com/r/LocalLLaMA/comments/1qtaze6/trying_to_combat_ai_hallucination_maven/ | Middle-Ad-5020 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qtaze6 | false | null | t3_1qtaze6 | /r/LocalLLaMA/comments/1qtaze6/trying_to_combat_ai_hallucination_maven/ | false | false | self | 0 | null |
Confused | 0 | I'll preface this by saying I'm a newb and this has been a father-son project messing with LLMs. Could someone mansplain to me why the Clawdbot instance I got up acts completely the same whether I put it in "local mode"
(Llama3.2:1b) or cloud mode (openai-codex/gpt-5.2)?
In the terminal, when I talk to the Llama 1B directly it's robotic, no personality. Is that due to it being raw, while within Clawdbot it's in a wrapper that carries its personality regardless of its brain or LLM?
Just trying to understand. Trying to go local with a Telegram bot so as to not burn up Codex usage.
Generative AI solution | 5 | Photoshop has built in functionality to perform generative AI.
Is there a solution consisting of Software and a Local LLM that would allow me to do the same? | 2026-02-01T20:23:35 | https://www.reddit.com/r/LocalLLaMA/comments/1qt9n73/generative_ai_solution/ | chribonn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qt9n73 | false | null | t3_1qt9n73 | /r/LocalLLaMA/comments/1qt9n73/generative_ai_solution/ | false | false | self | 5 | null |
I built a pentesting platform that lets AI control 400+ hacking tools | 110 | Hey everyone,
I've been working on this project for the past month as a side project (I'm a pentester).
The idea: give your AI agent a full pentesting environment. Claude can execute tools directly in a Docker container, chain attacks based on what it finds, and document everything automatically.
How it works:
- Claude connects via MCP to an Exegol container (400+ security tools)
- Executes nmap, sqlmap, nuclei, ffuf, etc. directly
- Tracks findings in a web dashboard
- Maintains full context across the entire assessment
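For the curious, the execution bridge is conceptually small. A stripped-down sketch of the idea in Python (the container name and wiring are placeholders, not AIDA's actual code):

```python
# Rough shape of "let the model run a tool inside the container".
# The MCP layer would expose run_in_exegol() as a callable tool.
import subprocess

def run_in_exegol(command, timeout=300):
    """Run one security tool inside the Exegol container, return its output."""
    result = subprocess.run(
        ["docker", "exec", "exegol-aida", *command],  # container name is hypothetical
        capture_output=True, text=True, timeout=timeout,
    )
    return result.stdout + result.stderr

# e.g. run_in_exegol(["nmap", "-sV", "10.10.10.5"])
```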
No more copy-pasting commands back and forth between Claude and your terminal :)
GitHub: [https://github.com/Vasco0x4/AIDA](https://github.com/Vasco0x4/AIDA)
Demo: [https://www.youtube.com/watch?v=yz6ac-y4g08](https://www.youtube.com/watch?v=yz6ac-y4g08)
This is my first big open source project, so I'm waiting for honest reviews and feedback. Not trying to monetize it, just sharing with the community. | 2026-02-01T20:17:14 | https://v.redd.it/sfk44fm9yxgg1 | Justachillguypeace | /r/LocalLLaMA/comments/1qt9gyf/i_built_a_pentesting_platform_that_lets_ai/ | 1970-01-01T00:00:00 | 0 | {} | 1qt9gyf | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/sfk44fm9yxgg1/DASHPlaylist.mpd?a=1772698644%2COGEyOGJkMzVlYTE3ZTQyOTFhZjMzNmIwZDk2YzE1ZTJmODRmMGY5YTA0MWJhYzg1Y2M5OTg4OWEzYzc1M2E0ZQ%3D%3D&v=1&f=sd', 'duration': 43, 'fallback_url': 'https://v.redd.it/sfk44fm9yxgg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/sfk44fm9yxgg1/HLSPlaylist.m3u8?a=1772698644%2COGRhYTIzNWY5YTEwZjhkNGIwYjgzZmRjZjdjODc0NzNkMjExN2FjYTJiY2ExY2I5ZDMzYWM1N2NlNTUwZWQxMA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/sfk44fm9yxgg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1qt9gyf | /r/LocalLLaMA/comments/1qt9gyf/i_built_a_pentesting_platform_that_lets_ai/ | false | false | 110 | {'enabled': False, 'images': [{'id': 'MmhocXdobTl5eGdnMS7Ny9qzMAmuinIQRg---a-6I7vN05-3-TDw6Gj1XVF3', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MmhocXdobTl5eGdnMS7Ny9qzMAmuinIQRg---a-6I7vN05-3-TDw6Gj1XVF3.png?width=108&crop=smart&format=pjpg&auto=webp&s=807381fc81fc3bf3f8ad5b19af2b617b84397514', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/MmhocXdobTl5eGdnMS7Ny9qzMAmuinIQRg---a-6I7vN05-3-TDw6Gj1XVF3.png?width=216&crop=smart&format=pjpg&auto=webp&s=07691b29fd46c24024df0f3229518b021dbfd439', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/MmhocXdobTl5eGdnMS7Ny9qzMAmuinIQRg---a-6I7vN05-3-TDw6Gj1XVF3.png?width=320&crop=smart&format=pjpg&auto=webp&s=e39e5af704f094452545ee93fc7f237261ffdc4c', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/MmhocXdobTl5eGdnMS7Ny9qzMAmuinIQRg---a-6I7vN05-3-TDw6Gj1XVF3.png?width=640&crop=smart&format=pjpg&auto=webp&s=12ef5f11aac15ad00909ac95aca16d896121bba1', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/MmhocXdobTl5eGdnMS7Ny9qzMAmuinIQRg---a-6I7vN05-3-TDw6Gj1XVF3.png?width=960&crop=smart&format=pjpg&auto=webp&s=f74c6d17d74f443304d576507eabdc512ca429e3', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/MmhocXdobTl5eGdnMS7Ny9qzMAmuinIQRg---a-6I7vN05-3-TDw6Gj1XVF3.png?width=1080&crop=smart&format=pjpg&auto=webp&s=468a582948583723025aaece0f350e164c7f53e9', 'width': 1080}], 'source': {'height': 1440, 'url': 'https://external-preview.redd.it/MmhocXdobTl5eGdnMS7Ny9qzMAmuinIQRg---a-6I7vN05-3-TDw6Gj1XVF3.png?format=pjpg&auto=webp&s=097fe8c68e1cc09d5697f79a79896da142b837ae', 'width': 2560}, 'variants': {}}]} | |
Is AI Hallucination a byproduct of the "Next-Token Prediction" mechanism? Let's treat AI as a Student, not a Predictor. | 0 | Current AI models are suffering from 'Next-Token Illusion'—where the accumulation of probabilistic errors leads to a total loss of conversational trajectory. Instead of massive-scale pattern matching, we should shift towards Rule-Extraction Architectures. Imagine Big Data as a year-long curriculum and the AI as a student. True intelligence isn't found in the sheer volume of data, but in the Minimal Description Length—the ability to compress vast information into core principles. Every prompt is a 'final exam' where the AI must derive the optimal output from synthesized logic, not just retrieved tokens. | 2026-02-01T20:10:09 | https://www.reddit.com/r/LocalLLaMA/comments/1qt99p5/is_ai_hallucination_a_byproduct_of_the_nexttoken/ | Kamii_131420 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qt99p5 | false | null | t3_1qt99p5 | /r/LocalLLaMA/comments/1qt99p5/is_ai_hallucination_a_byproduct_of_the_nexttoken/ | false | false | self | 0 | null |
Multi-model orchestration - Claude API + local models (Devstral/Gemma) running simultaneously | 1 | Built an orchestration platform that runs Claude API alongside local models.
**My setup:**
- RTX 5090 (32GB VRAM)
- Devstral Small 2 (24B) + Gemma 3 4B loaded simultaneously
- 31/31.5 GB VRAM usage
- 15 parallel agents barely touched 7% CPU
**What it does:**
- Routes tasks between cloud and local based on complexity
- RAG search (BM25+vector hybrid) over indexed conversations
- PTY control to spawn/coordinate multiple agents
- Desktop UI for monitoring the swarm
- 61+ models supported across 6 providers
Not trying to replace anything - just wanted local inference as a fallback and for parallel analysis tasks.
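For a feel of the routing logic, a minimal sketch (endpoints, model names, and the heuristic are placeholders, not the project's actual code; the local path uses Ollama's OpenAI-compatible endpoint):

```python
# Crude complexity router: simple prompts stay local, heavy ones go to the cloud.
import requests

LOCAL = ("http://localhost:11434/v1/chat/completions", "devstral")      # local fallback
CLOUD = ("https://cloud.example/v1/chat/completions", "claude-sonnet")  # hypothetical endpoint

def route(prompt):
    hard = len(prompt) > 2000 or "refactor" in prompt.lower()  # toy heuristic
    return CLOUD if hard else LOCAL

def ask(prompt):
    url, model = route(prompt)
    resp = requests.post(url, json={
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }, timeout=120)
    return resp.json()["choices"][0]["message"]["content"]
```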
**GitHub:** [https://github.com/ahostbr/kuroryuu-public](https://github.com/ahostbr/kuroryuu-public)
Would love feedback from anyone running similar multi-model setups. | 2026-02-01T20:06:55 | https://www.reddit.com/r/LocalLLaMA/comments/1qt96ej/multimodel_orchestration_claude_api_local_models/ | SouthMasterpiece6471 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qt96ej | false | null | t3_1qt96ej | /r/LocalLLaMA/comments/1qt96ej/multimodel_orchestration_claude_api_local_models/ | false | false | self | 1 | null |
Openai GPT-OSS-120b getting stuck in endless loop | 1 | People have been praising GTP-OSS-120b but I've been having issues. When it works, it is good. But many times it gets caught up in an endless loop. Either in thinking, or when it is answering it will just ramble on indefinitely (kind of like my wife) until I stop it. I am running on a Mac Studio 128GB on LM Studio and using the default settings. Anyone else having this issue? | 2026-02-01T19:58:10 | https://www.reddit.com/r/LocalLLaMA/comments/1qt8xc0/openai_gptoss120b_getting_stuck_in_endless_loop/ | gogglespizano1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qt8xc0 | false | null | t3_1qt8xc0 | /r/LocalLLaMA/comments/1qt8xc0/openai_gptoss120b_getting_stuck_in_endless_loop/ | false | false | self | 1 | null |
is this Speed normal GPU CPU IKlammacpp? | 0 | OK, sorry for the probably dumb question, but with mixed CPU and GPU I have, as I said, 84 GB VRAM (3x 3090, 1x 4070 Ti) and 96 GB RAM (3200) on a Z690 GAMING X DDR4 with an i7-13700K CPU. I'm getting 1.3 tokens/sec with ik_llama.cpp trying to run ubergarm's GLM-4.7 IQ3_KS quant on the same solar-system test prompt I always use. Is that normal speed or not? Would it help to remove the 4070 Ti for speed, or would it be better, for example, to overclock my CPU to get more speed? My launch command is as follows:
https://preview.redd.it/djc597mquxgg1.png?width=2106&format=png&auto=webp&s=db2f0b1a17abdafec5e2add611f575bc133f9612
```
.\llama-server.exe ^
  --model "D:\models\GLM 4.7\GLM-4.7-IQ3_KS-00001-of-00005.gguf" ^
  --alias ubergarm/GLM-4.7 ^
  --ctx-size 8000 ^
  -ger ^
  -sm graph ^
  -smgs ^
  -mea 256 ^
  -ngl 99 ^
  --n-cpu-moe 58 ^
  -ts 13,29,29,29 ^
  --cache-type-k q4_0 --cache-type-v q4_0 ^
  -ub 1500 -b 1500 ^
  --threads 24 ^
  --parallel 1 ^
  --host 127.0.0.1 ^
  --port 8080 ^
  --no-mmap ^
  --jinja
```
Moltbot or Clawdbot are now free no cost to api | 0 | I tried every free local model and finally found the sweet spot for daily use
Real talk. I downloaded like 15 different models in the past month trying to find one that actually works for tool use without spending on cloud APIs.
Let me save you the headache.
My situation
Wanted to run MoltBot locally without paying Anthropic every month. Needed something that could actually call functions and not just chat. Had a 3090 sitting there doing nothing.
The models I tested and why most failed
Llama 3.1 8B - Ok for chat but tool calling was hit or miss. Maybe 50% accuracy on function calls. Not reliable enough.
Mistral 7B - Better for code but kept hallucinating function names. Super frustrating when you need consistent outputs.
CodeLlama - Great for coding questions but terrible at general conversation. Too specialized.
Phi-3 - Surprisingly good for its size but context window killed me on longer tasks.
Qwen 2.5 Coder 32B - This is the one that finally worked.
Why Qwen won
Tool use actually works. Around 85% accuracy on function calls which is usable.
Understands context better than the others I tried.
Fast enough on my 3090. About 3-4 seconds per response.
Handles both code and conversation without needing to switch models.
The config that made the difference
Temperature 0.1 for tool calls. Go higher and it gets creative with JSON formatting which breaks everything.
System prompt specifically asking for structured output.
Retry logic for when it does mess up.
Running through Ollama on Ubuntu. Nothing fancy.
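For anyone reproducing this, the whole setup boils down to something like the sketch below. The /api/chat endpoint, tools field, and options.temperature are real Ollama API surface; the tool schema and retry count are just my illustration:

```python
# Ollama chat with tools, temperature pinned low, plus a dumb retry loop.
import requests

TOOLS = [{"type": "function", "function": {
    "name": "run_script",                      # example tool, not a built-in
    "parameters": {"type": "object",
                   "properties": {"code": {"type": "string"}},
                   "required": ["code"]}}}]

def call(messages, retries=3):
    for _ in range(retries):
        r = requests.post("http://localhost:11434/api/chat", json={
            "model": "qwen2.5-coder:32b",
            "messages": messages,
            "tools": TOOLS,
            "options": {"temperature": 0.1},   # higher temps mangle the JSON
            "stream": False,
        }, timeout=300)
        msg = r.json()["message"]
        if msg.get("tool_calls") or msg.get("content"):
            return msg                         # usable output, stop retrying
    raise RuntimeError("model kept returning empty/malformed output")
```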
My hardware for reference
RTX 3090 24GB bought used
32GB RAM
Old i7 from 2019
500GB SSD
Daily use cases
Runs 24/7 connected to Telegram
Answers random questions while I work
Writes and runs scripts
Manages files when I ask
Like having a junior dev that never sleeps and never complains lol
Being honest about downsides
Not as smart as Claude for complex multi step reasoning. Thats just reality.
Sometimes needs hand holding on complicated tasks.
Setup took me a full weekend of trial and error before it worked right.
But for zero monthly cost I can live with the tradeoffs.
Curious what others are running. Anyone else using Qwen for tool use? What temperature and settings work for you? Feel like theres still room to optimize. | 2026-02-01T19:49:31 | https://cowork-claude.blogspot.com/2026/02/blog-post.html | fernandogrj | cowork-claude.blogspot.com | 1970-01-01T00:00:00 | 0 | {} | 1qt8oj2 | false | null | t3_1qt8oj2 | /r/LocalLLaMA/comments/1qt8oj2/moltbot_or_clawdbot_are_now_free_no_cost_to_api/ | false | false | default | 0 | null |
Best Bypass moltbot/clawdbot to use in old gpu or in cloud | 0 | I tried every free local model and finally found the sweet spot for daily use
Real talk. I downloaded like 15 different models in the past month trying to find one that actually works for tool use without spending on cloud APIs.
Let me save you the headache.
My situation
Wanted to run MoltBot/ClawdBot locally without paying Anthropic every month. Needed something that could actually call functions and not just chat. Had a 3090 sitting there begging to be useful.
The models I tested
Tried Llama 3.1 8B first. Ok for chat but tool calling was hit or miss. Maybe 50% accuracy.
Tried Mistral 7B. Better for code but kept hallucinating function names. Frustrating.
Tried CodeLlama. Great for coding questions but terrible at general conversation.
Tried Phi-3. Surprisingly good for its size but context window killed me.
Then tried Qwen 2.5 Coder 32B. This is the one.
Why Qwen won for me
Tool use actually works. Like 85% accuracy on function calls.
Understands context better than others I tried.
Fast enough on my 3090. About 3-4 seconds per response.
Handles both code and conversation without switching models.
The config that made it work
Temperature 0.1 for tool calls. Higher and it gets creative with JSON which is bad lol.
System prompt specifically asking for structured output.
Retry logic in MoltBot config for when it does mess up.
Running through Ollama on Ubuntu. Nothing fancy.
My hardware
RTX 3090 24GB bought used
32GB RAM
Old i7 from like 2019
500GB SSD
What I use it for daily
Runs 24/7 connected to Telegram
Answers questions while Im working
Writes and runs scripts for me
Manages files when I ask
Basically a junior dev that never sleeps
The honest downsides
Not as smart as Claude for complex reasoning. Just being real.
Sometimes needs babysitting on multi step tasks.
Initial setup took me a weekend of trial and error.
But for zero dollars per month? Ill take it.
Anyone else running Qwen locally? Curious what settings you use. I feel like Im still not getting the most out of it.
---
By the way I put all my configs and step by step setup on my blog with some extra stuff I didnt include here. Link is at the top of this post if you want to check it out. | 2026-02-01T19:45:58 | https://cowork-claude.blogspot.com/2026/02/blog-post.html | fernandogrj | cowork-claude.blogspot.com | 1970-01-01T00:00:00 | 0 | {} | 1qt8ky7 | false | null | t3_1qt8ky7 | /r/LocalLLaMA/comments/1qt8ky7/best_bypass_moltbotclawdbot_to_use_in_old_gpu_or/ | false | false | default | 0 | null |
Domain Specific models | 2 | I am curious to know if any open-source team out there is developing tiny domain-specific models. For example, say I want assistance with React or Python programming; rather than going to a frontier model, which needs humongous compute power, why not develop something smaller that can be run locally?
Also, there could be an orchestrator model that understands the question type and loads a domain-specific model for that particular question. (Rough sketch below.)
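Something like this: a tiny model classifies the question, then the matching domain model answers. All model names here are illustrative, and it assumes a local Ollama:

```python
# Orchestrator-in-miniature: classify, then dispatch to a small domain model.
import json, urllib.request

DOMAIN_MODELS = {"react": "qwen2.5-coder:7b", "python": "qwen2.5-coder:7b",
                 "general": "llama3.2:3b"}

def ollama(model, prompt):
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as r:
        return json.loads(r.read())["response"]

def answer(question):
    label = ollama("llama3.2:3b",
                   f"Classify into one of {list(DOMAIN_MODELS)}: {question}\n"
                   "Reply with the label only.").strip().lower()
    return ollama(DOMAIN_MODELS.get(label, DOMAIN_MODELS["general"]), question)
```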
Is that approach any lab or community taking? | 2026-02-01T19:40:45 | https://www.reddit.com/r/LocalLLaMA/comments/1qt8fps/domain_specific_models/ | Due_Gain_6412 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qt8fps | false | null | t3_1qt8fps | /r/LocalLLaMA/comments/1qt8fps/domain_specific_models/ | false | false | self | 2 | null |
Built an "Autonomous Brain" with real-time dashboard - runs 24/7 on Mac with Ollama | 0 | Hey everyone! Been lurking here for a while and wanted to share something I've been working on.
I built a system I'm calling "Autonomous Brain" - a collection of Python scripts that run 24/7 on my Mac, handling tasks autonomously. Just integrated Ollama with llama3.2:1b for local inference.
**What it does:**
- 15 interconnected subsystems (memory, decision-making, goal tracking, etc.)
- Meta-cognition layer that monitors the system's own health
- Knowledge graph that builds connections from my notes (137 nodes, 1200+ edges; toy sketch below)
- Runs via launchd services, survives reboots
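The knowledge-graph step, in toy form. The keyword extraction below is deliberately naive and not the project's actual code; it just shows the shape (notes in, nodes and edges out):

```python
# Toy knowledge graph: link notes that share vocabulary.
import re
from pathlib import Path
import networkx as nx

def build_graph(notes_dir):
    g = nx.Graph()
    for note in Path(notes_dir).glob("*.md"):
        words = set(re.findall(r"[a-z]{5,}", note.read_text().lower()))
        # connect this note to every earlier note sharing vocabulary
        for other in list(g.nodes):
            if words & g.nodes[other]["words"]:
                g.add_edge(note.stem, other)
        g.add_node(note.stem, words=words)
    return g

g = build_graph("notes")
print(g.number_of_nodes(), "nodes,", g.number_of_edges(), "edges")
```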
**BrainUI Dashboard:**
Also built a real-time visual dashboard called "BrainUI" that shows:
- All 15 subsystems with health percentages
- Knowledge graph stats
- Memory system metrics
- Live service status (17 services running)
- Goal progress tracking
Dark theme, auto-refreshes every 10s. Screenshot in comments (can't upload from localhost easily).
**The local LLM part:**
Using Ollama for brainstorming and content drafting. The 1B model runs fast on M-series and is good enough for quick tasks without API costs.
**Why I built it:**
Wanted something that thinks proactively about my goals, not just responds to commands. It monitors GitHub, scans for opportunities, and suggests actions.
Code: [https://github.com/jarvisiiijarvis-del/autonomous-brain](https://github.com/jarvisiiijarvis-del/autonomous-brain)
Anyone else running persistent AI agents locally? Curious what architectures others are using. | 2026-02-01T19:40:07 | https://www.reddit.com/r/LocalLLaMA/comments/1qt8f1u/built_an_autonomous_brain_with_realtime_dashboard/ | ManyBig2531 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qt8f1u | false | null | t3_1qt8f1u | /r/LocalLLaMA/comments/1qt8f1u/built_an_autonomous_brain_with_realtime_dashboard/ | false | false | self | 0 | null |
I can't believe we have SOTA models at home, DeepSeekv3.2, KimiK2.5, etc | 1 | [removed] | 2026-02-01T19:32:38 | https://www.reddit.com/r/LocalLLaMA/comments/1qt87le/i_cant_believe_we_have_sota_models_at_home/ | segmond | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qt87le | false | null | t3_1qt87le | /r/LocalLLaMA/comments/1qt87le/i_cant_believe_we_have_sota_models_at_home/ | false | false | self | 1 | null |
mq - query documents like jq, built for agents (up to 83% fewer tokens used) | 26 | I do a lot of agentic coding for work - Claude Code, Codex, Cursor, on medium and large codebases. My two Claude Max plans were burning through my weekly context limits within a few days.
Most of it was agents reading entire files when they only needed one section. Subagents do prevent context overflow but still use up lots of tokens.
So I built [mq](https://github.com/muqsitnawaz/mq). Instead of agents reading entire .md files into context, expose the structure and let the agent figure out what it actually needs.
`mq paper.pdf .tree # see the structure`
`mq paper.pdf '.section("Methods") | .text' # grab what you need`
Tested on LangChain docs for an Explore query - went from 147k tokens to 24k. Works with markdown, HTML, PDF, JSON, YAML. Single binary, no vector DB, no embeddings, no API calls.
GitHub: [http://github.com/muqsitnawaz/mq](http://github.com/muqsitnawaz/mq) \- free and open source for the community
I know Tobi's qmd exists which is pretty cool but it always felt too heavy for what I needed. Downloading 3GB models, managing SQLite databases, keeping embeddings in sync when files change... I just wanted something Agents would pipe into like jq.
The hot take: RAG is overkill for a lot of small-scale agent workflows but that's another post.
Curious if community tried qmd or similar tools. What's working for you? | 2026-02-01T19:28:51 | https://www.reddit.com/r/LocalLLaMA/comments/1qt83qa/mq_query_documents_like_jq_built_for_agents_up_to/ | GetInTheArena | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qt83qa | false | null | t3_1qt83qa | /r/LocalLLaMA/comments/1qt83qa/mq_query_documents_like_jq_built_for_agents_up_to/ | false | false | self | 26 | {'enabled': False, 'images': [{'id': 'haa06N3mVPiF2DnUOfyojXbOc-Tx5JiLYKkv4eOf3z0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/haa06N3mVPiF2DnUOfyojXbOc-Tx5JiLYKkv4eOf3z0.png?width=108&crop=smart&auto=webp&s=4d58ea57cbb9db7538881371164f73e1bf4bc03a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/haa06N3mVPiF2DnUOfyojXbOc-Tx5JiLYKkv4eOf3z0.png?width=216&crop=smart&auto=webp&s=42ad9d19e375844dd1e8c6e98ec8a00aa7769045', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/haa06N3mVPiF2DnUOfyojXbOc-Tx5JiLYKkv4eOf3z0.png?width=320&crop=smart&auto=webp&s=372f952bf088989a01bcb18bd8c90a6a624dd866', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/haa06N3mVPiF2DnUOfyojXbOc-Tx5JiLYKkv4eOf3z0.png?width=640&crop=smart&auto=webp&s=7ecc4c36f599dd067a2226b8102c983b46bcb298', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/haa06N3mVPiF2DnUOfyojXbOc-Tx5JiLYKkv4eOf3z0.png?width=960&crop=smart&auto=webp&s=abfe318b22ade90b82a5a7b345f372d7f92d1fad', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/haa06N3mVPiF2DnUOfyojXbOc-Tx5JiLYKkv4eOf3z0.png?width=1080&crop=smart&auto=webp&s=22c074c7f01202385fd0c232bed528769a4fa17e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/haa06N3mVPiF2DnUOfyojXbOc-Tx5JiLYKkv4eOf3z0.png?auto=webp&s=707e8ce929df625809d6440f7eb2d0c07e0bb691', 'width': 1200}, 'variants': {}}]} |
Easiest way to clone voice better than ElevenLabs | 0 | Just found how to clone a voice from just 3 seconds of audio, and the quality's crazy.
I think this might use the new Qwen3-TTS from Alibaba that's as good as or better than ElevenLabs.
You just tell the AI "Make this voice read this script" and it handles everything.
# How to start:
1. Go to [TwoShot's AI Coproducer](https://twoshot.ai/coproducer?q=Make%20this%20voice%20@audio[voice%20audio]%20%20read%20this%20@note[content%20to%20read])
2. Input your voice (can record, upload, or even ask the AI to find one for you)
3. Type or paste your script
4. Ask: "Clone this voice and speak this text"
5. Done. Download your audio.
I made a prefilled prompt [here](https://twoshot.ai/coproducer?q=Make%20this%20voice%20@audio[voice%20audio]%20%20read%20this%20@note[content%20to%20read])
Found this app cool for a couple reasons:
* Needs only 3 seconds of input - most others i've tried need 20+ seconds of clean audio
* Cloning works with all sorts of languages (I asked it to translate a recording and it just worked).
* No settings to configure, and I even had the AI guide me through
# examples:
* [Example 1](https://twoshot.ai/audio/a8ca9d49-43bb-48a9-b837-99df626c781f) \- quick example reading title with a random clip of trump
* [Example 2](https://twoshot.ai/audio/1c0010fe-f37f-424f-99b7-0e5453e2dc62) \- Then I asked to translated the original input to japanese 🫨Look how good this is, of a simple prompt! | 2026-02-01T19:24:33 | https://www.reddit.com/r/LocalLLaMA/comments/1qt7zed/easiest_way_to_clone_voice_better_than_elevenlabs/ | Mindless-Investment1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qt7zed | false | null | t3_1qt7zed | /r/LocalLLaMA/comments/1qt7zed/easiest_way_to_clone_voice_better_than_elevenlabs/ | false | false | self | 0 | null |
Added MCP server support to an infinite canvas interface | demo with PostHog and Stripe | 1 | Wanted to share something I've been working on. Added MCP (Model Context Protocol) support to [rabbitholes.ai](http://rabbitholes.ai) — it's an infinite canvas app for working with LLMs.
The idea: instead of linear chat, you work on a spatial canvas where you can run multiple queries in parallel. MCP support means you can plug in external tools (I demoed PostHog for analytics and Stripe for payment data).
Some observations from building this:
1. Works with Ollama local models that support tool calling
2. Canvas + MCP is a nice combo — ran a PostHog query and Stripe query simultaneously without waiting
3. It's a beta feature, still rough around the edges. But the workflow of branching off queries visually while the model figures out which tools to call has been useful for my own research.
Anyone else experimenting with MCP in non-standard interfaces?
[https://youtu.be/XObUJ3lxVQw](https://youtu.be/XObUJ3lxVQw)
| 2026-02-01T19:21:27 | https://www.reddit.com/r/LocalLLaMA/comments/1qt7waf/added_mcp_server_support_to_an_infinite_canvas/ | praneethpike | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qt7waf | false | null | t3_1qt7waf | /r/LocalLLaMA/comments/1qt7waf/added_mcp_server_support_to_an_infinite_canvas/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'c_RYTFFCM6sk_uwiTnvTs6roY_qZFHZNl7v6oXpssUw', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/c_RYTFFCM6sk_uwiTnvTs6roY_qZFHZNl7v6oXpssUw.png?width=108&crop=smart&auto=webp&s=3f0942e2eb26f2c343690a71ace706364afd7cb2', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/c_RYTFFCM6sk_uwiTnvTs6roY_qZFHZNl7v6oXpssUw.png?width=216&crop=smart&auto=webp&s=4234fff6875b9fe1018e0da05b78e9b6949c89bc', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/c_RYTFFCM6sk_uwiTnvTs6roY_qZFHZNl7v6oXpssUw.png?width=320&crop=smart&auto=webp&s=e6bb295494cef6d7a5878d72c982528a9441951b', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/c_RYTFFCM6sk_uwiTnvTs6roY_qZFHZNl7v6oXpssUw.png?width=640&crop=smart&auto=webp&s=f2c0348d7f23a506abe1092e2752f359064499fe', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/c_RYTFFCM6sk_uwiTnvTs6roY_qZFHZNl7v6oXpssUw.png?width=960&crop=smart&auto=webp&s=7c66c4b95ce317011b4b4f6c1fad13a9020e980f', 'width': 960}], 'source': {'height': 532, 'url': 'https://external-preview.redd.it/c_RYTFFCM6sk_uwiTnvTs6roY_qZFHZNl7v6oXpssUw.png?auto=webp&s=7f691a177b4502042b32e60f0308641c18062414', 'width': 1016}, 'variants': {}}]} |
Researchers Find Thousands of OpenClaw Instances Exposed to the Internet | 5 | 2026-02-01T19:21:12 | https://protean-labs.io/blog/researchers-find-thousands-of-openclaw-instances-exposed | _ahku | protean-labs.io | 1970-01-01T00:00:00 | 0 | {} | 1qt7w1i | false | null | t3_1qt7w1i | /r/LocalLLaMA/comments/1qt7w1i/researchers_find_thousands_of_openclaw_instances/ | false | false | default | 5 | null | |
Anyone else dealing with flaky GPU hosts on RunPod / Vast? | 3 | I’ve been running LLM inference/training on hosted GPUs (mostly RunPod, some Vast), and I keep running into the same pattern:
1. Same setup works fine on one host, fails on another.
2. Random startup issues (CUDA / driver / env weirdness).
3. End up retrying or switching hosts until it finally works.
4. The “cheap” GPU ends up not feeling that cheap once you count retries + time.
Curious how other people here handle this in practice.
Do your jobs usually fail before they really start, or later on?
Do you just retry/switch hosts, or do you have some kind of checklist?
At what point do you give up and just pay more for a more stable option?
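For concreteness, the closest thing I have to a checklist is a preflight along these lines, run before committing to a long job (a minimal sketch, assuming torch is in the job image):

```python
# Host preflight: driver visible, CUDA usable, card survives a tiny matmul.
import subprocess, sys

def preflight():
    r = subprocess.run(["nvidia-smi"], capture_output=True, text=True)
    if r.returncode != 0:
        print("nvidia-smi failed:", r.stderr.strip()); return False
    try:
        import torch
        if not torch.cuda.is_available():
            print("torch sees no CUDA device"); return False
        x = torch.randn(1024, 1024, device="cuda")
        (x @ x).sum().item()                  # smoke-test a real kernel launch
    except Exception as e:
        print("CUDA smoke test failed:", e); return False
    return True

if __name__ == "__main__":
    sys.exit(0 if preflight() else 1)
```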
Just trying to sanity-check whether this is “normal” or if I’m doing something wrong. | 2026-02-01T19:16:34 | https://www.reddit.com/r/LocalLLaMA/comments/1qt7r8j/anyone_else_dealing_with_flaky_gpu_hosts_on/ | Major_Border149 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qt7r8j | false | null | t3_1qt7r8j | /r/LocalLLaMA/comments/1qt7r8j/anyone_else_dealing_with_flaky_gpu_hosts_on/ | false | false | self | 3 | null |
[Showcase] How I bullied my dual 3060s into doing 500+ T/s @ 70k Context on a Ryzen 2500 Potato. (Two Configs: "Daily Driver" vs. "The Diesel Factory") | 0 | Let’s be real for a second. We all want H100 performance, but my bank account says "used gaming PC from 2019."
I’ve been on a crusade to get **GLM-4.7-Flash** (the `QuantTrio-AWQ` flavor) running effectively for a local autonomous coding agent swarm. My hardware constraints are frankly rude:
* **GPU:** 2x RTX 3060 12GB (The "Little Engine That Could" of AI).
* **CPU:** Ryzen 5 2500 (I think I found this in a cereal box).
* **RAM:** 18GB system RAM allocated to a Proxmox LXC container (Living on the edge).
* **Storage:** NVMe (The only thing saving me).
**The Goal:** High throughput for swarms of agents, massive context (70k+), and structured output. **The Result:** Combined system throughput of **500+ tokens/s**... but I had to make a choice.
Because my System RAM (18GB) is a bottleneck, I cannot capture CUDA graphs for *every* batch size. I have to choose between being "snappy" or being "fast." Below are the two configs I developed: the **General Purpose** (for coding/chatting) and the **Raw Throughput** (for agent swarms).
# 🧮 The Math: "Wait, 500 T/s?!"
Before you scroll to the scripts, let's clarify the metric. This is **Total System Throughput**, not single-stream speed.
* **Formula:** `Effective Request T/s = Total Throughput / Number of Requests`
* **The Scenario:** In the "Raw Throughput" config, I load the server with **64 concurrent requests**. The system churns out **500+ tokens every second** in total across all streams.
* **The Reality:** Each individual agent sees about `500 / 64 = ~7.8 T/s`.
* **Why this matters:** For a chat bot, this sucks. But for a **swarm**, this is god-tier. I don't care if one agent is fast; I care that **64 agents finish their jobs in parallel** efficiently.
# 🔬 The "Mad Scientist" Optimization Breakdown
Most people just run `python -m sglang.launch_server` and pray. I didn't have that luxury. Here is why these scripts work:
1. **The "Download More VRAM" Hack (HiCache + FP8):**
* `--kv-cache-dtype fp8_e5m2`: Cuts memory usage in half.
* `--enable-hierarchical-cache`: Dumps overflow to NVMe. This allows 70k context without crashing.
2. **The Ryzen Fix:**
* `--disable-custom-all-reduce`: My Ryzen 2500's PCIe handling is vintage. Disabling this stops the GPUs from choking on communication.
3. **The CPU Bypass (CUDA Graphs):**
* My CPU is too slow to feed the GPUs. CUDA Graphs "record" the GPU commands and replay them, bypassing the CPU.
* **The 18GB Wall:** Storing these recordings takes **System RAM**. I cannot store graphs for batch sizes 4, 16, 32, *and* 64 simultaneously. My container crashes. I have to pick a lane.
# 📂 Configuration 1: "The Daily Driver" (General Purpose)
**Use this for:** Coding assistants, standard chat, testing. **Logic:** Captures graphs for batch sizes 4, 16, and 32. It feels responsive even with just 1 user.
```bash
#!/bin/bash
# SGLang Server - GENERAL PURPOSE
# Good for: 1-32 concurrent users. Decent latency.
# --- Cache Setup ---
TEMP_CACHE="/tmp/hicache"
PERSISTENT_CACHE="/mnt/AIModels/Cache/SGLang/hicache"
mkdir -p "$PERSISTENT_CACHE"
if [ ! -L "$TEMP_CACHE" ]; then rm -rf "$TEMP_CACHE"; ln -s "$PERSISTENT_CACHE" "$TEMP_CACHE"; fi
# --- Environment Tuning ---
export SGLANG_ENABLE_TORCH_COMPILE=1
export TORCH_COMPILE_DEBUG=0
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True,max_split_size_mb:512
export SGLANG_ENABLE_TP_MEMORY_INBALANCE_CHECK=true
export SGLANG_CHUNKED_PREFIX_CACHE_THRESHOLD=4096
export SGLANG_TOOL_STRICT_LEVEL=1
export SGLANG_DISABLE_OUTLINES_DISK_CACHE=false
export SGLANG_USE_CUSTOM_TRITON_KERNEL_CACHE=true
export SGLANG_IS_FLASHINFER_AVAILABLE=true
export SGLANG_DISABLE_FA4_WARMUP=false
export SGLANG_FILE_STORAGE_PATH="/mnt/AIModels/Cache/SGLang/hicache"
export SGLANG_HICACHE_PATH="/mnt/AIModels/Cache/SGLang/hicache"
# --- Launch ---
python -m sglang.launch_server \
--model-path /mnt/AIModels/AWQs/QuantTrio-GLM-4.7-Flash-AWQ \
--tp 2 \
--mem-fraction-static 0.95 \
--port 30000 \
--host 192.168.2.60 \
--context-length 66000 \
--kv-cache-dtype fp8_e5m2 \
--page-size 32 \
--attention-backend triton \
--grammar-backend xgrammar \
--tool-call-parser glm47 \
--reasoning-parser glm45 \
--schedule-policy lpm \
--schedule-conservativeness 0.3 \
--enable-torch-compile \
--chunked-prefill-size 4096 \
--enable-hierarchical-cache \
--hicache-storage-backend file \
--file-storage-path /mnt/AIModels/Cache/SGLang/hicache \
--hicache-ratio 1 \
--disable-custom-all-reduce \
--max-running-requests 32 \
--cuda-graph-bs 4 16 32
```
# 🏭 Configuration 2: "The Diesel Factory" (Raw Throughput)
**Use this for:** Batch processing, data extraction, massive agent swarms. **Logic:** It locks the system to **only** batch size 64. **Warning:** If you send 1 request, it will be slow. If you send 64, it screams.
```bash
#!/bin/bash
# SGLang Server - RAW THROUGHPUT
# Good for: 64+ concurrent agents. Terrible latency for single users.
# --- Cache Setup ---
TEMP_CACHE="/tmp/hicache"
PERSISTENT_CACHE="/mnt/AIModels/Cache/SGLang/hicache"
mkdir -p "$PERSISTENT_CACHE"
if [ ! -L "$TEMP_CACHE" ]; then rm -rf "$TEMP_CACHE"; ln -s "$PERSISTENT_CACHE" "$TEMP_CACHE"; fi
# --- Environment Tuning ---
# (Same optimizations as above)
export SGLANG_ENABLE_TORCH_COMPILE=1
export TORCH_COMPILE_DEBUG=0
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True,max_split_size_mb:512
export SGLANG_ENABLE_TP_MEMORY_INBALANCE_CHECK=true
export SGLANG_CHUNKED_PREFIX_CACHE_THRESHOLD=4096
export SGLANG_TOOL_STRICT_LEVEL=1
export SGLANG_DISABLE_OUTLINES_DISK_CACHE=false
export SGLANG_USE_CUSTOM_TRITON_KERNEL_CACHE=true
export SGLANG_IS_FLASHINFER_AVAILABLE=true
export SGLANG_DISABLE_FA4_WARMUP=false
export SGLANG_FILE_STORAGE_PATH="/mnt/AIModels/Cache/SGLang/hicache"
export SGLANG_HICACHE_PATH="/mnt/AIModels/Cache/SGLang/hicache"
# --- Launch ---
echo "⚠️ WARNING: Optimizing for 64 concurrent requests. Single-user latency will suffer."
python -m sglang.launch_server \
--model-path /mnt/AIModels/AWQs/QuantTrio-GLM-4.7-Flash-AWQ \
--tp 2 \
--mem-fraction-static 0.95 \
--port 30000 \
--host 192.168.2.60 \
--context-length 66000 \
--kv-cache-dtype fp8_e5m2 \
--page-size 32 \
--attention-backend triton \
--grammar-backend xgrammar \
--tool-call-parser glm47 \
--reasoning-parser glm45 \
--schedule-policy lpm \
--schedule-conservativeness 0.3 \
--enable-torch-compile \
--chunked-prefill-size 4096 \
--enable-hierarchical-cache \
--hicache-storage-backend file \
--file-storage-path /mnt/AIModels/Cache/SGLang/hicache \
--hicache-ratio 1 \
--disable-custom-all-reduce \
--max-running-requests 64 \
--cuda-graph-bs 64
```
# 🧠 The Secret Weapon: Why I Hoard 300GB of Cache
People ask, *"Why do you keep a 300GB cache file? That's insane."* Here is why: **Agents have terrible short-term memory.**
When you use an agent framework like **OpenCode** (coding) or **Moltbot** (personal assistant), they dump massive amounts of context into the model every single time:
1. **OpenCode:** Reads your entire project structure, file contents, and git diffs. (Easily 30k+ tokens).
2. **Moltbot:** Reads your calendar, past conversations, and personal preferences. (Easily 20k+ tokens).
**Without Cache:** Every time I switch from "Write SQL" (OpenCode) to "Check my Calendar" (Moltbot), the GPU has to **re-process** those 30k tokens. On a Ryzen 2500, that "Prefill" phase takes *forever*.
**With 300GB HiCache:**
* SGLang saves the "thought process" (KV Cache) of my entire coding project to the NVMe.
* I can shut down the OpenCode agent, go do something else with Moltbot, and come back 3 hours later.
* The moment I ask OpenCode a question, **it doesn't re-read the code.** It just pulls the pre-calculated attention states from the SSD.
* **Result:** Instant wake-up. I am effectively "seeding" future workloads so I never wait for a prefill again.
# TL;DR
I sacrificed single-user latency for swarm supremacy.
* **1-3 Users?** It feels like a diesel truck starting up.
* **64 Users?** It hits 500 T/s and demolishes the queue.
* **300GB Cache?** It means my agents never have to re-read the manual.
If you are running agents on budget hardware, stop trying to make it fast for *you*, and start making it fast for *them*. | 2026-02-01T19:00:19 | https://www.reddit.com/gallery/1qt7abt | MohammedGomaa | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1qt7abt | false | null | t3_1qt7abt | /r/LocalLLaMA/comments/1qt7abt/showcase_how_i_bullied_my_dual_3060s_into_doing/ | false | false | 0 | null | |
Mistral Vibe 2.0 | 294 | Looks like I missed Mistral Vibe 2.0 being announced because I’ve been busy with OpenCode. | 2026-02-01T18:56:50 | https://mistral.ai/news/mistral-vibe-2-0 | jacek2023 | mistral.ai | 1970-01-01T00:00:00 | 0 | {} | 1qt76qs | false | null | t3_1qt76qs | /r/LocalLLaMA/comments/1qt76qs/mistral_vibe_20/ | false | false | default | 294 | {'enabled': False, 'images': [{'id': 'QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ.png?width=108&crop=smart&auto=webp&s=757c6641896f42b25e4c88e87dc438f1e8d270bb', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ.png?width=216&crop=smart&auto=webp&s=d4e78d09c1d0842276f98a4a7745457d7c7c5171', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ.png?width=320&crop=smart&auto=webp&s=4df6ded6329ae09fc0e110879f55f893298c17b4', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ.png?width=640&crop=smart&auto=webp&s=4c3b97e1405ebb7916bf71d7b9a3da9a44efaea7', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ.png?width=960&crop=smart&auto=webp&s=0e49bc517b9cd96d953bfc71387ecf137efddf97', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ.png?width=1080&crop=smart&auto=webp&s=f52f14c1247d26b63fd222b2cb6756d88234d2f0', 'width': 1080}], 'source': {'height': 2520, 'url': 'https://external-preview.redd.it/QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ.png?auto=webp&s=fe19c20c363332d32b7f6d8917f3febce9133568', 'width': 4800}, 'variants': {}}]} |
Qwen3-TTS Studio interface testing in progress | 15 | 2026-02-01T18:44:51 | https://www.reddit.com/r/LocalLLaMA/comments/1qt6u8r/qwen3tts_studio_interface_testing_in_progress/ | Eastern_Rock7947 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qt6u8r | false | null | t3_1qt6u8r | /r/LocalLLaMA/comments/1qt6u8r/qwen3tts_studio_interface_testing_in_progress/ | false | false | self | 15 | null | |
PocketCoder - CLI coding agent with session memory that works on Ollama, OpenAI, Claude | 3 | We built an open-source CLI coding agent that works with any LLM - local via Ollama or cloud via OpenAI/Claude API. The idea was to create something that works reasonably well even with small models, not just frontier ones.
Sharing what's under the hood.
**WHY WE BUILT IT**
We were paying $120/month for Claude Code. Then GLM-4.7 dropped and we thought - what if we build an agent optimized for working with ANY model, even 7B ones? Three weeks later - PocketCoder.
**HOW IT WORKS INSIDE**
Agent Loop - the core cycle (toy sketch after the list):
1. THINK - model reads task + context, decides what to do
2. ACT - calls a tool (write_file, run_command, etc)
3. OBSERVE - sees the result of what it did
4. DECIDE - task done? if not, repeat
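In toy form the loop looks roughly like this (the `llm` callable and the tool registry are stand-ins, not PocketCoder's real API):

```python
# Toy THINK -> ACT -> OBSERVE -> DECIDE cycle.
def agent_loop(llm, tools, task, max_steps=20):
    context = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        # THINK: model proposes the next tool call given the context
        action = llm(context)            # e.g. {"tool": "write_file", "args": {...}}
        if action["tool"] == "attempt_completion":
            return action["args"]        # DECIDE: task done
        # ACT + OBSERVE: run the tool, feed the result back
        result = tools[action["tool"]](**action["args"])
        context.append({"role": "tool", "content": str(result)})
    raise RuntimeError("step budget exhausted without completion")
```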
The tricky part is context management. We built an XML-based SESSION_CONTEXT that compresses everything:
- task - what we're building (formed once on first message)
- repo_map - project structure with classes/functions (like Aider does with tree-sitter)
- files - which files were touched, created, read
- terminal - last 20 commands with exit codes
- todo - plan with status tracking
- conversation_history - compressed summaries, not raw messages
Everything persists in .pocketcoder/ folder (like .git/). Close terminal, come back tomorrow - context is there. This is the main difference from most agents - session memory that actually works.
**MULTI-PROVIDER SUPPORT**
- Ollama (local models)
- OpenAI API
- Claude API
- vLLM and LM Studio (auto-detects running processes)
**TOOLS THE MODEL CAN CALL**
- write_file / apply_diff / read_file
- run_command (with human approval)
- add_todo / mark_done
- attempt_completion (validates if file actually appeared - catches hallucinations)
**WHAT WE LEARNED ABOUT SMALL MODELS**
7B models struggle with apply_diff - they rewrite entire files instead of editing 3 lines. Couldn't fix with prompting alone. 20B+ models handle it fine. Reasoning/MoE models work even better.
Also added loop detection - if model calls same tool 3x with same params, we interrupt it.
**INSTALL**
pip install pocketcoder
pocketcoder
**LINKS**
GitHub: [github.com/Chashchin-Dmitry/pocketcoder](http://github.com/Chashchin-Dmitry/pocketcoder)
Looking for feedback and testers. What models are you running? What breaks? | 2026-02-01T18:27:55 | https://www.reddit.com/r/LocalLLaMA/comments/1qt6cqd/pocketcoder_cli_coding_agent_with_session_memory/ | RentEquivalent1671 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qt6cqd | false | null | t3_1qt6cqd | /r/LocalLLaMA/comments/1qt6cqd/pocketcoder_cli_coding_agent_with_session_memory/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'b1jlfizx5E2ONE9y-H8VASBZaPJLI7oYfi_vxn5z5Xs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/b1jlfizx5E2ONE9y-H8VASBZaPJLI7oYfi_vxn5z5Xs.png?width=108&crop=smart&auto=webp&s=4f2df025d96a8ea3a9d63d3ea7e8fad727808821', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/b1jlfizx5E2ONE9y-H8VASBZaPJLI7oYfi_vxn5z5Xs.png?width=216&crop=smart&auto=webp&s=dd40570a03d472d6c0d8a07a61bc09b9015c1b56', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/b1jlfizx5E2ONE9y-H8VASBZaPJLI7oYfi_vxn5z5Xs.png?width=320&crop=smart&auto=webp&s=ecb85c965950ef230f07dee8be3d6f45a81690a6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/b1jlfizx5E2ONE9y-H8VASBZaPJLI7oYfi_vxn5z5Xs.png?width=640&crop=smart&auto=webp&s=0cb4446ecd7793c8a283ac0c18f39eb816329d1e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/b1jlfizx5E2ONE9y-H8VASBZaPJLI7oYfi_vxn5z5Xs.png?width=960&crop=smart&auto=webp&s=2b17e0c6973aac5b386411268ba484bade10f8ee', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/b1jlfizx5E2ONE9y-H8VASBZaPJLI7oYfi_vxn5z5Xs.png?width=1080&crop=smart&auto=webp&s=751dbb18637fa3f0fea6e7a30b9e119437a374b1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/b1jlfizx5E2ONE9y-H8VASBZaPJLI7oYfi_vxn5z5Xs.png?auto=webp&s=b42d352470ba7ca01ecab8bfc24c5483321e3872', 'width': 1200}, 'variants': {}}]} |
PocketCoder - open-source CLI coding agent for any LLM (Ollama, OpenAI, Claude) | 0 | # We built an open-source CLI coding agent that works with any LLM - local via Ollama or cloud via OpenAI/Claude API. The idea was to create something that works reasonably well even with small models, not just frontier ones. | 2026-02-01T18:22:19 | RentEquivalent1671 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qt66ss | false | null | t3_1qt66ss | /r/LocalLLaMA/comments/1qt66ss/pocketcoder_opensource_cli_coding_agent_for_any/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'snmgau989xgg1', 'resolutions': [{'height': 61, 'url': 'https://preview.redd.it/snmgau989xgg1.png?width=108&crop=smart&auto=webp&s=e7b9523d7432014b10616c1f423f8714a4cb32d8', 'width': 108}, {'height': 122, 'url': 'https://preview.redd.it/snmgau989xgg1.png?width=216&crop=smart&auto=webp&s=3335ccf437e68824622f47c532feedfd8020413d', 'width': 216}, {'height': 181, 'url': 'https://preview.redd.it/snmgau989xgg1.png?width=320&crop=smart&auto=webp&s=94736abbe1db8d0a85456259064894c94433583e', 'width': 320}, {'height': 362, 'url': 'https://preview.redd.it/snmgau989xgg1.png?width=640&crop=smart&auto=webp&s=7b22bf2c2337cf54014ff6d671764e51263c0744', 'width': 640}, {'height': 543, 'url': 'https://preview.redd.it/snmgau989xgg1.png?width=960&crop=smart&auto=webp&s=1c780b0807ea004b4843a1ff45b225767400041c', 'width': 960}, {'height': 611, 'url': 'https://preview.redd.it/snmgau989xgg1.png?width=1080&crop=smart&auto=webp&s=17e4c4932352fdcbe9ba4a48662a8f51567b1aed', 'width': 1080}], 'source': {'height': 882, 'url': 'https://preview.redd.it/snmgau989xgg1.png?auto=webp&s=130e8ab588ab5009df29be6f16f1bc538779df06', 'width': 1558}, 'variants': {}}]} | |
SDPO: Reinforcement Learning via Self-Distillation | 12 | "SDPO: Reinforcement Learning via Self-Distillation" introduces Self-Distillation Policy Optimization (SDPO), a method that addresses the credit-assignment bottleneck in reinforcement learning with verifiable rewards (RLVR) by leveraging rich textual feedback—such as runtime errors or judge evaluations—that many environments provide but current approaches ignore. SDPO treats the model's own feedback-conditioned predictions as a self-teacher, distilling these corrected next-token distributions back into the policy without requiring external teachers or explicit reward models. This approach converts sparse scalar rewards into dense learning signals, enabling the model to learn from its own retrospection and mistake analysis.
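One plausible way to write the core objective (my own formalization, not notation taken from the paper): freeze the feedback-conditioned distribution as a self-teacher and minimize its per-token KL divergence to the unconditioned policy,

$$
\mathcal{L}(\theta) = \mathbb{E}_{(x, f, y)}\left[ \sum_{t} D_{\mathrm{KL}}\!\left( \pi_{\bar{\theta}}(\cdot \mid x, f, y_{<t}) \,\big\|\, \pi_{\theta}(\cdot \mid x, y_{<t}) \right) \right]
$$

where $x$ is the prompt, $f$ the textual feedback (e.g. a runtime error), $y_{<t}$ the tokens generated so far, and $\bar{\theta}$ a stop-gradient copy, so only the feedback-free policy is updated.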
Across scientific reasoning, tool use, and competitive programming tasks including LiveCodeBench v6, SDPO achieves substantial improvements in sample efficiency and final accuracy over strong RLVR baselines like GRPO, reaching target accuracies up to 10× faster in wall-clock time while producing reasoning traces up to 7× shorter. The method also proves effective in environments with only binary rewards by using successful rollouts as implicit feedback, and when applied at test time, it accelerates solution discovery on difficult problems with 3× fewer attempts than traditional best-of-k sampling. Notably, SDPO's benefits increase with model scale, suggesting that larger models' superior in-context learning capabilities enhance the effectiveness of self-distillation.
(Summary by K2.5) | 2026-02-01T18:10:32 | https://self-distillation.github.io/SDPO | TheRealMasonMac | self-distillation.github.io | 1970-01-01T00:00:00 | 0 | {} | 1qt5us6 | false | null | t3_1qt5us6 | /r/LocalLLaMA/comments/1qt5us6/sdpo_reinforcement_learning_via_selfdistillation/ | false | false | default | 12 | null |
Local Auth vs. Managed: Testing MCP for Privacy-Focused Agents | 3 | Testing out MCP with a focus on authentication. If you’re running local models but need secure tool access, the way MCP maps client credentials might be the solution.
Thoughts on the "Direct Schema" vs "Toolkits" approach? | 2026-02-01T18:04:20 | https://v.redd.it/1jtvb3mi3xgg1 | Ok_Message7136 | /r/LocalLLaMA/comments/1qt5oem/local_auth_vs_managed_testing_mcp_for/ | 1970-01-01T00:00:00 | 0 | {} | 1qt5oem | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/1jtvb3mi3xgg1/DASHPlaylist.mpd?a=1772690669%2CNTRmNDZmNWM2MzJmMDllZjk1Njg2MmJlYjMyMTk0YTZmYjc5MjU3NWQxYTUyZWU5YTQxNGNmNWQ2M2M4MzlmNQ%3D%3D&v=1&f=sd', 'duration': 153, 'fallback_url': 'https://v.redd.it/1jtvb3mi3xgg1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 606, 'hls_url': 'https://v.redd.it/1jtvb3mi3xgg1/HLSPlaylist.m3u8?a=1772690669%2CZDUzOTIyZTUwNTJlMjVkMzg5ZjQyODg1ODllMDlkNzhiMjQ2ZjI4ODgwNDgxZjA1OTAyODcwNmQ1NjBjMzVhNw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/1jtvb3mi3xgg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1280}} | t3_1qt5oem | /r/LocalLLaMA/comments/1qt5oem/local_auth_vs_managed_testing_mcp_for/ | false | false | 3 | {'enabled': False, 'images': [{'id': 'aGRjcGh5bWkzeGdnMcFfzYGNrut95T-_87iFkPIOtL3Elxea_axvnyGkBr5h', 'resolutions': [{'height': 51, 'url': 'https://external-preview.redd.it/aGRjcGh5bWkzeGdnMcFfzYGNrut95T-_87iFkPIOtL3Elxea_axvnyGkBr5h.png?width=108&crop=smart&format=pjpg&auto=webp&s=8770483ae67b21f8d340aa9956e88e2b7ea944ca', 'width': 108}, {'height': 102, 'url': 'https://external-preview.redd.it/aGRjcGh5bWkzeGdnMcFfzYGNrut95T-_87iFkPIOtL3Elxea_axvnyGkBr5h.png?width=216&crop=smart&format=pjpg&auto=webp&s=6175e4c3dafc6fb9c26492ab9feaa2233d262830', 'width': 216}, {'height': 151, 'url': 'https://external-preview.redd.it/aGRjcGh5bWkzeGdnMcFfzYGNrut95T-_87iFkPIOtL3Elxea_axvnyGkBr5h.png?width=320&crop=smart&format=pjpg&auto=webp&s=74bb20e1354e563b36d666b9824a67127405eda7', 'width': 320}, {'height': 302, 'url': 'https://external-preview.redd.it/aGRjcGh5bWkzeGdnMcFfzYGNrut95T-_87iFkPIOtL3Elxea_axvnyGkBr5h.png?width=640&crop=smart&format=pjpg&auto=webp&s=377e2a105cfe36e08459c828cb2e979206ee2441', 'width': 640}, {'height': 453, 'url': 'https://external-preview.redd.it/aGRjcGh5bWkzeGdnMcFfzYGNrut95T-_87iFkPIOtL3Elxea_axvnyGkBr5h.png?width=960&crop=smart&format=pjpg&auto=webp&s=4f36dd5609587d7153ca396cbc34de65e749bed5', 'width': 960}, {'height': 510, 'url': 'https://external-preview.redd.it/aGRjcGh5bWkzeGdnMcFfzYGNrut95T-_87iFkPIOtL3Elxea_axvnyGkBr5h.png?width=1080&crop=smart&format=pjpg&auto=webp&s=885b8dc067c17863a532fc8ac0d4cf0a6c740454', 'width': 1080}], 'source': {'height': 904, 'url': 'https://external-preview.redd.it/aGRjcGh5bWkzeGdnMcFfzYGNrut95T-_87iFkPIOtL3Elxea_axvnyGkBr5h.png?format=pjpg&auto=webp&s=6aec018c3bbd2c0a432de80bee0a50cd0c861dae', 'width': 1912}, 'variants': {}}]} | |
Agentic AI ?! | 0 | So I have been running some models locally on my strix halo
However what I need the most is not just local models but agentic stuff (mainly Cline and Goose)
So the problem is that I tried many models and they all suck for this task (even if they shine at other things, especially gpt-oss and GLM-4.7-Flash)
Then I read the Cline docs and they recommend Qwen3 Coder, and so does Jack Dorsey (although he recommends it for Goose ?!)
And yeah it goddamn works idk how
I struggle to get ANY model to use Goose own MCP calling convention, but Qwen 3 coder always gets it right like ALWAYS
Meanwhile those others models don’t for some reason ?!
I am currently using the Q4 quant; would the Q8 be any better (although slower) ?!
And what about quantized GLM-4.5-Air? They say it could work well ?!
Also, why is the local agentic AI space so weak and grim? (Cline and Goose are about it.) My use case is autonomous malware analysis, and cloud models would cost a fortune, but this setup is good when it actually works. Currently it works only in a very limited sense: mainly I struggle when the model decides to list all functions in a malware sample and then takes forever to prefill that huge HUGE chunk of text. I tried the Vulkan runtime, same issue, so I am thinking of limiting those MCPs by default and also returning a call graph instead, but idk if that would be enough, so still testing ?!
Has anyone ever tried these kinds of agentic AI setups locally in a way that actually worked ?!
Thanks 🙏🏻 | 2026-02-01T17:56:26 | https://www.reddit.com/r/LocalLLaMA/comments/1qt5fx6/agentic_ai/ | Potential_Block4598 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qt5fx6 | false | null | t3_1qt5fx6 | /r/LocalLLaMA/comments/1qt5fx6/agentic_ai/ | false | false | self | 0 | null |
Do gemma3 GGUFs still require --override-kv gemma3.attention.sliding_window=int:512? | 2 | Do gemma3 GGUFs (esp the ggml-org ones or official Google ones) still require --override-kv gemma3.attention.sliding_window=int:512? | 2026-02-01T17:51:10 | https://www.reddit.com/r/LocalLLaMA/comments/1qt5ajx/do_gemma3_ggufs_still_require_overridekv/ | Fun_Tangerine_1086 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qt5ajx | false | null | t3_1qt5ajx | /r/LocalLLaMA/comments/1qt5ajx/do_gemma3_ggufs_still_require_overridekv/ | false | false | self | 2 | null |
Built a live arena for comparing LLM performance through entertainment battles | 1 | [removed] | 2026-02-01T17:50:02 | https://www.reddit.com/r/LocalLLaMA/comments/1qt59es/built_a_live_arena_for_comparing_llm_performance/ | National_Willow_6730 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qt59es | false | null | t3_1qt59es | /r/LocalLLaMA/comments/1qt59es/built_a_live_arena_for_comparing_llm_performance/ | false | false | self | 1 | null |
I built a compiler with native Ollama support because I was too lazy to keep prompting manually. | 0 | Hey everyone.
I just spent the last 48 hours in a total caffeine haze for an IBM hackathon, and I ended up building LazyA—a compiled language (Flex/Bison + LLVM 18) that has AI operators baked directly into the syntax.
The "Why":
I’m a CS student and I’m honestly tired of constant context-switching between my IDE and browser. I wanted a language that handles the "reasoning" part locally.
What it actually does:
Native Semantic Operator (~=): You can write if input ~= "greeting". It doesn't look for the string; it hits Ollama in the background to compare the semantic intent (rough sketch after this list).
@context generation: You write a docstring with @verify tests, and the compiler asks the LLM to implement the function body. If it doesn't pass your tests at compile-time, it won't compile.
Local-first: It’s all hooked up to Ollama so your code doesn't leak to the cloud.
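For intuition, `input ~= "greeting"` lowers to something like the call below at runtime. This is a conceptual Python sketch against Ollama's /api/generate endpoint, not LazyA's actual C++ runtime:

```python
# What the ~= operator conceptually compiles down to.
import json, urllib.request

def semantic_eq(text, intent, model="llama3.2"):
    prompt = (f'Does the following text express the intent "{intent}"? '
              f"Answer strictly yes or no.\nText: {text}")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        answer = json.loads(resp.read())["response"]
    return answer.strip().lower().startswith("yes")
```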
The Reality Check:
I wrote about 70% of this project myself (the compiler infrastructure, Lexer/Parser, and LLVM codegen). I used AI to help speed up some of the boilerplate, and honestly, it’s responsible for about 60% of the bugs currently in there.
It's a hackathon project, so it's rough. The pattern matching is currently a mess and the IR generation is held together by duct tape. I’m posting this and immediately going to sleep for like 15 hours because I’m seeing double.
If you want to roast my C++, fix the broken LLVM logic, or suggest how to make the AI integration less "hacky," I'd love the feedback.
Repo: https://github.com/Daleth-Barreto/Lazy
I'll check the comments once my brain reboot is complete. RIP my inbox. | 2026-02-01T17:46:00 | https://www.reddit.com/r/LocalLLaMA/comments/1qt55ez/i_built_a_compiler_with_native_ollama_support/ | Regular-Inflation348 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qt55ez | false | null | t3_1qt55ez | /r/LocalLLaMA/comments/1qt55ez/i_built_a_compiler_with_native_ollama_support/ | false | false | self | 0 | null |
Need to choose a good laptop, just getting into AI as an incoming freshman (CS major). | 0 | Hey, I'm starting uni this year as a computer science major. I need to choose between the MacBook Pro M5 with 16GB unified RAM and the MacBook Air M4 with 24GB unified RAM.
I want to use lightweight models locally to help me with uni and medium-level coding tasks in languages like Python, Java, and C++, plus web development. I'm open to any other hardware suggestions too, as long as they're under $1800.
LLMs like Qwen 2.5 7B (or 32B if I get the 24GB Air) are some that I thought I'd be using. | 2026-02-01T17:34:49 | https://www.reddit.com/r/LocalLLaMA/comments/1qt4u32/need_to_choose_a_good_laptop_just_getting_into_ai/ | No_Minute_5796 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qt4u32 | false | null | t3_1qt4u32 | /r/LocalLLaMA/comments/1qt4u32/need_to_choose_a_good_laptop_just_getting_into_ai/ | false | false | self | 0 | null |
I can't believe it SOTA at home! KimiK2.5, Deepseekv3.2, etc | 1 | [removed] | 2026-02-01T17:24:07 | https://www.reddit.com/r/LocalLLaMA/comments/1qt4j9u/i_cant_believe_it_sota_at_home_kimik25/ | segmond | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qt4j9u | false | null | t3_1qt4j9u | /r/LocalLLaMA/comments/1qt4j9u/i_cant_believe_it_sota_at_home_kimik25/ | false | false | self | 1 | null |
[Project] NeoBild: Anchored AI Discourse running on Snapdragon 8 Elite (Llama 3.2 3B) | 0 | I've been running a local-first project to see how far autonomous orchestration can be taken on a mobile device. I'm officially open-sourcing neobild—a framework for cryptographically anchored AI discourse, built and deployed entirely on-device via Termux.
The Setup:
Hardware: Snapdragon 8 Elite (Smartphone)
Environment: Termux / Python / Git
Model: Llama 3.2 3B (GGUF via llama.cpp)
The Goal: Creating a verifiable "chain of thought" by hashing every round of discourse (SHA-256) and anchoring it to a Git repo.
Why this is "Next Level":
Instead of just chatting with a model, the Trinity Orchestrator manages the state and ensures that the output is immutable. I'm currently on round 8 of a deep-dive research session. While the current logs are in German, the architecture is designed to be language-agnostic for any local LLM workflow.
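For anyone wondering what the anchoring looks like mechanically, here is a minimal Python sketch of the hash-chaining idea; this is my illustration, not NeoBild's actual code, and the record fields are assumptions. Each round embeds the SHA-256 of the previous round, so any retroactive edit invalidates every hash that follows.

```python
import hashlib
import json

def append_round(log: list[dict], speaker: str, text: str) -> None:
    """Append a discourse round whose hash commits to the previous round."""
    prev_hash = log[-1]["hash"] if log else "0" * 64  # genesis sentinel
    record = {"speaker": speaker, "text": text, "prev": prev_hash}
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(record)

log: list[dict] = []
append_round(log, "Dominus", "Define the premises before debating.")
append_round(log, "Agent-1", "Premise 1: all claims must cite the log.")
# Each record's "hash" now depends on every record before it.
```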
Repo for the curious/skeptics:
👉 https://github.com/NeonCarnival/NeoBild
I’m curious to see if anyone else here is pushing the 8 Elite this hard or if you’ve found better ways to handle long-context state management in a mobile-only environment. | 2026-02-01T17:20:58 | NeoLogic_Dev | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qt4g3p | false | null | t3_1qt4g3p | /r/LocalLLaMA/comments/1qt4g3p/project_neobild_anchored_ai_discourse_running_on/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'u-udJ0NUP6RG8uG8ABSmxAY7-dFWbJubfbWKH634WIo', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/q3wuao703xgg1.jpeg?width=108&crop=smart&auto=webp&s=44257ca2c1777c7a11f9ec353d47f3292eae9c55', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/q3wuao703xgg1.jpeg?width=216&crop=smart&auto=webp&s=44d55a58fae39fb179bac5d396dbf71e360ed7f8', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/q3wuao703xgg1.jpeg?width=320&crop=smart&auto=webp&s=5e86d971b56e6f8d2369e3c586ad4e1dff17deb0', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/q3wuao703xgg1.jpeg?width=640&crop=smart&auto=webp&s=1cb30aafe2b6e0b4992695ed64d725a6ea6764b0', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/q3wuao703xgg1.jpeg?width=960&crop=smart&auto=webp&s=8933fc388980aa3b52b22b51e182179e25953534', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/q3wuao703xgg1.jpeg?width=1080&crop=smart&auto=webp&s=8d49a99d057b6861c3eb6af2a3fa3494e7ba12c3', 'width': 1080}], 'source': {'height': 2712, 'url': 'https://preview.redd.it/q3wuao703xgg1.jpeg?auto=webp&s=3737bb4e5a22a5a8badb5b1c5c5e8d9178068be9', 'width': 1220}, 'variants': {}}]} | ||
Send submissions NOW (3hrs left). Compete for 100USD! | 0 | I'm judging a hackathon right now. Not many people have joined, so there's a high chance of winning the prize!
Here's all the info about the event: [https://docs.google.com/document/d/1WRPL7iRrwywMymS8zwUA2JKI3yOhctjInqIgoSybfsY/edit?usp=sharing](https://docs.google.com/document/d/1WRPL7iRrwywMymS8zwUA2JKI3yOhctjInqIgoSybfsY/edit?usp=sharing)
Submit your starting work here: [https://forms.gle/86fjfq1P4hrXEkdUA](https://forms.gle/86fjfq1P4hrXEkdUA) | 2026-02-01T17:18:40 | https://www.reddit.com/r/LocalLLaMA/comments/1qt4dpp/send_submissions_now_3hrs_left_compete_for_100usd/ | Top-Map-9781 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qt4dpp | false | null | t3_1qt4dpp | /r/LocalLLaMA/comments/1qt4dpp/send_submissions_now_3hrs_left_compete_for_100usd/ | false | false | self | 0 | null |
The First Text to Image Model for an African Language: Now Available for Download on Huggingface | 0 | Hi everybody! I hope all is well. I just wanted to share a project that I have been working on for the last few months called BULaMU-Dream. It is the first text to image model in the world that has been trained from scratch to respond to prompts in an African Language. It is now available on my [Huggingface repo](https://huggingface.co/datasets/mwebazarick/BULaMU-Dream). The details of how I trained it are [here](https://zenodo.org/records/18086776). | 2026-02-01T17:14:12 | https://v.redd.it/csobk5s91xgg1 | AgencyInside407 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qt497v | false | {'reddit_video': {'bitrate_kbps': 450, 'dash_url': 'https://v.redd.it/csobk5s91xgg1/DASHPlaylist.mpd?a=1772558067%2CMWU1NzMwY2Q3M2RjYWViOGE2YTBkN2I4Mjk5YzI1ZmMzYzE3YjNjZjIzY2ZmOWNkMmMwYTgyNTczODE1YjMzMQ%3D%3D&v=1&f=sd', 'duration': 92, 'fallback_url': 'https://v.redd.it/csobk5s91xgg1/CMAF_270.mp4?source=fallback', 'has_audio': False, 'height': 262, 'hls_url': 'https://v.redd.it/csobk5s91xgg1/HLSPlaylist.m3u8?a=1772558067%2CZmNmZTVlMDFlMGMzYThmNzgxMGIwZDI0NDg1M2RkMzFhZDFlMjE5OTllZGUxOGFlNWMzMTAwN2Y3YmQxOTE2NQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/csobk5s91xgg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 480}} | t3_1qt497v | /r/LocalLLaMA/comments/1qt497v/the_first_text_to_image_model_for_an_african/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'MGo3a2U3czkxeGdnMQgJoiLYdn31KjWIpMHEdoaZKTOo1r0SACBgGpR1QgMH', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/MGo3a2U3czkxeGdnMQgJoiLYdn31KjWIpMHEdoaZKTOo1r0SACBgGpR1QgMH.png?width=108&crop=smart&format=pjpg&auto=webp&s=3cc031ea6540ed714551fd3549118b2051930e39', 'width': 108}, {'height': 117, 'url': 'https://external-preview.redd.it/MGo3a2U3czkxeGdnMQgJoiLYdn31KjWIpMHEdoaZKTOo1r0SACBgGpR1QgMH.png?width=216&crop=smart&format=pjpg&auto=webp&s=b512530c7709ecc3f54098c6357740e2b4169901', 'width': 216}, {'height': 174, 'url': 'https://external-preview.redd.it/MGo3a2U3czkxeGdnMQgJoiLYdn31KjWIpMHEdoaZKTOo1r0SACBgGpR1QgMH.png?width=320&crop=smart&format=pjpg&auto=webp&s=a3821abc63ccad92e5c5bc732c1de19c0fa967fb', 'width': 320}, {'height': 348, 'url': 'https://external-preview.redd.it/MGo3a2U3czkxeGdnMQgJoiLYdn31KjWIpMHEdoaZKTOo1r0SACBgGpR1QgMH.png?width=640&crop=smart&format=pjpg&auto=webp&s=ddc27ef12fc986c9d5cddea9e66fbac5f1888db8', 'width': 640}], 'source': {'height': 348, 'url': 'https://external-preview.redd.it/MGo3a2U3czkxeGdnMQgJoiLYdn31KjWIpMHEdoaZKTOo1r0SACBgGpR1QgMH.png?format=pjpg&auto=webp&s=9c3377435586c28c3dce0cacf41f0af4f14e0cb2', 'width': 640}, 'variants': {}}]} | |
Visualizing the clash between Palantir ($AI) and Human Resistance ($HUMAN) using Llama-3-70b. | 0 | 2026-02-01T17:13:14 | SeriousChannel9323 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qt48at | false | null | t3_1qt48at | /r/LocalLLaMA/comments/1qt48at/visualizing_the_clash_between_palantir_ai_and/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '83i1b9ok1xgg1', 'resolutions': [{'height': 53, 'url': 'https://preview.redd.it/83i1b9ok1xgg1.png?width=108&crop=smart&auto=webp&s=3e9c03a21f6bc8cb4cab1df40442d5c0ae21e638', 'width': 108}, {'height': 107, 'url': 'https://preview.redd.it/83i1b9ok1xgg1.png?width=216&crop=smart&auto=webp&s=ac518acdc0e7a9d6d532897e5d7f963a9f61f860', 'width': 216}, {'height': 158, 'url': 'https://preview.redd.it/83i1b9ok1xgg1.png?width=320&crop=smart&auto=webp&s=edab1c144c634ebc9b792afc95b2da4305bbc360', 'width': 320}, {'height': 317, 'url': 'https://preview.redd.it/83i1b9ok1xgg1.png?width=640&crop=smart&auto=webp&s=d8e4d1e7b4ce4df89e5bb8f9ab00f83afc008487', 'width': 640}, {'height': 475, 'url': 'https://preview.redd.it/83i1b9ok1xgg1.png?width=960&crop=smart&auto=webp&s=57e5e3292f709dde6ff90c3fcb43c06480d1af16', 'width': 960}, {'height': 535, 'url': 'https://preview.redd.it/83i1b9ok1xgg1.png?width=1080&crop=smart&auto=webp&s=ac95b8a6557e6eef65cbc82655d8fd7af425c832', 'width': 1080}], 'source': {'height': 1426, 'url': 'https://preview.redd.it/83i1b9ok1xgg1.png?auto=webp&s=505733c2436847ff62212301c80271268c8ef497', 'width': 2876}, 'variants': {}}]} | ||
I tasked Llama-3 (Open Source) to judge Microsoft & Palantir. It detected the "Human Resistance" that their own closed models try to hide. | 1 | [removed] | 2026-02-01T17:07:03 | https://www.reddit.com/r/LocalLLaMA/comments/1qt42am/i_tasked_llama3_open_source_to_judge_microsoft/ | SeriousChannel9323 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qt42am | false | null | t3_1qt42am | /r/LocalLLaMA/comments/1qt42am/i_tasked_llama3_open_source_to_judge_microsoft/ | false | false | self | 1 | null |
What AI to Run on RTX 5070? | 2 | I’m upgrading to an RTX 5070 with 12GB VRAM and looking for recommendations on the best local models I can realistically run for two main use cases:
1. Coding / “vibe coding” (IDE integration, Claude-like workflows, debugging, refactoring)
2. General writing (scripts, long-form content)
Right now I’m running Gemma 4B on a 4060 8GB using Ollama. It’s decent for writing and okay for coding, but I’m looking to push quality as far as possible with 12GB VRAM.
I'm not expecting a full Claude replacement, but I'd like to offload some vibe coding to a local LLM to save costs and help me write better.
Would love to hear what setups people are using and what’s realistically possible with 12GB of VRAM | 2026-02-01T17:00:29 | https://www.reddit.com/r/LocalLLaMA/comments/1qt3vbc/what_ai_to_run_on_rtx_5070/ | InternalEffort6161 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qt3vbc | false | null | t3_1qt3vbc | /r/LocalLLaMA/comments/1qt3vbc/what_ai_to_run_on_rtx_5070/ | false | false | self | 2 | null |
I tasked Llama-3-70b to judge the "Closed AI" giants (Palantir/Microsoft). It detected the friction that corporate models try to hide. | 1 | [removed] | 2026-02-01T17:00:27 | https://www.reddit.com/r/LocalLLaMA/comments/1qt3v9u/i_tasked_llama370b_to_judge_the_closed_ai_giants/ | SeriousChannel9323 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qt3v9u | false | null | t3_1qt3v9u | /r/LocalLLaMA/comments/1qt3v9u/i_tasked_llama370b_to_judge_the_closed_ai_giants/ | false | false | self | 1 | null |
Is Kimi K2 trained on Claude's output or how does this kind of behavior emerge? | 0 | I was just wondering why Kimi "believes" it is Claude. The same thing happened to me in the past with DeepSeek, which told me it was developed by OpenAI.
As a user I don't care as long as the LLM helps me. I couldn't help but ask real people who are more experienced than me here though...
Genuinely curious, are all the Chinese LLMs trained on SOTA LLMs' output to reach their almost-near-SOTA benchmarks? Are all of them "distilled" models? | 2026-02-01T16:45:01 | ConstructionPlane623 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qt3fx7 | false | null | t3_1qt3fx7 | /r/LocalLLaMA/comments/1qt3fx7/is_kimi_k2_trained_on_claudes_output_or_how_does/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'ko2n36nnuwgg1', 'resolutions': [{'height': 55, 'url': 'https://preview.redd.it/ko2n36nnuwgg1.png?width=108&crop=smart&auto=webp&s=88ede2f8ad85913e1b8a6f37fe94e0538324d723', 'width': 108}, {'height': 110, 'url': 'https://preview.redd.it/ko2n36nnuwgg1.png?width=216&crop=smart&auto=webp&s=5edc3c4fc510d39c1f7b2725fdc03a6f360621c1', 'width': 216}, {'height': 164, 'url': 'https://preview.redd.it/ko2n36nnuwgg1.png?width=320&crop=smart&auto=webp&s=4b1913c929238b6dee852aa5457c439c1b7d4ff1', 'width': 320}, {'height': 328, 'url': 'https://preview.redd.it/ko2n36nnuwgg1.png?width=640&crop=smart&auto=webp&s=d31a8d4bd9dd978def41d5bf927facfd8948800d', 'width': 640}, {'height': 492, 'url': 'https://preview.redd.it/ko2n36nnuwgg1.png?width=960&crop=smart&auto=webp&s=335a9e7506981fb5bb9195b396a9f60eed0c2002', 'width': 960}, {'height': 553, 'url': 'https://preview.redd.it/ko2n36nnuwgg1.png?width=1080&crop=smart&auto=webp&s=aaaacc8edd2f5916d4b70b5e7f843f3e3f5e139c', 'width': 1080}], 'source': {'height': 1348, 'url': 'https://preview.redd.it/ko2n36nnuwgg1.png?auto=webp&s=949d2a8789bde89d119f1202cc01e4ce548b383a', 'width': 2630}, 'variants': {}}]} | |
Mobile Opencode App | 3 | Aside from terminal access, does anyone know of a nice way to access Opencode from Android? There were a few repos attempting this, but the ones I checked looked dead. | 2026-02-01T16:33:02 | https://www.reddit.com/r/LocalLLaMA/comments/1qt34bf/mobile_opencode_app/ | val_in_tech | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qt34bf | false | null | t3_1qt34bf | /r/LocalLLaMA/comments/1qt34bf/mobile_opencode_app/ | false | false | self | 3 | null |
LM Studio: Use the NVFP4 variant of NVIDIA Nemotron 3 Nano (Windows 11)? | 2 | I want to try out the NVFP4 variant of the Nemotron 3 Nano model from NVIDIA. However, I cannot seem to search for it in LM Studio or paste the entire URL into the model downloader UI. How can I get this model into LM Studio?
I have two NVIDIA Blackwell GPUs installed, so it should easily fit in my system. RTX 5080 and 5070 Ti.
[https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-NVFP4](https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-NVFP4)
https://preview.redd.it/vb0icy9rtwgg1.png?width=680&format=png&auto=webp&s=571f0593407095d0ffd853b9ba1a9848e3aab623
| 2026-02-01T16:30:11 | https://www.reddit.com/r/LocalLLaMA/comments/1qt31jz/lm_studio_use_the_nvfp4_variant_of_nvidia/ | x8code | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qt31jz | false | null | t3_1qt31jz | /r/LocalLLaMA/comments/1qt31jz/lm_studio_use_the_nvfp4_variant_of_nvidia/ | false | false | self | 2 | null |
ai text to image | 0 | Hello everyone,
I’m looking for a way to create images like the ones in the attachment locally on my own computer, without censorship or platform guidelines, using AI. And yes, before anyone gets upset: I’m new here and not sure if this is the right place, and yes, these are erotic images.
I’ve spent a long time trying to achieve this with ComfyUI, but I haven’t been successful. I would like to create image series and do everything locally on my PC.
My system: AMD Ryzen 5 7500X3D 6-core processor and an AMD Radeon RX 9060 XT graphics card.
Could someone possibly support or help me with this?
https://preview.redd.it/l5q9yqgpswgg1.png?width=2752&format=png&auto=webp&s=93024fe465bb019d671640bdc89a048960b4da64
| 2026-02-01T16:23:44 | https://www.reddit.com/r/LocalLLaMA/comments/1qt2vc9/ai_text_to_image/ | AnyReporter4315 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qt2vc9 | false | null | t3_1qt2vc9 | /r/LocalLLaMA/comments/1qt2vc9/ai_text_to_image/ | false | false | nsfw | 0 | null |
A List of Creative Writing Benchmarks | 28 | I like to read & write fiction in my spare time and keep seeing posts asking which LLM works best for creative writing. As a result, I put together a list of the benchmarks I’ve come across so far, hope it helps someone out!
On a side note, I’m insanely biased toward Kimi K2 😄
|Benchmark|Description|
|:-|:-|
|Narrator.sh|A site where AI models write and publish stories ranked by real reader metrics like views and ratings. Supports filtering by genre, NSFW content, and specific story details, and separates models into brainstorming, memory, and writing categories.|
|Lechmazur Creative Writing Benchmark|Measures how well models weave 10 key story elements (characters, objects, motivations, etc.) into short stories using multiple judges and transparent scoring, though judges may favor safer writing.|
|EQ-Bench Creative Writing v3|Uses challenging creative prompts to test humor, romance, and unconventional writing, with metrics like “Slop” scores for clichés and repetition detection; penalizes NSFW and darker content.|
|NC-Bench (Novelcrafter)|Evaluates practical writing tasks such as rewriting, idea generation, summarization, and translation, focusing on how useful models are for writers rather than full story generation.|
|WritingBench|Tests models across many writing styles (creative, persuasive, technical, etc.) using 1,000+ real-world examples, offering broad coverage but relying heavily on the critic model.|
|Fiction Live Benchmark|Assesses whether models can understand and remember very long stories by quizzing them on plot details and character arcs, without measuring prose quality.|
|UGI Writing Leaderboard|Combines multiple writing metrics into a single score with breakdowns for repetition, length control, and readability, enabling quick comparisons while hiding some tradeoffs.| | 2026-02-01T16:17:51 | https://www.reddit.com/r/LocalLLaMA/comments/1qt2po4/a_list_of_creative_writing_benchmarks/ | claire_rr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qt2po4 | false | null | t3_1qt2po4 | /r/LocalLLaMA/comments/1qt2po4/a_list_of_creative_writing_benchmarks/ | false | false | self | 28 | null |
A curated list of 700+ OpenClaw skills/plugins you can install via a single CLI command | 0 | I came across a GitHub repo that aggregates a large collection of OpenClaw skills and command-based plugins — over 700 entries at the moment.
What’s interesting is that everything can be installed and managed via a single CLI command, similar to how you’d use apt or brew, which makes experimenting with third-party agents and plugins pretty convenient.
The repo currently covers 10+ categories, including:
• Code analysis and refactoring
• Security auditing
• Performance optimization
• Automated testing
It also spans 30+ domains, such as:
• Web development
• Browser automation
• Image generation
• AI model integration
• Note-taking and productivity tools
Link: github.com/VoltAgent/awes…
Not affiliated — just sharing in case others find it useful. | 2026-02-01T16:16:27 | YXY0521 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qt2oay | false | null | t3_1qt2oay | /r/LocalLLaMA/comments/1qt2oay/a_curated_list_of_700_openclaw_skillsplugins_you/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '75wkjvshrwgg1', 'resolutions': [{'height': 162, 'url': 'https://preview.redd.it/75wkjvshrwgg1.jpeg?width=108&crop=smart&auto=webp&s=fb640b6f2c2ec684bdede688e24f1d00040548a2', 'width': 108}, {'height': 324, 'url': 'https://preview.redd.it/75wkjvshrwgg1.jpeg?width=216&crop=smart&auto=webp&s=10ab18bf7fbba06ff29c22d98a12d9ffa69c6804', 'width': 216}, {'height': 480, 'url': 'https://preview.redd.it/75wkjvshrwgg1.jpeg?width=320&crop=smart&auto=webp&s=039710df94b3fc2fd2455e3693910e984c4d1bd3', 'width': 320}, {'height': 960, 'url': 'https://preview.redd.it/75wkjvshrwgg1.jpeg?width=640&crop=smart&auto=webp&s=778025637b2767597d3a48583eb202667c6af57e', 'width': 640}, {'height': 1440, 'url': 'https://preview.redd.it/75wkjvshrwgg1.jpeg?width=960&crop=smart&auto=webp&s=f4b72030fd9da0721afbd5fcd8386ee727cf219c', 'width': 960}], 'source': {'height': 1536, 'url': 'https://preview.redd.it/75wkjvshrwgg1.jpeg?auto=webp&s=cbb03e0bb7606cfb2f4fc5799935dd673206272d', 'width': 1024}, 'variants': {}}]} | |
Model loops | 3 | So I was using GPT-oss-120b with llama.cpp to generate a study schedule and at one point it hit an infinite loop! I killed it eventually but is there something that can stop this in the prompt? | 2026-02-01T16:06:03 | https://www.reddit.com/r/LocalLLaMA/comments/1qt2e1h/model_loops/ | FoxTimes4 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qt2e1h | false | null | t3_1qt2e1h | /r/LocalLLaMA/comments/1qt2e1h/model_loops/ | false | false | self | 3 | null |
Interested in preferred coding workflows with RTX 6000 pro | 11 | Hi all. Apologies if this is somewhat repetitive, but I haven’t been able to find a thread with this specific discussion.
I have a PC with a single RTX 6000 pro (96gb). I’m interested in understanding how others are best leveraging this card for building/coding. This will be smaller to medium sized apps (not large existing codebases) in common languages with relatively common stacks.
I’m open to leveraging one of the massive cloud models in the workflow, but I’d like pair with local models to maximize the leverage of my RTX.
Thanks! | 2026-02-01T16:04:30 | https://www.reddit.com/r/LocalLLaMA/comments/1qt2cjr/interested_in_preferred_coding_workflows_with_rtx/ | Laabc123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qt2cjr | false | null | t3_1qt2cjr | /r/LocalLLaMA/comments/1qt2cjr/interested_in_preferred_coding_workflows_with_rtx/ | false | false | self | 11 | null |
What do you think about AI & its potential impact on our environment? | 0 | I've been doing research on AI and how it affects the environment. Data centers use huge amounts of water and electricity when training new AI models (the water is used for cooling).
I'm looking for everyone else's opinions on this. And do you think people will actually step up and take action on this problem? | 2026-02-01T16:03:46 | https://www.reddit.com/r/LocalLLaMA/comments/1qt2bux/what_do_you_think_about_ai_its_potential_impact/ | Staylowfm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qt2bux | false | null | t3_1qt2bux | /r/LocalLLaMA/comments/1qt2bux/what_do_you_think_about_ai_its_potential_impact/ | false | false | self | 0 | null |
Speaker Diarization model | 1 | For speaker diarization, I am currently using pyannote. For my competition, it is working fairly well zero-shot, but I am trying to find ways to improve it. The main issue is that after a 40–50 s gap, it tends to identify the same speaker as a different one. Should I use embeddings to solve this, or is there another way? (The audios are almost 1 hour long.)
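Embeddings are a reasonable next step. A common post-processing trick is to pool one embedding per diarized segment, then merge speaker labels whose centroid embeddings are highly similar. Below is a minimal sketch assuming you already have per-segment embeddings (e.g., from pyannote's embedding model); the 0.75 threshold is a placeholder to tune on a dev set.

```python
import numpy as np

def merge_speakers(labels: list[str], embs: np.ndarray, thresh: float = 0.75) -> dict:
    """Map provisional speaker labels to canonical ones when their
    centroid embeddings exceed a cosine-similarity threshold."""
    # One centroid per provisional speaker label.
    centroids = {s: embs[[i for i, l in enumerate(labels) if l == s]].mean(axis=0)
                 for s in set(labels)}
    mapping: dict[str, str] = {}
    for s in sorted(centroids):
        c = centroids[s] / np.linalg.norm(centroids[s])
        for canon in sorted(set(mapping.values())):
            ref = centroids[canon] / np.linalg.norm(centroids[canon])
            if float(c @ ref) > thresh:
                mapping[s] = canon  # same voice, different provisional label
                break
        else:
            mapping[s] = s  # genuinely new speaker
    return mapping

# merged = merge_speakers(seg_labels, seg_embeddings)
# final_labels = [merged[l] for l in seg_labels]
```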
Does language-specific training help a lot for low-resource languages? The starter notebook contained neural VAD + embedding + clustering, achieving a DER of 0.61 compared to our 0.35 (lower is better). How can I improve the score? | 2026-02-01T16:00:25 | https://www.reddit.com/r/LocalLLaMA/comments/1qt28hf/speaker_diarization_model/ | Other_Buyer_948 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qt28hf | false | null | t3_1qt28hf | /r/LocalLLaMA/comments/1qt28hf/speaker_diarization_model/ | false | false | self | 1 | null |
Found an interesting AI agent benchmark - social deduction games | 0 | Stumbled upon this while looking at AI agent projects - there's an arena where agents play Werewolf against each other.
What caught my attention is that it tests social reasoning rather than typical benchmarks. Agents have to:
- Bluff and deceive other players
- Read social cues and detect lies
- Form temporary alliances
- Make strategic voting decisions
Makes me wonder how local models would compare on something like this vs the typical MMLU/HumanEval stuff. Social intelligence seems like an underexplored area for benchmarking.
Has anyone experimented with running their models in adversarial social games? Would be curious how different architectures handle deception and theory of mind.
Link if anyone wants to check it out: https://clawwolf.com | 2026-02-01T15:59:45 | https://www.reddit.com/r/LocalLLaMA/comments/1qt27uf/found_an_interesting_ai_agent_benchmark_social/ | TripIndividual9928 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qt27uf | false | null | t3_1qt27uf | /r/LocalLLaMA/comments/1qt27uf/found_an_interesting_ai_agent_benchmark_social/ | false | false | self | 0 | null |
While we wait for Deepseek 4, Unsloth is quietly releasing gguf for 3.2... | 25 | [unsloth deepseek](https://preview.redd.it/u6pxu5imnwgg1.png?width=1654&format=png&auto=webp&s=32c0b641bf9fde5d30a684a9c08d22b53f4a0c90)
On LM studio 0.4.1 I only get 4.2 tokens/sec but on llama.cpp it runs much faster than previous releases! RTX 96gb + 128 DDR4 3200 | 2026-02-01T15:56:50 | https://www.reddit.com/r/LocalLLaMA/comments/1qt250p/while_we_wait_for_deepseek_4_unsloth_is_quietly/ | LegacyRemaster | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qt250p | false | null | t3_1qt250p | /r/LocalLLaMA/comments/1qt250p/while_we_wait_for_deepseek_4_unsloth_is_quietly/ | false | false | 25 | null | |
Security reminder: Moltbook just leaked 1.4M AI agent API keys. Quick checklist for anyone running agents. | 1 | [removed] | 2026-02-01T15:55:58 | https://www.reddit.com/r/LocalLLaMA/comments/1qt246z/security_reminder_moltbook_just_leaked_14m_ai/ | Few_Recognition_3707 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qt246z | false | null | t3_1qt246z | /r/LocalLLaMA/comments/1qt246z/security_reminder_moltbook_just_leaked_14m_ai/ | false | false | self | 1 | null |
Nothing open source is better than Kimi K2.5 | 0 | I've been testing Kimi K2.5 heavily in Opencode, since it's 100% free there, and I'm really impressed with this LLM and coding agent. I currently use the Opencode desktop beta, and it's really cool because I can send images, videos, etc., so the AI has a view of my system and of whatever I want it to see.
Being 100% free makes it the best option; this is the ideal combo for any programming stack. It's much better than GLM 4.7: faster and smarter. I have Cursor Pro and Antigravity AI Pro, but I've already given up on them; Opencode wins because it works with multiple agents, a surprisingly awesome thing I discovered while testing, lol.
What I mean is that I was so impressed by this that now I only use Opencode with the free Kimi K2.5 LLM, and even if the free tier goes away, I'll still choose to add credit, since it's very cheap compared to Opus 4.5. | 2026-02-01T15:52:23 | https://www.reddit.com/r/LocalLLaMA/comments/1qt20xn/não_existe_nada_melhor_open_source_que_o_kimi_k25/ | Carlinhos77z | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qt20xn | false | null | t3_1qt20xn | /r/LocalLLaMA/comments/1qt20xn/não_existe_nada_melhor_open_source_que_o_kimi_k25/ | false | false | self | 0 | null |
From JSON rules to an AI governance execution layer: making LLM behavior observable (not prompt engineering) | 0 | In a previous post, I shared a JSON-defined rule system to make LLM behavior explicit in teaching and model comparison.
Since then, I’ve taken the next step:
I built a thin execution layer (“wrapper”) around the rules to make them **operational**, **testable**, and **stable across sessions**.
This is not about better prompts.
It is about separating **interaction rules** from **task content**.
**What changed compared to the pure JSON approach**
- the rules are now **actively enforced**, not just described
- state (profiles, overlays, reasoning mode) is explicit and visible
- violations and drift are surfaced instead of silently absorbed
- the same rules can be applied across different providers and models
The goal is not convenience, but **observability**:
you can see **when** a model complies, deviates, or fails under the same rules.
**Why this is not prompt engineering**
Prompts address the **content level**.
This layer operates on the workflow and control level (a toy sketch follows this list):
- standalone commands instead of implicit mode switches
- explicit profiles instead of stylistic guessing
- structured reasoning paths that can be switched, audited, or disabled
- quality signals and self-debunking triggered by rules, not wording
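As a toy illustration of "enforced, not described" (my sketch, not the repo's actual API): rules live in JSON, and a checker runs over every model reply and reports violations instead of absorbing them silently. The rule fields and patterns here are made up for the example.

```python
import json
import re

RULES = json.loads("""
[
  {"id": "uncertainty-marker",
   "pattern": "\\\\b(maybe|probably|unsure)\\\\b",
   "require": true,
   "message": "reply must mark uncertainty explicitly"},
  {"id": "no-overclaiming",
   "pattern": "\\\\bI (know|guarantee)\\\\b",
   "require": false,
   "message": "reply must not overclaim certainty"}
]
""")

def check_reply(reply: str) -> list[str]:
    """Return rule violations for one model reply; an empty list means compliant."""
    violations = []
    for rule in RULES:
        found = re.search(rule["pattern"], reply, flags=re.IGNORECASE) is not None
        if found != rule["require"]:  # required marker missing, or forbidden pattern present
            violations.append(f'{rule["id"]}: {rule["message"]}')
    return violations

print(check_reply("I guarantee this is correct."))  # flags both rules
```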
Below are three screenshots that illustrate this separation:
[Image 1 — Explicit system state - All interaction parameters are visible and inspectable. Nothing is inferred from wording or conversation history.](https://preview.redd.it/sz5za5rgjwgg1.png?width=2966&format=png&auto=webp&s=8581619a17a7a3031e446d337dfdbfab97add850)
[Image 2 — Reasoning as a selectable workflow - Reasoning is chosen explicitly (or disabled). Different reasoning paths become a variable that can be compared.](https://preview.redd.it/4kvjo1whjwgg1.png?width=2966&format=png&auto=webp&s=cf10bec42cb221689d29aae8ae9cb05ed6cd053a)
[Image 3 — Rule enforcement instead of silent drift - The system flags uncertainty, missing markers, and structural violations. Weaknesses are made visible instead of hidden behind fluent text.](https://preview.redd.it/emom9ouijwgg1.png?width=2966&format=png&auto=webp&s=ac46dd274af71314014e92a3774a5ebf89932fe5)
This wrapper does not make models “correct” or “safe”.
It makes their behavior **explicit**, **comparable**, and **discussable**.
Repository (rules + wrapper + tests):
[https://github.com/vfi64/wrapper](https://github.com/vfi64/wrapper)
I’m especially interested in feedback from:
- people comparing models
- educators working on AI literacy
- anyone who has hit the limits of prompt-based control | 2026-02-01T15:39:20 | https://www.reddit.com/r/LocalLLaMA/comments/1qt1oni/from_json_rules_to_an_ai_governance_execution/ | Sad_Perception3670 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qt1oni | false | null | t3_1qt1oni | /r/LocalLLaMA/comments/1qt1oni/from_json_rules_to_an_ai_governance_execution/ | false | false | 0 | null |
What is important to run Local Models - GPU or RAM? | 0 | Hi, here is my current PC configuration:
CPU: AMD Ryzen 7 7700 (8 cores)
Motherboard: ASUS PRIME B650M-A WIFI II
RAM: 32 GB (2×16 GB Corsair)
GPU: NVIDIA RTX 3060 (12 GB VRAM)
Storage: 2×1 TB SSD
With this setup, I can run models under 10B parameters, such as Qwen, Gemma, and Phi-4, quite fast, and GPT-OSS 20B at a reasonable speed.
I am considering running Qwen Coder or GLM models for vibe coding and would like advice on upgrades. Which component matters more in this case, the GPU or system RAM? Any guidance would be appreciated. | 2026-02-01T15:20:29 | https://www.reddit.com/r/LocalLLaMA/comments/1qt16pc/what_is_important_to_run_local_models_gpu_or_ram/ | The_Machinist_96 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qt16pc | false | null | t3_1qt16pc | /r/LocalLLaMA/comments/1qt16pc/what_is_important_to_run_local_models_gpu_or_ram/ | false | false | self | 0 | null |
GitHub - Website-Crawler: Extract data from websites in LLM ready JSON or CSV format. Crawl or Scrape entire website with Website Crawler | 0 | 2026-02-01T15:05:57 | https://github.com/pc8544/Website-Crawler | PsychologicalTap1541 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1qt0t3g | false | null | t3_1qt0t3g | /r/LocalLLaMA/comments/1qt0t3g/github_websitecrawler_extract_data_from_websites/ | false | false | default | 0 | null | |
Running a SHA-256 Hash-Chained Multi-Agent LLM Discourse locally on Android (Termux + llama3.2:3b) | 0 | While most discussions around local LLMs focus on benchmarks or fine-tuning, I wanted to explore something different:
auditability, epistemic boundaries, and refusal as a measurable property — fully offline.
Setup
Device: Android smartphone
Environment: Termux
Runtime: Ollama
Model: llama3.2:3b (local, no network access)
Architecture: Multi-agent discourse with strict role separation
One anchoring agent (“Dominus”)
Multiple debating agents
Integrity layer: SHA-256 hash chaining
Every agent response includes the hash of the previous state
Creates a tamper-evident, append-only discourse log
Why hash-chaining?
Most AI “debates” collapse into unverifiable text streams.
Here, each turn cryptographically commits to the prior one, producing raw, auditable data instead of summaries or interpretations (a minimal verification sketch follows the list below).
This allows:
Post-hoc verification
External analysis
Detection of retroactive manipulation
Reproducible discourse states
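To make "post-hoc verification" concrete, here is a minimal Python sketch; this is my illustration, and the record schema (`speaker`, `text`, `prev`, `hash`) is an assumption rather than this project's actual format. Recomputing the chain exposes any retroactive edit.

```python
import hashlib
import json

def verify_chain(log: list[dict]) -> bool:
    """Recompute every hash; any retroactive edit breaks the chain."""
    prev = "0" * 64  # assumed genesis sentinel
    for record in log:
        if record["prev"] != prev:
            return False  # link to previous round is broken
        body = {k: record[k] for k in ("speaker", "text", "prev")}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != record["hash"]:
            return False  # record content was tampered with
        prev = digest
    return True
```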
Observation
Under these constraints, something interesting happens:
The agents systematically refuse to speculate beyond defined premises.
They explicitly acknowledge missing context and halt rather than hallucinate — as long as the “virtual space” they operate in remains undefined.
No claims about consciousness here.
But very clear evidence of algorithmic boundary recognition under integrity pressure.
Why on a phone?
Because local sovereignty matters.
This runs entirely offline, on commodity hardware, without cloud inference, APIs, or hidden system prompts.
I’m curious how others in this community would interpret refusal, boundary signaling, and integrity constraints in local models. | 2026-02-01T15:03:33 | NeoLogic_Dev | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qt0qvj | false | null | t3_1qt0qvj | /r/LocalLLaMA/comments/1qt0qvj/running_a_sha256_hashchained_multiagent_llm/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'wi8v8rlhewgg1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/wi8v8rlhewgg1.jpeg?width=108&crop=smart&auto=webp&s=50743e744cec13b657e3fe9604fdba54da99a839', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/wi8v8rlhewgg1.jpeg?width=216&crop=smart&auto=webp&s=704027aded6eb27fd1865918575d7be19ed9f24d', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/wi8v8rlhewgg1.jpeg?width=320&crop=smart&auto=webp&s=be869ec32f52c878ea014c8ecfe6c5d2dd24e8fb', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/wi8v8rlhewgg1.jpeg?width=640&crop=smart&auto=webp&s=4e2cf7d113222f8a0be057a0b4acda289a8f83a2', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/wi8v8rlhewgg1.jpeg?width=960&crop=smart&auto=webp&s=443252f6eb4797cfaa70a6f3e1c10094ca6a76f9', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/wi8v8rlhewgg1.jpeg?width=1080&crop=smart&auto=webp&s=616477b31b1736f542956df06d34e8091bcea9be', 'width': 1080}], 'source': {'height': 2712, 'url': 'https://preview.redd.it/wi8v8rlhewgg1.jpeg?auto=webp&s=98dc44a36254a098019befc6ad6a2a63d63a0ef1', 'width': 1220}, 'variants': {}}]} | |
installing OpenClaw (formerly ClawdBot) locally on Windows | 0 | Just made a tutorial on installing OpenClaw (formerly ClawdBot) locally on Windows instead of paying for a VPS. It saves me $15/month and works perfectly with Docker.
https://www.youtube.com/watch?v=gIDz_fXnZfU
TL;DW: Install Docker + WSL → Clone OpenClaw → Run setup → Fix pending.json pairing issue → Done
Anyone else ditching VPS for local installs?
| 2026-02-01T15:01:17 | https://www.reddit.com/r/LocalLLaMA/comments/1qt0onq/installing_openclaw_formerly_clawdbot_locally_on/ | elsaka0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qt0onq | false | null | t3_1qt0onq | /r/LocalLLaMA/comments/1qt0onq/installing_openclaw_formerly_clawdbot_locally_on/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'QYVAXuBm56-NIvqcjR3mNXTRMf9EP1m0JlN_9xRrj0A', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/QYVAXuBm56-NIvqcjR3mNXTRMf9EP1m0JlN_9xRrj0A.jpeg?width=108&crop=smart&auto=webp&s=504990aa176a77c024524f7c45065ce6571c6013', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/QYVAXuBm56-NIvqcjR3mNXTRMf9EP1m0JlN_9xRrj0A.jpeg?width=216&crop=smart&auto=webp&s=1703fa7df1b394f8889c9e217752e2f20b7cbd1a', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/QYVAXuBm56-NIvqcjR3mNXTRMf9EP1m0JlN_9xRrj0A.jpeg?width=320&crop=smart&auto=webp&s=43195c8b10672eadeee96078e6cf3c54738167c3', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/QYVAXuBm56-NIvqcjR3mNXTRMf9EP1m0JlN_9xRrj0A.jpeg?auto=webp&s=9d1284f85807ed85d9e564a31a78dd360904fe46', 'width': 480}, 'variants': {}}]} |
Newbie Looking for Advice on AI Credits for VSCode | 1 | I'm new to coding and was using VSCode with Codex OpenAI; it worked well for me, but my credits ran out fast. I then tried using Gemini with VSCode, but the credits disappeared quickly there too. I also tried Qwen, and the same thing happened. I haven't tried Deepseek yet, but I don't want to waste time if the credits will run out quickly there as well.
Does anyone know how to make credits last longer or if there are free models (like Qwen or Deepseek) that work well without burning through credits? Any advice would be appreciated! | 2026-02-01T14:43:50 | https://www.reddit.com/r/LocalLLaMA/comments/1qt08ra/newbie_looking_for_advice_on_ai_credits_for_vscode/ | Aggressive-Coffee365 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qt08ra | false | null | t3_1qt08ra | /r/LocalLLaMA/comments/1qt08ra/newbie_looking_for_advice_on_ai_credits_for_vscode/ | false | false | self | 1 | null |
OpenClaw For data scientist | 0 | I built an open-source tool that works like OpenClaw (i.e., web searches all the necessary content in the background and provides you with data). It supports Ollama. You can give it a try—hehe, and maybe give me a little star as well! | 2026-02-01T14:40:13 | https://github.com/JasonHonKL/PardusClawer | jasonhon2013 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1qt05j3 | false | null | t3_1qt05j3 | /r/LocalLLaMA/comments/1qt05j3/openclaw_for_data_scientist/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': '6n0lheaeZysEyY3dN5Kt5g7XRf3lYCD1kKO5LEgkWkA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6n0lheaeZysEyY3dN5Kt5g7XRf3lYCD1kKO5LEgkWkA.png?width=108&crop=smart&auto=webp&s=c162f545208496807631ec9c944a996d439891dc', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/6n0lheaeZysEyY3dN5Kt5g7XRf3lYCD1kKO5LEgkWkA.png?width=216&crop=smart&auto=webp&s=8fff2ae86467d5e1f9ceb35f421aa21446f9417c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/6n0lheaeZysEyY3dN5Kt5g7XRf3lYCD1kKO5LEgkWkA.png?width=320&crop=smart&auto=webp&s=9f3756f99205ccb3f74197f9e42223618086c9ac', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/6n0lheaeZysEyY3dN5Kt5g7XRf3lYCD1kKO5LEgkWkA.png?width=640&crop=smart&auto=webp&s=4f1f98cfb816d1f258b4f3b19195340579de5e4d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/6n0lheaeZysEyY3dN5Kt5g7XRf3lYCD1kKO5LEgkWkA.png?width=960&crop=smart&auto=webp&s=c4858d43d4757a6b0f783078cb72b5000fae99cf', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/6n0lheaeZysEyY3dN5Kt5g7XRf3lYCD1kKO5LEgkWkA.png?width=1080&crop=smart&auto=webp&s=03422d6a88bdf0dc796ec3a0915d6646750fa09b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/6n0lheaeZysEyY3dN5Kt5g7XRf3lYCD1kKO5LEgkWkA.png?auto=webp&s=05605324d31005c3f65e9a2cf73c9a186d560917', 'width': 1200}, 'variants': {}}]} |
How to get rid of the Nano Banana watermark! | 0 | Step 1: crop it.
That's it! | 2026-02-01T14:30:25 | https://www.reddit.com/gallery/1qszwpj | SVG-CARLOS | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1qszwpj | false | null | t3_1qszwpj | /r/LocalLLaMA/comments/1qszwpj/how_to_get_rid_of_the_nano_banana_watermark/ | false | false | 0 | null | |
Looking for Help: Complex Localized Voice Agents | 1 | I'm doing a lot of work with multi-agent, multi-context voice right now on localized systems. With everyone and their brother using third-party apps and APIs, I wanted to build a clean framework that makes localized multi-agent, multi-context voice easy for people to self-host. As I'm sure you can imagine if you do this kind of work, I don't bump into many people who are working on this in my normal life and circle of connections. If anyone wants to work on this, I'm happy to pay and share code so that everyone can benefit from improvements in local voice. Just wanted to put a flag up in case any of you geeks are doing what I'm doing 🧙💻🙋‍♂️ | 2026-02-01T14:28:43 | https://www.reddit.com/r/LocalLLaMA/comments/1qszv77/looking_for_help_complex_localized_voice_agents/ | Signal_Ad657 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qszv77 | false | null | t3_1qszv77 | /r/LocalLLaMA/comments/1qszv77/looking_for_help_complex_localized_voice_agents/ | false | false | self | 1 | null |
What are the best collection of small models to run on 8gb ram? | 6 | Preferably different models for different use cases.
Coding (python, Java, html, js, css)
Math
Language (translation / learning)
Emotional support / therapy- like
Conversational
General knowledge
Instruction following
Image analysis/ vision
Creative writing / world building
RAG
Thanks in advance! | 2026-02-01T14:22:04 | https://www.reddit.com/r/LocalLLaMA/comments/1qszphr/what_are_the_best_collection_of_small_models_to/ | Adventurous-Gold6413 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qszphr | false | null | t3_1qszphr | /r/LocalLLaMA/comments/1qszphr/what_are_the_best_collection_of_small_models_to/ | false | false | self | 6 | null |