Dataset columns (one row per post): title (string) | score (int) | selftext (string) | created (timestamp) | url (string) | author (string) | domain (string) | edited (timestamp) | gilded (int) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int) | preview (string)
How to get my Local LLM to work better with OpenCode (Ez button appreciated :) ) | 2 | TLDR: how do I get OpenCode to talk better to my local LLM (Qwen3 32B on Ollama)
I have a gaming rig that I don't use, so today I set up Ollama and served it on my local network for my laptop to use, THEN hit that API call and man was that cool, until I realized that [OpenCode](https://www.linkedin.com/company/opencode-co/) (at least my version) is not optimized. I feel like their Zen platform is probably some middleware or configuration that helps significantly with how the inference is being served up. Have no clue, anybody further down the local LLM rabbit hole who has created or used some other tools? | 2025-12-23T20:31:08 | https://www.reddit.com/r/LocalLLaMA/comments/1pu4a1u/how_to_get_my_local_llm_to_work_better_with/ | elrosegod | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pu4a1u | false | null | t3_1pu4a1u | /r/LocalLLaMA/comments/1pu4a1u/how_to_get_my_local_llm_to_work_better_with/ | false | false | self | 2 | null |
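(For anyone sanity-checking the same setup: Ollama exposes an OpenAI-compatible endpoint at `/v1` on port 11434, so the rig can be tested with any OpenAI-style client before blaming the coding tool. A minimal sketch; the LAN IP and the `qwen3:32b` tag are assumptions, substitute whatever `ollama list` shows.)

```python
# Minimal check of a remote Ollama server via its OpenAI-compatible API.
# Assumptions: the gaming rig is reachable at 192.168.1.50 and serves "qwen3:32b".
from openai import OpenAI

client = OpenAI(
    base_url="http://192.168.1.50:11434/v1",  # Ollama's OpenAI-compatible route
    api_key="ollama",                         # any non-empty string works
)

resp = client.chat.completions.create(
    model="qwen3:32b",
    messages=[{"role": "user", "content": "Say hi in one short sentence."}],
)
print(resp.choices[0].message.content)
```

If this responds quickly but the agent feels slow, the difference is often in how much context the tool sends per request rather than in the serving layer itself.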
New Project | 0 |
🔗 [https://github.com/Januka19/whisper-local-transcriber](https://github.com/Januka19/whisper-local-transcriber?utm_source=chatgpt.com) | 2025-12-23T20:30:07 | https://www.reddit.com/r/LocalLLaMA/comments/1pu494f/new_project/ | Januka208338475 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pu494f | false | null | t3_1pu494f | /r/LocalLLaMA/comments/1pu494f/new_project/ | false | false | self | 0 | null |
Teaching AI Agents Like Students (Blog + Open source tool) | 2 | **TL;DR:**
Vertical AI agents often struggle because domain knowledge is tacit and hard to encode via static system prompts or raw document retrieval.
What if we instead treat agents like students: human experts teach them through iterative, interactive chats, while the agent distills rules, definitions, and heuristics into a continuously improving knowledge base.
I built an open-source tool, [Socratic](https://github.com/kevins981/Socratic), to test this idea and show concrete accuracy improvements.
Full blog post: [https://kevins981.github.io/blogs/teachagent\_part1.html](https://kevins981.github.io/blogs/teachagent_part1.html)
Github repo (with local model support of course): [https://github.com/kevins981/Socratic](https://github.com/kevins981/Socratic)
3-min demo: [https://youtu.be/XbFG7U0fpSU?si=6yuMu5a2TW1oToEQ](https://youtu.be/XbFG7U0fpSU?si=6yuMu5a2TW1oToEQ)
Any feedback is appreciated!
Thanks! | 2025-12-23T20:25:15 | https://www.reddit.com/r/LocalLLaMA/comments/1pu452k/teaching_ai_agents_like_students_blog_open_source/ | Unable-Living-3506 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pu452k | false | null | t3_1pu452k | /r/LocalLLaMA/comments/1pu452k/teaching_ai_agents_like_students_blog_open_source/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'rj9-Mk1U12zh84O9uzwKOH6-ZjoDbBS1Ab3AQBy4ILI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rj9-Mk1U12zh84O9uzwKOH6-ZjoDbBS1Ab3AQBy4ILI.png?width=108&crop=smart&auto=webp&s=e7e53c9c4156b3d32b541d78a2a3dabbf42ce974', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/rj9-Mk1U12zh84O9uzwKOH6-ZjoDbBS1Ab3AQBy4ILI.png?width=216&crop=smart&auto=webp&s=612bdea9aaea98a2dfe73e97f1092f164b93c0db', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/rj9-Mk1U12zh84O9uzwKOH6-ZjoDbBS1Ab3AQBy4ILI.png?width=320&crop=smart&auto=webp&s=f041d30dbac3981cfe5b69928f03436259f62fd6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/rj9-Mk1U12zh84O9uzwKOH6-ZjoDbBS1Ab3AQBy4ILI.png?width=640&crop=smart&auto=webp&s=35df9ba8f149f0f772ebc5cd4a0849c6f4fdcb78', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/rj9-Mk1U12zh84O9uzwKOH6-ZjoDbBS1Ab3AQBy4ILI.png?width=960&crop=smart&auto=webp&s=06ac94ba9eb4187d75ac5053136cba47892f2476', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/rj9-Mk1U12zh84O9uzwKOH6-ZjoDbBS1Ab3AQBy4ILI.png?width=1080&crop=smart&auto=webp&s=4f027ffe189051ba90f0655f47cb75d84f9e8a6a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/rj9-Mk1U12zh84O9uzwKOH6-ZjoDbBS1Ab3AQBy4ILI.png?auto=webp&s=28efb89758ac0ecda3601797a8ada385b8b4163c', 'width': 1200}, 'variants': {}}]} |
🤯Weird - India’s Top AI Talent Celebrating New Year Together 🎉 | 0 | If you’re in India 🇮🇳, and don’t want to miss the craziest AI nerds party of the year then do check this:
https://www.linkedin.com/posts/india-top-1\\\_newyear-ai-talent-activity-7407772844677427200-SYsF | 2025-12-23T20:25:11 | Ambitious-End1261 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pu450p | false | null | t3_1pu450p | /r/LocalLLaMA/comments/1pu450p/weird_indias_top_ai_talent_celebrating_new_year/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '8c2rb4qej09g1', 'resolutions': [{'height': 162, 'url': 'https://preview.redd.it/8c2rb4qej09g1.jpeg?width=108&crop=smart&auto=webp&s=74d5201e9a464a7904cbd4cc253b619ae789eea8', 'width': 108}, {'height': 324, 'url': 'https://preview.redd.it/8c2rb4qej09g1.jpeg?width=216&crop=smart&auto=webp&s=3cffbe26a5daa8131b36a36464a36b96093da979', 'width': 216}, {'height': 480, 'url': 'https://preview.redd.it/8c2rb4qej09g1.jpeg?width=320&crop=smart&auto=webp&s=68f13521497c829a8cff00a05d8f6aeb39b7c2bd', 'width': 320}, {'height': 960, 'url': 'https://preview.redd.it/8c2rb4qej09g1.jpeg?width=640&crop=smart&auto=webp&s=bd76d90a2cec510c26c659cb14285019ef816a52', 'width': 640}, {'height': 1440, 'url': 'https://preview.redd.it/8c2rb4qej09g1.jpeg?width=960&crop=smart&auto=webp&s=386dab2d36675531d3940ea28a95b70faadf820f', 'width': 960}], 'source': {'height': 1536, 'url': 'https://preview.redd.it/8c2rb4qej09g1.jpeg?auto=webp&s=ea4dafdd3d948cb3627be93ee8a31b5e90400402', 'width': 1024}, 'variants': {}}]} | |
what personal tasks do you actually use fine-tuning for? | 2 | i have an m3 ultra with 96GB and keep reading about fine-tuning local models, but i can't figure out where it would actually help in my daily life
i already pay for Claude and it handles most complex tasks fine. i get that fine-tuning won't make a 7B model smarter, because it's more about format, style, and specific patterns
the only clear win i see so far is saving money on high-volume repetitive tasks where you'd burn through API costs. makes sense for corporate stuff like classifying thousands of tickets daily
but for personal use... where did fine-tuning actually work better than just a well-crafted prompt or custom instructions in popular models?
not "theoretically you could...", I'm looking for real examples where you tried both approaches and fine-tuning won. what was the task, and why couldn't a good prompt do the same thing? thanks a lot | 2025-12-23T20:19:34 | https://www.reddit.com/r/LocalLLaMA/comments/1pu407g/what_personal_tasks_do_you_actually_use/ | Appropriate_Car_5599 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pu407g | false | null | t3_1pu407g | /r/LocalLLaMA/comments/1pu407g/what_personal_tasks_do_you_actually_use/ | false | false | self | 2 | null |
Merry Christmas from the dual 3090 Club! Favorite llama.cpp Parameters and Model Swap | 1 | [removed] | 2025-12-23T20:16:50 | https://www.reddit.com/gallery/1pu3xuw | RedKnightRG | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1pu3xuw | false | null | t3_1pu3xuw | /r/LocalLLaMA/comments/1pu3xuw/merry_christmas_from_the_dual_3090_club_favorite/ | false | false | 1 | null | |
nvidia p2p - not possible on all mobos? | 4 | I got this fine specimen (Asrock ROMED8-2T) for the 7 x PCIE 4.0 slots. I didn't realise it would be impossible to enable p2p because each slot sits behind its own root complex?
Is there any alternative to buying yet more hardware to get around this? | 2025-12-23T20:11:41 | https://www.reddit.com/r/LocalLLaMA/comments/1pu3t9y/nvidia_p2p_not_possible_on_all_mobos/ | Aggressive-Bother470 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pu3t9y | false | null | t3_1pu3t9y | /r/LocalLLaMA/comments/1pu3t9y/nvidia_p2p_not_possible_on_all_mobos/ | false | false | self | 4 | null |
Merry Christmas from the Dual 3090 Club; Llama.ccp Parameters and Favorite Model Swap | 1 | [removed] | 2025-12-23T20:10:57 | https://www.reddit.com/gallery/1pu3smx | RedKnightRG | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1pu3smx | false | null | t3_1pu3smx | /r/LocalLLaMA/comments/1pu3smx/merry_christmas_from_the_dual_3090_club_llamaccp/ | false | false | 1 | null | |
Best small model for code review | 0 | Looking for folks' opinions on actual usage. What are the best 14B-or-smaller models that can do agentic code review? Think stuff like exploring a codebase and pointing out potential bugs. I've been using qwen30b which is great, but q4 seems a bit dumb, and I'm trying to deploy on TPUs which only have 16GB VRAM. I've had luck with qwen2.5 14b in the past, but wondering what everyone's favorite modern LLM is at this size | 2025-12-23T19:49:55 | https://www.reddit.com/r/LocalLLaMA/comments/1pu39tk/best_small_model_for_code_review/ | thepetek | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pu39tk | false | null | t3_1pu39tk | /r/LocalLLaMA/comments/1pu39tk/best_small_model_for_code_review/ | false | false | self | 0 | null |
WaveHelix - Would love some input on a side project I've been working on | 0 | Quick Context: I am a backend engineer working mostly on Spring microservices and the like, so I am very out of my depth in ML research. I've played around with training local models and building some toy transformers, but if I'm being honest that is the extent of my ML "background"
Now I've been experimenting with a few different ideas which I'd love some input on. Right now I'm calling it WaveHelix, and the idea is to see if a system can learn a simple "world model" without gradient descent. I have made a toy physics environment and, instead of backprop, I run several candidate rollouts, score each one using (1) prediction error (RMSE) and (2) an internal 'energy' term that penalizes jittery/unstable motion (and a few other things), then I blend the main model toward the best rollout. I also add a small 'curl' term that encourages exploration/avoids getting stuck in the same attachment pattern.
Curl: a small “rotational push” term I add to updates so the system doesn’t just settle straight into a local minimum, inspired by electromagnetism where curl describes circulating fields (a built-in tendency to keep motion/flow rather than freezing).
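For concreteness, here is a minimal sketch of the rotation itself (Rodrigues' formula, shown in 3-D; the `ROTATE_AROUND_AXIS` helper in the pseudo-code further down follows the same idea, except the angle there is scaled by stuckness/arousal and the latents are higher-dimensional, so a specific 2-D plane has to be chosen):

```python
# Rodrigues' rotation: rotate v around unit axis k by angle theta (radians).
# Sketch only - the real update scales theta by wrongStreak/arousal, and in a
# high-dimensional latent space you would rotate inside a chosen 2-D plane.
import numpy as np

def rotate_around_axis(v, axis, theta):
    k = axis / (np.linalg.norm(axis) + 1e-9)          # normalize the axis
    return (v * np.cos(theta)
            + np.cross(k, v) * np.sin(theta)
            + k * np.dot(k, v) * (1.0 - np.cos(theta)))

v = np.array([1.0, 0.0, 0.0])
print(rotate_around_axis(v, np.array([0.0, 0.0, 1.0]), np.pi / 2))  # ~[0, 1, 0]
```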
I'm not trying to beat transformers or anything, as this is kind of a separate beast. What I am trying to do is sanity-check whether my idea is coherent, learn what it is similar to, and get overall input on what tests would convince you it is learning vs. gaming metrics.
What the system does
* Environment:
* Toy task: bouncing balls with gravity + elastic collisions.
* Input, initial state at `t=0` for each ball `(x, y, vx, vy)`
* Output, predicted rollout for horizon `H`: `(x, y, vx, vy)` for `t=1..H`
* So predict the trajectory from initial state
* Internal representation:
* Rungs: N memory cells (latent slots), Each has:
* concrete (fast) latent - is a vector that tracks immediate observable state
* abstract (slow) latent - is a vector that tracks longer-term/context features
* Spirals: K routing/track vectors that decide which rung(s) to read/write each step (idea here is kinda object permanence)
* WaveBank: One global context vector pooled from all rungs (using Fourier-ish pooling)
* Phase-rolled spectral: Phase shifted Fourier features used to inject time structure into that global context(WaveBank)
* Learning:
* No backprop; each episode it does "sample -> score -> select -> blend":
* Roll out ground truth from the simulator (physics sim)
* Run M candidate model rollouts (with some randomness)
* Score each candidate
* Pick the best
* If it beats the baseline, then blend the world model (global persistent one) slightly towards that candidate
* Think closer to CEM (Cross-entropy method), ES (evolution strategies), or HillClimb rather than SGD
* Basic pseudo-code:
* Episodic “sample → score → keep best → optionally blend” loop:
for episode in 1..E:
# (optional) get ground-truth rollout from env
gt0, gt_traj = env.reset_and_rollout(H)
best_candidate = None
best_score = +INF
for r in 1..NUM_ROLLOUTS:
thought = copy(world_model)
thought.inject_small_randomness()
thought.embed_input(gt0) # encode initial state into memory
trace = thought.run_for(H) # produces predicted trajectory + internal history
score = SCORE(trace) # e.g., energy-only, or energy + prediction error
if score < best_score:
best_score = score
best_candidate = thought
# Accept/blend rule (describe verbally; keep exact gate/eta private)
if ACCEPT(best_candidate, world_model):
world_model.blend_toward(best_candidate)
* One-turn dynamics: wavebank → attachments → local updates → routing:
function STEP(state, teacher_next=None):
# 1) global field from current memory (wavebank)
wave = BUILD_WAVEBANK(rungs, spirals)
state.wavebank = wave
# 2) bucket spirals by current attachment (routing target)
attached = group_by(spirals, key = spiral.current_rung_id)
# 3) update each rung, optionally influenced by attached spiral(s)
for rung in rungs:
prev = prev_rungs[rung.id] # snapshot from previous turn
target = None
if teacher_next exists:
target = teacher_next.rungs[rung.id].concrete # “next-step” supervision
for spiral in attached[rung.id]:
rung = UPDATE_RUNG_WITH_SPIRAL(rung, prev, spiral, wave, target)
spiral.next_rung_id = CHOOSE_NEXT_RUNG(spiral, rungs, wave)
if no spiral attached:
rung = UPDATE_RUNG(rung, prev, wave, target)
# 4) commit routing moves
for spiral in spirals:
spiral.current_rung_id = spiral.next_rung_id
return state
* Routing + “curl”: rotate-for-search then softmax pick:
function CHOOSE_NEXT_RUNG(spiral, rungs, wavebank):
# derive an axis from a wave phase for this turn
axis = wavebank[turn % num_phases]
# "curl": rotate spiral identity in the plane perpendicular to axis
# amount grows when stuck (wrongStreak) and with arousal
spiral_vec = ROTATE_AROUND_AXIS(
v = spiral.vec,
axis = axis,
amount = f(spiral.wrongStreak, state.arousal)
)
# score each rung by similarity to (possibly rotated) spiral vector
scores[i] = SIM(spiral_vec, rungs[i].key_vector)
# sample next rung (softmax + temperature driven by arousal/stuckness)
probs = SOFTMAX(scores / TEMP(spiral.wrongStreak, state.arousal))
next_id = SAMPLE(probs)
# update “stuckness” based on whether move improved attachment
spiral.wrongStreak = UPDATE_STUCKNESS(spiral, next_id, scores)
# (optional) softly commit rotated vector back into identity when stuck
spiral.vec = MAYBE_COMMIT_ROTATION(spiral.vec, spiral_vec, spiral.wrongStreak)
return next_id
* Energy signal: “normalize by running scale, squash, EMA”:
function UPDATE_ENERGY(raw_components):
# raw_components might include: physics_smoothness, strand_change, etc.
for each component c:
scale[c] = EMA(scale[c], max(raw[c], 0)) # running magnitude estimate
norm[c] = (raw[c]/scale[c]) / (1 + raw[c]/scale[c]) # squash to [0,1]
turn_total = sum(norm[c] for c)
# keep energy as EMA of normalized per-turn quantities
energy_total = EMA(energy_total, turn_total)
for c: energy[c] = EMA(energy[c], norm[c]) | 2025-12-23T19:39:08 | https://www.reddit.com/r/LocalLLaMA/comments/1pu30gr/wavehelix_would_love_some_input_on_side_project/ | CelebrationMinimum50 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pu30gr | false | null | t3_1pu30gr | /r/LocalLLaMA/comments/1pu30gr/wavehelix_would_love_some_input_on_side_project/ | false | false | self | 0 | null |
New Update - Mistral Vibe v1.3.0 | 99 | A new [**Vibe**](https://github.com/mistralai/mistral-vibe) update is here! We’re keeping the momentum going by including [Agent Skills](https://agentskills.io/home) in this latest Vibe update. Agent Skills are **collections of instructions, scripts, and resources that agents can discover and use to perform tasks** more accurately and efficiently.
# Changelog
* Agent Skills Support
* Native Terminal Theme Support
* Reasoning Models Support
* Multiple Bug Fixes
Learn more about the changes [here](https://github.com/mistralai/mistral-vibe/blob/main/CHANGELOG.md#130---2025-12-23)
**Happy shipping - and happy holidays!**
\-> `uv tool install mistral-vibe` | 2025-12-23T19:10:57 | https://www.reddit.com/r/LocalLLaMA/comments/1pu2bwy/new_update_mistral_vibe_v130/ | Nefhis | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pu2bwy | false | null | t3_1pu2bwy | /r/LocalLLaMA/comments/1pu2bwy/new_update_mistral_vibe_v130/ | false | false | self | 99 | {'enabled': False, 'images': [{'id': 'soghN5GoZXbAPaCVmdJcR4K35i8G8Kd15dtBZXruNdI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/soghN5GoZXbAPaCVmdJcR4K35i8G8Kd15dtBZXruNdI.png?width=108&crop=smart&auto=webp&s=3893f6dfd40e153f2c68b4fc57d9784f65f8890a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/soghN5GoZXbAPaCVmdJcR4K35i8G8Kd15dtBZXruNdI.png?width=216&crop=smart&auto=webp&s=e1931f9446b627c2b29fae228ce5771d5a6e83e0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/soghN5GoZXbAPaCVmdJcR4K35i8G8Kd15dtBZXruNdI.png?width=320&crop=smart&auto=webp&s=2bdd158d07ea8d81a3de7e04274a1d8fc9fc0b74', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/soghN5GoZXbAPaCVmdJcR4K35i8G8Kd15dtBZXruNdI.png?width=640&crop=smart&auto=webp&s=dfd484787db13f55a5571ac8f301a9957d88a3fe', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/soghN5GoZXbAPaCVmdJcR4K35i8G8Kd15dtBZXruNdI.png?width=960&crop=smart&auto=webp&s=030cf79452788e72c0cb572d85157443ce25ffc6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/soghN5GoZXbAPaCVmdJcR4K35i8G8Kd15dtBZXruNdI.png?width=1080&crop=smart&auto=webp&s=a0ef3fca304d2396d5009965eeabebd9d28d7921', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/soghN5GoZXbAPaCVmdJcR4K35i8G8Kd15dtBZXruNdI.png?auto=webp&s=1293f47fca21c7d9ae71cdaaad6ac4efa2fea5f6', 'width': 1200}, 'variants': {}}]} |
Releasing NegotiateBench: a benchmark where models negotiate against each other | 6 | The goal is to identify which LLMs perform best in environments where no correct solution can be known in advance (ex: during training time).
Code: https://github.com/Mihaiii/NegotiateBench
Huggingface Space: https://mihaiii-negotiatebench.hf.space/ | 2025-12-23T19:09:25 | https://mihaiii-negotiatebench.hf.space | Either-Job-341 | mihaiii-negotiatebench.hf.space | 1970-01-01T00:00:00 | 0 | {} | 1pu2ajz | false | null | t3_1pu2ajz | /r/LocalLLaMA/comments/1pu2ajz/releasing_negotiatebench_a_benchmark_where_models/ | false | false | default | 6 | null |
Ollama not outputting for Qwen3 80B Next Instruct, but works for Thinking model | 1 | I have a weird issue where Ollama does not give me any output for Qwen3 Next 80B Instruct, though it gives me token results. I see the same thing running in the terminal. When I pull up the log I don't see anything useful. Anyone come across something like this? Everything is on the latest version.
https://preview.redd.it/27ooi0og209g1.png?width=1246&format=png&auto=webp&s=55579ada7461fa7258cc1c6a908111b1fb957005
The log shows absolutely nothing useful
[Running from Open WebUI](https://preview.redd.it/ts6lb8t7309g1.png?width=1341&format=png&auto=webp&s=84785ddb224466e38803a10a37f8d05bab3c08d7)
[Running locally via terminal](https://preview.redd.it/j9ujcugk309g1.png?width=1351&format=png&auto=webp&s=2b31d610451aa2550cba448960ec82e2c6b09c22)
| 2025-12-23T18:57:37 | https://www.reddit.com/r/LocalLLaMA/comments/1pu1zvx/ollama_not_outputing_for_qwen3_80b_next_instruct/ | vulcan4d | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pu1zvx | false | null | t3_1pu1zvx | /r/LocalLLaMA/comments/1pu1zvx/ollama_not_outputing_for_qwen3_80b_next_instruct/ | false | false | 1 | null | |
Whats the best open weight model for orchestration? | 0 | Hey everyone,
I’m exploring AI orchestration systems and I’m curious about what the community would use.
Instead of relying on a single model for everything, I'm thinking of a multi-agent system with an orchestrator: one central model coordinates multiple specialized agents like a developer agent, code review agent, web browsing agent, deep research agent, vision agent, etc.
Does anyone know good open-weight models that work well as the central orchestrator? I've been thinking about gpt-oss models since they're good at reasoning and instruction following.
Would love to hear your experiences, recommendations :D | 2025-12-23T18:53:46 | https://www.reddit.com/r/LocalLLaMA/comments/1pu1whs/whats_the_best_open_weight_model_for_orchestration/ | cride20 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pu1whs | false | null | t3_1pu1whs | /r/LocalLLaMA/comments/1pu1whs/whats_the_best_open_weight_model_for_orchestration/ | false | false | self | 0 | null |
Saw this on local marketplace, must be from a fellow r/LocalLLaMA here | 180 | 2025-12-23T18:51:52 | bobaburger | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pu1uq6 | false | null | t3_1pu1uq6 | /r/LocalLLaMA/comments/1pu1uq6/saw_this_on_local_marketplace_must_be_from_a/ | false | false | default | 180 | {'enabled': True, 'images': [{'id': 'rd8mxp4l209g1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/rd8mxp4l209g1.png?width=108&crop=smart&auto=webp&s=e76e7a8472599b93fcb45371d68721a83934d09d', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/rd8mxp4l209g1.png?width=216&crop=smart&auto=webp&s=33cbf220c0ce1786679eeddf6087db8539016988', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/rd8mxp4l209g1.png?width=320&crop=smart&auto=webp&s=d097be128dada7e3062b141efb84a81ca9e5fa7c', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/rd8mxp4l209g1.png?width=640&crop=smart&auto=webp&s=3bc74b68c9ffc26cb488794a081d8077fa8ae663', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/rd8mxp4l209g1.png?width=960&crop=smart&auto=webp&s=a15d63dc389d5ecbd6f7a41682e295f84278bfe5', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/rd8mxp4l209g1.png?width=1080&crop=smart&auto=webp&s=8b5feeb787153927a54969f090c14a4813ba000f', 'width': 1080}], 'source': {'height': 2868, 'url': 'https://preview.redd.it/rd8mxp4l209g1.png?auto=webp&s=23d56f6b70ff9e4b17ccda9123a27fa97dbcc4a3', 'width': 1320}, 'variants': {}}]} | ||
Everyone’s using vectors and graphs for AI memory. We discovered Agentic Search. | 0 | When we first started building with LLMs, the gap was obvious: The models could reason well in the moment, but as soon as the repo got large or the session got long, things started to fall apart.
An agent would edit the wrong file, pull in a “similar” function instead of the exact one or slow to a crawl trying to semantically search the entire codebase.
It wasn’t bad reasoning. It was bad retrieval.
Here are the reasons why embedding-based retrieval breaks down for code:
**- Precision:** Cosine similarity retrieves “related,” not “correct". Code is brittle - similar is often worse than irrelevant. One wrong snippet in the context window quietly poisons the rest of the reasoning and leads to cascading errors.
**- Latency**: Once repos get large, LLM-led semantic search can take minutes rather than milliseconds. That’s hard to tolerate inside agent loops.
**- Mutation:** Codebases change constantly - refactors, renames, deletions, generated files, even embedding model upgrades. Embeddings drift, and reindexing becomes mandatory and expensive.
And then we hit an uncomfortable realization:
For *coding agents*, semantic similarity might be the wrong retrieval primitive entirely.
Vectors are great at recall, but programming tasks usually need **precision**: retrieving something "related" is often worse than retrieving nothing.
That led us to move away from flat embedding indexes and toward two ideas: **Context Tree** and **Agentic Search**.
Instead of storing memory as flat chunks, context is organized hierarchically - closer to how code and decisions are structured:
.brv/context-tree/
├── code_style/
├── design/
├── structure/
└── snapshots/
And instead of a single nearest-neighbor lookup, retrieval becomes an agent-driven process:
* scoped search
* pattern / regex matching
* filtered full-text lookup
* structured traversal across related nodes
* tool chaining based on the task
No global similarity search, embedding drift or mandatory reindexing.
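To make "tool-driven navigation" concrete, here is a rough sketch of a single scoped lookup: plain filesystem traversal plus pattern matching over the tree above. This is illustrative only; the scope name and `.md` extension are assumptions, not our actual API.

```python
# Illustrative scoped retrieval over a context tree: filesystem walk + regex,
# no embeddings involved. Scope names mirror the .brv/context-tree layout above.
import re
from pathlib import Path

def scoped_search(root: str, scope: str, pattern: str, max_hits: int = 20):
    hits = []
    for path in Path(root, scope).rglob("*.md"):          # e.g. scope="design"
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if re.search(pattern, line):
                hits.append((str(path), lineno, line.strip()))
                if len(hits) >= max_hits:
                    return hits
    return hits

for hit in scoped_search(".brv/context-tree", "design", r"retry|backoff"):
    print(hit)
```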
The effect was immediate:
Agents became faster because retrieval was mostly filesystem-level work. Long-running sessions stayed stable. Most importantly, precision improved: agents pulled the *exact* context the task depended on, not semantically adjacent noise.
This is the direction we’ve been exploring at ByteRover: structuring context with a Context Tree and letting agents retrieve it through tool-driven navigation rather than pure semantic similarity.
I would love to know your thoughts about our approach!
| 2025-12-23T18:16:56 | https://www.reddit.com/r/LocalLLaMA/comments/1pu0zeb/everyones_using_vectors_and_graphs_for_ai_memory/ | Julianna_Faddy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pu0zeb | false | null | t3_1pu0zeb | /r/LocalLLaMA/comments/1pu0zeb/everyones_using_vectors_and_graphs_for_ai_memory/ | false | false | self | 0 | null |
My 2x5090 training benchmarks | 4 | Wanted to share my results using the below benchmark. These seem surprisingly hard to come by, so I'm hoping others can run this and share what your results are. To limit power to the cards I ran: `sudo nvidia-smi -pl <whatever watts you want>`
Note this is a rough benchmark but from the results from the guys who made it, it does seem to generalize pretty well.
[https://github.com/aime-team/pytorch-benchmarks#](https://github.com/aime-team/pytorch-benchmarks#)
git clone https://github.com/aime-team/pytorch-benchmarks.git
python main.py -amp -ne 1
My results:
9960X w/ Linux 6.17 + PyTorch 2.9 + Python 3.13:
Full power / limited to 400W
1 GPU: 52s / 55s
2 GPU: 31s / 32s | 2025-12-23T18:14:57 | https://www.reddit.com/r/LocalLLaMA/comments/1pu0xnf/my_2x5090_training_benchmarks/ | john0201 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pu0xnf | false | null | t3_1pu0xnf | /r/LocalLLaMA/comments/1pu0xnf/my_2x5090_training_benchmarks/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': '-5SF42qHjp5Lohol_KrdEHEqz0McSDj8b3OokB4klFY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-5SF42qHjp5Lohol_KrdEHEqz0McSDj8b3OokB4klFY.png?width=108&crop=smart&auto=webp&s=aca96d4b99aa402c21b59293bd56f4df1ed1a607', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-5SF42qHjp5Lohol_KrdEHEqz0McSDj8b3OokB4klFY.png?width=216&crop=smart&auto=webp&s=b5a9c5fb3e1ddd249d3f8ec6f0ea6895adf11abf', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-5SF42qHjp5Lohol_KrdEHEqz0McSDj8b3OokB4klFY.png?width=320&crop=smart&auto=webp&s=9403ad059405a57731beeb5b0bdb0d15ea39b99a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-5SF42qHjp5Lohol_KrdEHEqz0McSDj8b3OokB4klFY.png?width=640&crop=smart&auto=webp&s=fd5d873ab16222faa828b6bbe53202453231b756', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-5SF42qHjp5Lohol_KrdEHEqz0McSDj8b3OokB4klFY.png?width=960&crop=smart&auto=webp&s=736df7b5f3c19e422eccb6148f5c2d5c2e44e02a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-5SF42qHjp5Lohol_KrdEHEqz0McSDj8b3OokB4klFY.png?width=1080&crop=smart&auto=webp&s=eac85ec993638cdbd1548a579253fbed2999df08', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-5SF42qHjp5Lohol_KrdEHEqz0McSDj8b3OokB4klFY.png?auto=webp&s=ef1d3670c6711fb7e034efd8c37512386f42a0cb', 'width': 1200}, 'variants': {}}]} |
WaveHelix: a weight-free dynamical learner (spirals + wavebank + energy) | toy env results + request for critique / prior art | 0 | Hey so let me preface this by saying im new to the field...my background is in backend engineering with spirng microservices and the like, but I’m super interested in learnign how LLMs work and have built some minor toy transformers. Im now exploring an alternative learning setup that avoids gradient descent and instead tries to “learn” by minimizing an internal energy signal inside its own dynamics. I just have this beleif that a model needs to learn a world view/understanding first and language last.
# TL;DR
* Model state is a set of **rungs** (concrete vector, and an abstract vector) + moving **spirals** (identity carriers).
* Each turn, rungs produce a global **wavebank** (spectral pooled field), spirals “move/attach,” and updates happen locally.
* Training uses an **energy + teacher-RMSE** signal (toy env) and a simple accept/blend step.
* I’m not claiming this beats transformers; I’m trying to understand whether the dynamics can learn stable “physics-ish” structure at all.
# Core loop (high-level)
* Pool rungs → compute wavebank (phase-rolled spectral vector)
* Spirals attach to rungs, influence updates, then choose next rung (softmax + exploration)
* Energy is a running measure of “smoothness / stability” (and optionally teacher consistency)
* Episodic training picks candidate rollouts and blends improvements into the world model
(If useful I can post a diagram / pseudocode, but I’m trying not to dump every implementation detail in the initial post.)
# Toy experiment
* Environment: bouncing balls with gravity + elastic collisions (low dimensional physics)
* Metric 1: teacher-RMSE (supervision from env rollout / teacher snapshots)
* Metric 2: internal energy (smoothness proxy / second-difference style)
* Current observation: learning is noisy and I’m debugging evaluation comparability + normalization.
# What I want feedback on (specific)
1. **Closest prior art?** What should I read that’s actually similar (dynamical systems learning, energy-based models, predictive coding, etc.)?
2. **Failure modes** you’d expect from this kind of discrete “attach/move” mechanism?
3. **Baselines** for the toy env that would make the comparison fair (even simple AR / small MLP / Kalman)?
4. If you saw energy go down but RMSE not improve (or vice versa), what would you test next?
5. What result would convince you this isn’t just “metric hacking”? | 2025-12-23T18:11:11 | https://www.reddit.com/r/LocalLLaMA/comments/1pu0uc3/wavehelix_a_weightfree_dynamical_learner_spirals/ | CelebrationMinimum50 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pu0uc3 | false | null | t3_1pu0uc3 | /r/LocalLLaMA/comments/1pu0uc3/wavehelix_a_weightfree_dynamical_learner_spirals/ | false | false | self | 0 | null |
Seedance-1.5 Pro now Available for APIs (Lip Sync Test) - Will Smith Eating Spaghetti | 22 | Just released Seedance-1.5 Pro for Public APIs. This update focuses primarily on lip synchronization and facial micro-expressions. Created this video using Prompt : "Will Smith eating spaghetti." using Higgsfield
| 2025-12-23T18:04:02 | https://v.redd.it/xiw0xde4uz8g1 | Educational-Pound269 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pu0nuo | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/xiw0xde4uz8g1/DASHPlaylist.mpd?a=1769105059%2COWMwZmI5OWEwYzM0N2I0M2Q0OTE5OGM0NzA2YmViZjZjMDgwZTJjZmNlYjVmZDI4NjVkMDdlZGJiMDhmMGYwNg%3D%3D&v=1&f=sd', 'duration': 12, 'fallback_url': 'https://v.redd.it/xiw0xde4uz8g1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/xiw0xde4uz8g1/HLSPlaylist.m3u8?a=1769105059%2CNjYzNjgxNDc2MWMwMmVjODFlNDg2NGZlYzY3ODNjNGRiODg2NWNmNTJiNzQyNDJlNDdjNWFhYmUwN2Y3ODhhNA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/xiw0xde4uz8g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1280}} | t3_1pu0nuo | /r/LocalLLaMA/comments/1pu0nuo/seedance15_pro_now_available_for_apis_lip_sync/ | false | false | 22 | {'enabled': False, 'images': [{'id': 'Z290OXhnZjR1ejhnMXoNBk7Z2yIFZ7Ejh3Sd6n-6-3FCCF2620soOqLbMyFd', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Z290OXhnZjR1ejhnMXoNBk7Z2yIFZ7Ejh3Sd6n-6-3FCCF2620soOqLbMyFd.png?width=108&crop=smart&format=pjpg&auto=webp&s=aaaa56f7bc6449414f899ec2316569c3ec63c49f', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/Z290OXhnZjR1ejhnMXoNBk7Z2yIFZ7Ejh3Sd6n-6-3FCCF2620soOqLbMyFd.png?width=216&crop=smart&format=pjpg&auto=webp&s=600f30262f02acbb431117cd51b728d47d1be6ec', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/Z290OXhnZjR1ejhnMXoNBk7Z2yIFZ7Ejh3Sd6n-6-3FCCF2620soOqLbMyFd.png?width=320&crop=smart&format=pjpg&auto=webp&s=58ed3a32964be061cabcaae7e8e9b4c5059daab9', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/Z290OXhnZjR1ejhnMXoNBk7Z2yIFZ7Ejh3Sd6n-6-3FCCF2620soOqLbMyFd.png?width=640&crop=smart&format=pjpg&auto=webp&s=d5e8ace456bbed003c9a98e9b3055a581de0adaa', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/Z290OXhnZjR1ejhnMXoNBk7Z2yIFZ7Ejh3Sd6n-6-3FCCF2620soOqLbMyFd.png?width=960&crop=smart&format=pjpg&auto=webp&s=dcb3b8e14f445facbc8f43fc699e7cd6bfe0dd4e', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/Z290OXhnZjR1ejhnMXoNBk7Z2yIFZ7Ejh3Sd6n-6-3FCCF2620soOqLbMyFd.png?width=1080&crop=smart&format=pjpg&auto=webp&s=5013b7aabe6f5f8cda86cdf9e8f6fec70b236896', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/Z290OXhnZjR1ejhnMXoNBk7Z2yIFZ7Ejh3Sd6n-6-3FCCF2620soOqLbMyFd.png?format=pjpg&auto=webp&s=ae6c09c96400a973be3407be2c785a75a72b6de2', 'width': 1280}, 'variants': {}}]} | |
Do different AI models “think” differently when given the same prompt? | 1 | I've been experimenting with running the same prompt through different AI tools just to see how the reasoning paths vary. Even when the final answer looks similar, the way ideas are ordered or emphasized can feel noticeably different.
Out of curiosity, I generated one version using [AdpexWan 2.6](https://www.reddit.com/r/LocalLLaMA/s/LgPVVsHQL0) and compared it with outputs from other models. The content here comes from that experiment. What stood out wasn't accuracy or style, but how the model chose to frame the problem and which assumptions it surfaced first.
For people who test multiple models:
- Do you notice consistent "personalities" or reasoning patterns?
- Do some models explore more alternatives while others converge quickly?
- Have you ever changed tools purely based on how they approach a problem? | 2025-12-23T17:51:28 | https://www.reddit.com/r/LocalLLaMA/comments/1pu0bs7/do_different_ai_models_think_differently_when/ | Medical-Fennel-9842 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pu0bs7 | false | null | t3_1pu0bs7 | /r/LocalLLaMA/comments/1pu0bs7/do_different_ai_models_think_differently_when/ | false | false | self | 1 | null |
End to end encryption for AI chats built by Moxie Marlinspike: Confer | 0 | 2025-12-23T17:48:30 | https://confer.to/blog/2025/12/confessions-to-a-data-lake/ | Jordi_Mon_Companys | confer.to | 1970-01-01T00:00:00 | 0 | {} | 1pu096o | false | null | t3_1pu096o | /r/LocalLLaMA/comments/1pu096o/end_to_end_encryption_for_ai_chats_built_by_moxie/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': '5-20z4E8em8C2-JVQdVV0xMnZD2vbw7yN_SoyS38lLM', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/5-20z4E8em8C2-JVQdVV0xMnZD2vbw7yN_SoyS38lLM.png?width=108&crop=smart&auto=webp&s=97c67e9cac1f9be1d418f3120ee36580c36f66ad', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/5-20z4E8em8C2-JVQdVV0xMnZD2vbw7yN_SoyS38lLM.png?width=216&crop=smart&auto=webp&s=dad5a39e5fee7cef77daef63633ebe349f6c7e11', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/5-20z4E8em8C2-JVQdVV0xMnZD2vbw7yN_SoyS38lLM.png?width=320&crop=smart&auto=webp&s=8eefbd1d9fb95ab75c266008f6f172397c7961c2', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/5-20z4E8em8C2-JVQdVV0xMnZD2vbw7yN_SoyS38lLM.png?width=640&crop=smart&auto=webp&s=26ca2e2d62aca4c8522281ce81b94fb59e1a6dbb', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/5-20z4E8em8C2-JVQdVV0xMnZD2vbw7yN_SoyS38lLM.png?width=960&crop=smart&auto=webp&s=5930713b7a3e62c02ff109d6fbf731c050a3094c', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/5-20z4E8em8C2-JVQdVV0xMnZD2vbw7yN_SoyS38lLM.png?width=1080&crop=smart&auto=webp&s=8be472465c5e050c606f8daf500abb7fde4a13c6', 'width': 1080}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/5-20z4E8em8C2-JVQdVV0xMnZD2vbw7yN_SoyS38lLM.png?auto=webp&s=ed2d2f2cfc3310a187d8a735f15c638569b7ffbd', 'width': 1536}, 'variants': {}}]} | |
Do different AI models “think” differently when given the same prompt? | 0 | I’ve been experimenting with running the same prompt through different AI tools just to see how the reasoning paths vary. Even when the final answer looks similar, the way ideas are ordered or emphasized can feel noticeably different.
Out of curiosity, I generated one version using [Adpex Wan 2.6](https://www.adpexai.com/agent/wan-2-6)
and compared it with outputs from other models. The content here comes from that experiment. What stood out wasn’t accuracy or style, but how the model chose to frame the problem and which assumptions it surfaced first.
For people who test multiple models:
– Do you notice consistent “personalities” or reasoning patterns?
– Do some models explore more alternatives while others converge quickly?
– Have you ever changed tools purely based on how they approach a problem?
#AIModels #Prompting #LLMs #AdpexAI | 2025-12-23T17:45:03 | https://www.reddit.com/r/LocalLLaMA/comments/1pu05xq/do_different_ai_models_think_differently_when/ | Quietly_here_28 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pu05xq | false | null | t3_1pu05xq | /r/LocalLLaMA/comments/1pu05xq/do_different_ai_models_think_differently_when/ | false | false | self | 0 | null |
OKAP (Open Key Access Protocol): like OAuth, but for API keys. | 4 | Problem: Every AI app wants you to paste your OpenAI/Anthropic key. Keys spread across dozens of apps with zero visibility, and you can only revoke by rotating the key itself.
Proposal: OKAP (Open Key Access Protocol) like OAuth, but for API keys.
How it works:
1. Keys stay in YOUR vault (self-host or hosted)
2. Apps request access via token (scoped to provider, models, expiry)
3. Vault proxies requests, apps never see your actual key
4. Revoke any app instantly without touching your master key
Not to be confused with LiteLLM/OpenRouter (those are proxies you pay for). OKAP is a protocol for user-owned key management - your keys, your vault, your control.
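To make the flow concrete, here is a rough sketch of an app-side call; the proxy path, header, and token format shown are illustrative placeholders, not the final spec.

```python
# Illustrative OKAP-style call: the app only ever holds a scoped vault token.
# The vault checks the token's scope/expiry, injects the real provider key,
# and forwards the request upstream. Paths and header names are hypothetical.
import requests

VAULT_URL = "https://vault.example.com"   # self-hosted or hosted vault
SCOPED_TOKEN = "okap_xxx"                 # scoped to provider/models/expiry

resp = requests.post(
    f"{VAULT_URL}/proxy/openai/v1/chat/completions",
    headers={"Authorization": f"Bearer {SCOPED_TOKEN}"},
    json={
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "hello"}],
    },
    timeout=30,
)
print(resp.status_code, resp.json())
```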
Working implementation:
- Hosted vault: https://vault.okap.dev
- Python SDK: pip install okap
- Spec: https://okap.dev
Looking for feedback. Would you use this for your AI tools? What's missing? | 2025-12-23T17:42:20 | https://www.reddit.com/r/LocalLLaMA/comments/1pu03gc/okap_open_key_access_protocol_like_oauth_but_for/ | init0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pu03gc | false | null | t3_1pu03gc | /r/LocalLLaMA/comments/1pu03gc/okap_open_key_access_protocol_like_oauth_but_for/ | false | false | self | 4 | null |
4.6 air any day now. | 0 | 2025-12-23T17:28:07 | https://www.reddit.com/r/LocalLLaMA/comments/1ptzqj9/46_air_any_day_now/ | gamblingapocalypse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptzqj9 | false | null | t3_1ptzqj9 | /r/LocalLLaMA/comments/1ptzqj9/46_air_any_day_now/ | false | false | 0 | null | ||
Representation Engineering / activation steering: “prompting vs finetuning vs steering vectors” (practical notes + demo) | 30 | Been exploring Representation Engineering (RepE) / activation steering recently and it feels like a useful “third lever” between prompting and fine-tuning.
High-level framing (practitioner view):
* Prompting: fast to iterate, but persona/behavior can drift over long contexts.
* Fine-tuning: powerful but costly, and it can trade off generality if you push it too hard.
* Steering (activations): keep weights fixed and add a learned “direction” in hidden states at inference time (steering vectors), so you can nudge behavior without huge prompts or retraining.
The demo that made it click for me is “The Eiffel Tower Llama” (Hugging Face Space / walkthrough):
[https://www.youtube.com/watch?v=F2jd5WuT-zg](https://www.youtube.com/watch?v=F2jd5WuT-zg)
What’s interesting is how concrete the concept becomes: you find a direction corresponding to some concept (toy example: “Eiffel Tower”; more generally: honesty/helpfulness/positivity/etc.) and then add/subtract that vector during generation to shift outputs.
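For reference, the core trick fits in a few lines with plain `transformers` forward hooks. A minimal sketch, assuming a LLaMA-style decoder stack exposed at `model.model.layers` (Qwen2.5-0.5B-Instruct here), a single contrastive pair, and arbitrary layer index / steering strength:

```python
# Minimal activation-steering sketch: derive a direction from one contrastive
# pair, then add it to a middle layer's output during generation.
# Assumptions: LLaMA-style layout (model.model.layers), LAYER=15, ALPHA=8.0.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

NAME, LAYER, ALPHA = "Qwen/Qwen2.5-0.5B-Instruct", 15, 8.0
tok = AutoTokenizer.from_pretrained(NAME)
model = AutoModelForCausalLM.from_pretrained(NAME, torch_dtype=torch.float16, device_map="auto")

def last_hidden(prompt: str) -> torch.Tensor:
    ids = tok(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    return out.hidden_states[LAYER][0, -1, :]        # last-token state at LAYER

# 1) crude steering direction from a single contrastive pair
direction = last_hidden("I love the Eiffel Tower.") - last_hidden("I love the Colosseum.")
direction = direction / direction.norm()

# 2) forward hook that nudges the residual stream at that layer
def steer(module, inputs, output):
    hs = output[0] if isinstance(output, tuple) else output
    hs = hs + ALPHA * direction.to(hs.dtype)
    return (hs,) + output[1:] if isinstance(output, tuple) else hs

handle = model.model.layers[LAYER].register_forward_hook(steer)
ids = tok("Tell me about your favorite place to visit.", return_tensors="pt").to(model.device)
out = model.generate(**ids, max_new_tokens=60, do_sample=False)
print(tok.decode(out[0], skip_special_tokens=True))
handle.remove()
```

In practice people usually average many contrastive pairs, fit a probe, or pick an SAE feature rather than relying on a single pair, which is what the first question below is getting at.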
Questions for folks here who’ve implemented this in real setups:
* What’s your go-to method for discovering robust steering directions (contrastive pairs? probes? SAEs?) and which layers tend to be the most controllable?
* Have you seen steering reliably stack for multi-concept control, or does it quickly start to interfere (one concept breaking another / hurting instruction-following)?
* Any best practices for evaluating side effects (capability loss, new biases, safety regressions) beyond qualitative samples?
Would love pointers to good repos, eval recipes, or “gotchas” you’ve hit when moving from toy demos to actual workflows. | 2025-12-23T17:16:21 | AstraNorth | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ptzft4 | false | null | t3_1ptzft4 | /r/LocalLLaMA/comments/1ptzft4/representation_engineering_activation_steering/ | false | false | default | 30 | {'enabled': True, 'images': [{'id': 'pbgq9willz8g1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/pbgq9willz8g1.png?width=108&crop=smart&auto=webp&s=ec8cf4aeeac6936683005ae9eef33e6bd3913420', 'width': 108}, {'height': 120, 'url': 'https://preview.redd.it/pbgq9willz8g1.png?width=216&crop=smart&auto=webp&s=119d002f8fba213dc894a0cb68c31f72e0b23b5b', 'width': 216}, {'height': 178, 'url': 'https://preview.redd.it/pbgq9willz8g1.png?width=320&crop=smart&auto=webp&s=058cf4d46aae397c82a1eb29cc3eebac0b6972cd', 'width': 320}, {'height': 357, 'url': 'https://preview.redd.it/pbgq9willz8g1.png?width=640&crop=smart&auto=webp&s=57ce9f92e2504d518c58e6584a3a818c9e0c7865', 'width': 640}, {'height': 535, 'url': 'https://preview.redd.it/pbgq9willz8g1.png?width=960&crop=smart&auto=webp&s=df305a2287807e94473b7de38dedd5f3e8bdc576', 'width': 960}, {'height': 602, 'url': 'https://preview.redd.it/pbgq9willz8g1.png?width=1080&crop=smart&auto=webp&s=b3885763ea90ce1e211446404e0ba1e55961cc51', 'width': 1080}], 'source': {'height': 1536, 'url': 'https://preview.redd.it/pbgq9willz8g1.png?auto=webp&s=c783af026cad90c07194307c1a3a26bae14203e7', 'width': 2752}, 'variants': {}}]} | |
Fine-tuning LLMs on DGX Spark, from the Nvidia webpage | 2 | https://blogs.nvidia.com/blog/rtx-ai-garage-fine-tuning-unsloth-dgx-spark/
Hi, I'd like to discuss the DGX Spark performance numbers from "How to Fine-Tune an LLM on Nvidia GPUs With Unsloth".
### Llama 3.3 70B
- Method: Qlora
- Backend: Pytorch
- Config:
- Sequence length: 2,048
- Batch size: 8
- Epoch: 1
- Steps: 125
- FP4 Peak Tokens/Sec: 5,079.04
If you assume training on 100M tokens, that's 100M / 5,079 / 3,600 ≈ 5.5 hours.
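Quick sanity check of that arithmetic:

```python
# Back-of-the-envelope: time to push 100M tokens at the reported throughput.
tokens = 100_000_000
tok_per_sec = 5_079.04
hours = tokens / tok_per_sec / 3600
print(f"{hours:.2f} h")   # ~5.47 h, i.e. just under five and a half hours
```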
It doesn't seem too bad, for what it's worth, to have a mini machine that can fine-tune Llama 3.3 70B with QLoRA. Is there a catch? Is this a realistic number?
| 2025-12-23T17:11:14 | https://www.reddit.com/r/LocalLLaMA/comments/1ptzb5y/finetuning_llms_on_dgx_spark_from_nvidia_webpage/ | siegevjorn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptzb5y | false | null | t3_1ptzb5y | /r/LocalLLaMA/comments/1ptzb5y/finetuning_llms_on_dgx_spark_from_nvidia_webpage/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'uPoOwWXcR-Bb-K79pa6Sl4WtJjppjeJuuKURuHtyavw', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/uPoOwWXcR-Bb-K79pa6Sl4WtJjppjeJuuKURuHtyavw.jpeg?width=108&crop=smart&auto=webp&s=cbed0d19d62c592648484ca018e10e1892dda315', 'width': 108}, {'height': 114, 'url': 'https://external-preview.redd.it/uPoOwWXcR-Bb-K79pa6Sl4WtJjppjeJuuKURuHtyavw.jpeg?width=216&crop=smart&auto=webp&s=ae19f4e8c6716d1e01ab388baeb2ff88809e83d0', 'width': 216}, {'height': 170, 'url': 'https://external-preview.redd.it/uPoOwWXcR-Bb-K79pa6Sl4WtJjppjeJuuKURuHtyavw.jpeg?width=320&crop=smart&auto=webp&s=8f16c9b7c8088cca8e37753f2f980e44cf997306', 'width': 320}, {'height': 340, 'url': 'https://external-preview.redd.it/uPoOwWXcR-Bb-K79pa6Sl4WtJjppjeJuuKURuHtyavw.jpeg?width=640&crop=smart&auto=webp&s=62559e97c35b817d0cc2eaaa79e18761750192b7', 'width': 640}, {'height': 510, 'url': 'https://external-preview.redd.it/uPoOwWXcR-Bb-K79pa6Sl4WtJjppjeJuuKURuHtyavw.jpeg?width=960&crop=smart&auto=webp&s=59ab8ece873707a4d9f88827625b31f0666d3920', 'width': 960}, {'height': 573, 'url': 'https://external-preview.redd.it/uPoOwWXcR-Bb-K79pa6Sl4WtJjppjeJuuKURuHtyavw.jpeg?width=1080&crop=smart&auto=webp&s=4c6e2dba987dbb429e3bf2a7a97a81c5e6a1acf0', 'width': 1080}], 'source': {'height': 680, 'url': 'https://external-preview.redd.it/uPoOwWXcR-Bb-K79pa6Sl4WtJjppjeJuuKURuHtyavw.jpeg?auto=webp&s=579e3a5e1543dc11b8a97b3081e8a20853e29ead', 'width': 1280}, 'variants': {}}]} |
AudioGhost AI: Run Meta's SAM-Audio on 4GB-6GB VRAM with a Windows One-Click Installer 👻🎵 | 116 | Hey everyone,
Meta's **SAM-Audio** is a breakthrough for object-oriented audio separation (e.g., "extract the violin from this busy track" using natural language), but the original repo has a massive VRAM footprint. Many users (including myself) experienced OOM errors even on high-end cards because it loads vision encoders and rankers by default.
I built **AudioGhost AI** — an open-source, full-stack GUI designed to bring this power to laptop and consumer GPUs.
**Key Features:**
* 🚀 **Lite Mode (Low VRAM):** By stripping unused encoders and rankers, I got the VRAM usage down to **4GB-6GB** for the Small model and **\~10GB** for Large.
* 🛠️ **Windows 1-Click Installer:** No more wrestling with FFmpeg versions or TorchCodec DLL errors. The `install.bat` handles everything.
* 🎨 **Modern Interface:** Next.js + Tailwind glassmorphism UI with real-time waveform and stem mixing.
* ⚡ **Local-First:** Privacy is paramount—everything runs 100% on your own hardware.
**Performance (RTX 4090, 4:26 audio split into 11 chunks of ~25 s):**
* Small Model: \~6GB VRAM, 25 s
* Large Model: \~10GB VRAM, 41 s
I truly believe **SAM-Audio** is the future of audio editing, and I hope this tool makes it accessible to more creators who don't have access to lab-grade GPU clusters.
**GitHub (Open Source):** [https://github.com/0x0funky/audioghost-ai](https://github.com/0x0funky/audioghost-ai)
Would love to hear your thoughts, feedback, or any issues you find while running it on your rig! 👻 | 2025-12-23T17:06:40 | https://v.redd.it/ovsyaleljz8g1 | GGwithRabbit | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ptz6xy | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/ovsyaleljz8g1/DASHPlaylist.mpd?a=1769101614%2CYjI1NWU1MTBmOGZiMzE1M2FmMTIwNDQ5OWM3NTRlNTY1ZDQxN2EyOGE1M2YwYjUwODYxMGRhNTBmYjZkZWZhYQ%3D%3D&v=1&f=sd', 'duration': 41, 'fallback_url': 'https://v.redd.it/ovsyaleljz8g1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/ovsyaleljz8g1/HLSPlaylist.m3u8?a=1769101614%2CODA2MTVmNTY1ZTQwOTNmZjg2OGFlMGQxYmY2NWRlMTVlMThkZDdhZWI2MTlkYjI4NjNhMzdkZmYwZTI5YWVmMw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ovsyaleljz8g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1090}} | t3_1ptz6xy | /r/LocalLLaMA/comments/1ptz6xy/audioghost_ai_run_metas_samaudio_on_4gb6gb_vram/ | false | false | 116 | {'enabled': False, 'images': [{'id': 'NzF3YWpkZmxqejhnMSn9Bvgd5F2HIaI4NgTX7xfRCm50JCfHFGJKJxKbbOUZ', 'resolutions': [{'height': 107, 'url': 'https://external-preview.redd.it/NzF3YWpkZmxqejhnMSn9Bvgd5F2HIaI4NgTX7xfRCm50JCfHFGJKJxKbbOUZ.png?width=108&crop=smart&format=pjpg&auto=webp&s=d63fc23ba747de43b405ca8756c9bf3abb7c784f', 'width': 108}, {'height': 214, 'url': 'https://external-preview.redd.it/NzF3YWpkZmxqejhnMSn9Bvgd5F2HIaI4NgTX7xfRCm50JCfHFGJKJxKbbOUZ.png?width=216&crop=smart&format=pjpg&auto=webp&s=5ee136b89b5aeabe05f8335f6660c6da6f522c8a', 'width': 216}, {'height': 317, 'url': 'https://external-preview.redd.it/NzF3YWpkZmxqejhnMSn9Bvgd5F2HIaI4NgTX7xfRCm50JCfHFGJKJxKbbOUZ.png?width=320&crop=smart&format=pjpg&auto=webp&s=4842f98d68cb55ac394525ad9d4fe63a51dd8dc3', 'width': 320}, {'height': 634, 'url': 'https://external-preview.redd.it/NzF3YWpkZmxqejhnMSn9Bvgd5F2HIaI4NgTX7xfRCm50JCfHFGJKJxKbbOUZ.png?width=640&crop=smart&format=pjpg&auto=webp&s=30dde2b15fb2b05c2acc8a7ddab6855924e7ab71', 'width': 640}, {'height': 951, 'url': 'https://external-preview.redd.it/NzF3YWpkZmxqejhnMSn9Bvgd5F2HIaI4NgTX7xfRCm50JCfHFGJKJxKbbOUZ.png?width=960&crop=smart&format=pjpg&auto=webp&s=474a3da00c0376058488cac54e186719e980e223', 'width': 960}, {'height': 1070, 'url': 'https://external-preview.redd.it/NzF3YWpkZmxqejhnMSn9Bvgd5F2HIaI4NgTX7xfRCm50JCfHFGJKJxKbbOUZ.png?width=1080&crop=smart&format=pjpg&auto=webp&s=a2d5069d1eb5575077ae6af6df174df3cff6d94e', 'width': 1080}], 'source': {'height': 1292, 'url': 'https://external-preview.redd.it/NzF3YWpkZmxqejhnMSn9Bvgd5F2HIaI4NgTX7xfRCm50JCfHFGJKJxKbbOUZ.png?format=pjpg&auto=webp&s=ac4f88bac005ec81d580d8fcceb298e61c96a665', 'width': 1304}, 'variants': {}}]} | |
Hey, where are the weights for Minimax M2.1? | 14 | People are waiting! Is it coming soon? It takes time for someone like Unsloth or MLX community to convert it into GGUF or MLX and upload it unless they did it already... Thanks! | 2025-12-23T17:03:53 | https://www.reddit.com/r/LocalLLaMA/comments/1ptz4fz/hey_where_are_the_weights_for_minimax_m21/ | power97992 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptz4fz | false | null | t3_1ptz4fz | /r/LocalLLaMA/comments/1ptz4fz/hey_where_are_the_weights_for_minimax_m21/ | false | false | self | 14 | null |
Two new 12B finetunes for adventure, role play and writing | 89 | This one was **cooking for \~4 months**. I'll give the TL;DR for each model here; for full details, check the model cards:
**Impish\_Bloodmoon\_12B** 😈
1. Frontier-adjacent like capabilities, now locally available in 12B! (Stats, items, traits triggering, and so much more).
2. **Very strong theory of mind!**
3. Well over **1B** tokens trained!
4. **Fallout & Morrowind** fandom refined!
5. Heat turned to **11**!
6. Additional languages added: Japanese, Hebrew, Russian.
7. 1-shot JSON roleplay datasets! Escape velocity reached! (even for those who can't run DSV3 \\ Kimi).
8. Less positivity bias , all lessons from the successful Negative\_LLAMA\_70B style of data learned & integrated, with serious upgrades added — and it shows! (Note: if this bites you a bit too hard, try Angelic\_Eclipse\_12B. 👼)
9. Reduced slop for both roleplay and creative tasks.
\---
**Angelic\_Eclipse\_12B** 👼
Very similar capabilities to the above, but:
1. **Reactions realism**. It is meant to reflect real-life behaviour accurately
2. **Slow burn**
3. Powerful 'vanilla assistant'
The models are **available on HuggingFace**:
[https://huggingface.co/SicariusSicariiStuff/Impish\_Bloodmoon\_12B](https://huggingface.co/SicariusSicariiStuff/Impish_Bloodmoon_12B)
[https://huggingface.co/SicariusSicariiStuff/Angelic\_Eclipse\_12B](https://huggingface.co/SicariusSicariiStuff/Angelic_Eclipse_12B) | 2025-12-23T16:52:08 | https://www.reddit.com/r/LocalLLaMA/comments/1ptytig/two_new_12b_finetunes_for_adventure_role_play_and/ | Sicarius_The_First | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptytig | false | null | t3_1ptytig | /r/LocalLLaMA/comments/1ptytig/two_new_12b_finetunes_for_adventure_role_play_and/ | false | false | self | 89 | {'enabled': False, 'images': [{'id': '-y_gD2boHGKwiuYZQC1oyv5gT5X6ChOYJf0QXzF6V3Y', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/-y_gD2boHGKwiuYZQC1oyv5gT5X6ChOYJf0QXzF6V3Y.png?width=108&crop=smart&auto=webp&s=caf82911ae64735c109917fe7f749aadab233bc0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/-y_gD2boHGKwiuYZQC1oyv5gT5X6ChOYJf0QXzF6V3Y.png?width=216&crop=smart&auto=webp&s=cda187fa80c3347139ccd96d4dcd9ed2ae0179aa', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/-y_gD2boHGKwiuYZQC1oyv5gT5X6ChOYJf0QXzF6V3Y.png?width=320&crop=smart&auto=webp&s=c0e2ce2abdd7f2c8f97a8421dc9796f06b97ecc9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/-y_gD2boHGKwiuYZQC1oyv5gT5X6ChOYJf0QXzF6V3Y.png?width=640&crop=smart&auto=webp&s=939b91b022f99ad08439631c27ecdb937e837935', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/-y_gD2boHGKwiuYZQC1oyv5gT5X6ChOYJf0QXzF6V3Y.png?width=960&crop=smart&auto=webp&s=5c84852db9f145ba85a81e412c5c9948a27a5075', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/-y_gD2boHGKwiuYZQC1oyv5gT5X6ChOYJf0QXzF6V3Y.png?width=1080&crop=smart&auto=webp&s=8ab787ef6e8c335b7b9a84735844cf22bff9a1e9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/-y_gD2boHGKwiuYZQC1oyv5gT5X6ChOYJf0QXzF6V3Y.png?auto=webp&s=75240102410eb975dc9812a20c9b0d8fdeee51ec', 'width': 1200}, 'variants': {}}]} |
Testing Tinyllama with Discord Bot | 0 | I have recently had some success using TinyLlama strictly for Q&A as a command for my Discord bot. Has anyone tested other LLMs with Discord bots? For asking it to define words and concepts, I feel it is perfect for Discord bots. Fast, can be concise. I'm looking to upgrade the model soon for sure. Just learning along the way.
[https://youtu.be/yznxRKrtsWs?si=qL8aoVUug1Hrb8sh](https://youtu.be/yznxRKrtsWs?si=qL8aoVUug1Hrb8sh) | 2025-12-23T16:38:38 | https://www.reddit.com/r/LocalLLaMA/comments/1ptyh9w/testing_tinyllama_with_discord_bot/ | Individual-Light-188 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptyh9w | false | null | t3_1ptyh9w | /r/LocalLLaMA/comments/1ptyh9w/testing_tinyllama_with_discord_bot/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'I2qLu-b7QWvKEkYg22XOtSV7Q68TD37OetaDMG66p4w', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/I2qLu-b7QWvKEkYg22XOtSV7Q68TD37OetaDMG66p4w.jpeg?width=108&crop=smart&auto=webp&s=a29ad1908926ebd8b750a47532466e4f1a839139', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/I2qLu-b7QWvKEkYg22XOtSV7Q68TD37OetaDMG66p4w.jpeg?width=216&crop=smart&auto=webp&s=b97b4c7ba8e433eb0d0ac78faac0cb2c043f7d74', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/I2qLu-b7QWvKEkYg22XOtSV7Q68TD37OetaDMG66p4w.jpeg?width=320&crop=smart&auto=webp&s=b1a34a2432f6b8dd395d5cd8a72b737dc973c7b8', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/I2qLu-b7QWvKEkYg22XOtSV7Q68TD37OetaDMG66p4w.jpeg?auto=webp&s=c76796a57159aac38fa33d6f40efd9aca4c86603', 'width': 480}, 'variants': {}}]} |
The future of AI is safe and free for the people and the memory card could be.. a USB stick. | 1 | [removed] | 2025-12-23T16:31:30 | https://www.reddit.com/r/LocalLLaMA/comments/1ptyays/the_future_of_ai_is_safe_and_free_for_the_people/ | These_Management_429 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptyays | false | null | t3_1ptyays | /r/LocalLLaMA/comments/1ptyays/the_future_of_ai_is_safe_and_free_for_the_people/ | false | false | self | 1 | null |
Qwen released Qwen-Image-Edit-2511 — a major upgrade over 2509 | 223 | Hugging face: [https://huggingface.co/Qwen/Qwen-Image-Edit-2511](https://huggingface.co/Qwen/Qwen-Image-Edit-2511)
What’s new in 2511:
👥 Stronger multi-person consistency for group photos and complex scenes
🧩 Built-in popular community LoRAs — no extra tuning required
💡 Enhanced industrial & product design generation
🔒 Reduced image drift with dramatically improved character & identity consistency
📐 Improved geometric reasoning, including construction lines and structural edits
From identity-preserving portrait edits to high-fidelity multi-person fusion and practical engineering & design workflows, 2511 pushes image editing to the next level.
| 2025-12-23T16:24:27 | https://www.reddit.com/gallery/1pty4l1 | Difficult-Cap-7527 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1pty4l1 | false | null | t3_1pty4l1 | /r/LocalLLaMA/comments/1pty4l1/qwen_released_qwenimageedit2511_a_major_upgrade/ | false | false | 223 | null | |
Sam Altman and Big AI cannot be allowed to Silo this. | 1 | [removed] | 2025-12-23T16:23:23 | https://www.reddit.com/r/LocalLLaMA/comments/1pty3n4/sam_altman_and_big_ai_cannot_be_allowed_to_silo/ | These_Management_429 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pty3n4 | false | null | t3_1pty3n4 | /r/LocalLLaMA/comments/1pty3n4/sam_altman_and_big_ai_cannot_be_allowed_to_silo/ | false | false | self | 1 | null |
Intel x Nvidia Serpent Lake leaks as Strix Halo rival: capable CPU, RTX Rubin iGPU, 16x LPDDR6. | 62 | "These powerful RTX iGPUs are reportedly coming with Intel Serpent Lake. Described as Intel's response to AMD Strix Halo/ Zen 6 Medusa Halo APUs...
[...]
For the GPU chiplet, Intel is said to be partnering with Nvidia to use the latter's RTX Rubin GPU architecture, or a close variant, for integrated graphics. The iGPU could be based on the TSMC N3P process node, which is to be expected.
Moreover, the leaker suggests that the Serpent Lake APUs could also bring support for 16X LPDDR6 memory. This likely refers to Serpent Lake supporting 16 memory channels for increased bandwidth."
Potentially very interesting if nothing dethrones CUDA in the coming years and if Medusa Halo is disappointing from a bandwidth perspective. Of course, we can expect a prohibitive price and certainly a very late release given the current context.
Time will tell. | 2025-12-23T16:20:02 | https://www.notebookcheck.net/Intel-x-Nvidia-Serpent-Lake-leaks-as-Strix-Halo-rival-with-capable-CPU-and-big-GeForce-RTX-Rubin-iGPU.1190608.0.html | CYTR_ | notebookcheck.net | 1970-01-01T00:00:00 | 0 | {} | 1pty0kf | false | null | t3_1pty0kf | /r/LocalLLaMA/comments/1pty0kf/intel_x_nvidia_serpent_lake_leaks_as_strix_halo/ | false | false | default | 62 | {'enabled': False, 'images': [{'id': 'zDVzxhAxQ5w8_AjaN1AtROq_Cu1ubeOIqtKDbvjiOZA', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/zDVzxhAxQ5w8_AjaN1AtROq_Cu1ubeOIqtKDbvjiOZA.jpeg?width=108&crop=smart&auto=webp&s=7cabbbc48aa9129296b0f77197d6da98d2da8484', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/zDVzxhAxQ5w8_AjaN1AtROq_Cu1ubeOIqtKDbvjiOZA.jpeg?width=216&crop=smart&auto=webp&s=cc7b502228d65c6aa9c68fe5685853007da9ee14', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/zDVzxhAxQ5w8_AjaN1AtROq_Cu1ubeOIqtKDbvjiOZA.jpeg?width=320&crop=smart&auto=webp&s=804ed058f197af05a801be550e2d22dce322e906', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/zDVzxhAxQ5w8_AjaN1AtROq_Cu1ubeOIqtKDbvjiOZA.jpeg?width=640&crop=smart&auto=webp&s=71ee57553bd88813a2db74fe1c6fa688b0b6458a', 'width': 640}, {'height': 720, 'url': 'https://external-preview.redd.it/zDVzxhAxQ5w8_AjaN1AtROq_Cu1ubeOIqtKDbvjiOZA.jpeg?width=960&crop=smart&auto=webp&s=a4d8fa62acaee7ef2d5011f11aaef4763ea403b5', 'width': 960}, {'height': 810, 'url': 'https://external-preview.redd.it/zDVzxhAxQ5w8_AjaN1AtROq_Cu1ubeOIqtKDbvjiOZA.jpeg?width=1080&crop=smart&auto=webp&s=c7bf026b67fc68f08373bfc1d9105a16f4bed455', 'width': 1080}], 'source': {'height': 1224, 'url': 'https://external-preview.redd.it/zDVzxhAxQ5w8_AjaN1AtROq_Cu1ubeOIqtKDbvjiOZA.jpeg?auto=webp&s=3ce885b7f8d840665161a6f1e2b7d60c7175e6eb', 'width': 1632}, 'variants': {}}]} |
AMA With Z.AI, The Lab Behind GLM-4.7 | 539 | Hi r/LocalLLaMA
Today we are hosting [Z.AI](http://Z.AI), the research lab behind GLM-4.7. We’re excited to have them open up and answer your questions directly.
Our participants today:
* Yuxuan Zhang, u/YuxuanZhangzR
* Qinkai Zheng, u/QinkaiZheng
* Aohan Zeng, u/Sengxian
* Zhenyu Hou, u/ZhenyuHou
* Xin Lv, u/davidlvxin
The AMA will run from 8 AM – 11 PM PST, with the [Z.AI](http://Z.AI) team continuing to follow up on questions over the next 48 hours. | 2025-12-23T16:04:23 | https://www.reddit.com/r/LocalLLaMA/comments/1ptxm3x/ama_with_zai_the_lab_behind_glm47/ | zixuanlimit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptxm3x | false | null | t3_1ptxm3x | /r/LocalLLaMA/comments/1ptxm3x/ama_with_zai_the_lab_behind_glm47/ | false | true | self | 539 | null |
Web-based GGUF recipe merger for GGUF-Tool-Suite | 7 | I’ve been working on making the GGUF-Tool-Suite more accessible, and as part of that effort I created a small web-based GGUF merger tool for GGUF-Tool-Suite recipe files:
👉 [https://gguf.thireus.com/quant\_downloader.html](https://gguf.thireus.com/quant_downloader.html)
It lets you load a GGUF recipe and automatically merge/download the referenced model parts, with verification and resume support.
For anyone not familiar with the GGUF-Tool-Suite: it’s a toolchain where you input your VRAM and RAM constraints, and it generates a fine-tuned GGUF recipe for advanced users who want precise, automated, dynamic GGUF quant production.
Issues and feedback can be reported here: [https://github.com/Thireus/GGUF-Tool-Suite/](https://github.com/Thireus/GGUF-Tool-Suite/) | 2025-12-23T15:55:32 | https://www.reddit.com/r/LocalLLaMA/comments/1ptxdtx/webbased_gguf_recipe_merger_for_gguftoolsuite/ | Thireus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptxdtx | false | null | t3_1ptxdtx | /r/LocalLLaMA/comments/1ptxdtx/webbased_gguf_recipe_merger_for_gguftoolsuite/ | false | false | self | 7 | null |
How to lower token API cost? | 0 | Is there any service or product which helps you to lower your cost and also smartly manage model inference APIs? Costs are killing me for my clients’s projects. | 2025-12-23T15:53:17 | https://www.reddit.com/r/LocalLLaMA/comments/1ptxbun/how_to_lower_token_api_cost/ | s3309 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptxbun | false | null | t3_1ptxbun | /r/LocalLLaMA/comments/1ptxbun/how_to_lower_token_api_cost/ | false | false | self | 0 | null |
i need to talk | 1 | I have an RTX 3050 8GB with a Ryzen 5 5500 and 16GB RAM, and I use LM Studio.
i run qwen3 4b right now but i want use qwen3 8b is any better 8b model for me(i tried deepseek r1 qwen3 8b model but i dont like it its run smooth like 38token/second but its think so unescarry and have halluncination) | 2025-12-23T15:49:21 | https://www.reddit.com/r/LocalLLaMA/comments/1ptx8bs/i_need_to_talk/ | Kerem-6030 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptx8bs | false | null | t3_1ptx8bs | /r/LocalLLaMA/comments/1ptx8bs/i_need_to_talk/ | false | false | self | 1 | null |
gemma-3-4b-it-Cognitive-Liberty | Attempting to fix the "Lobotomy Tax" | MMLU Marketing 85%, Politics 83% | 0% Refusal | 27 | Hi everyone,
I’ve been experimenting with a new fine-tuning approach to address a common issue with "uncensored" models: usually, when you strip away the safety rails (abliteration/unaligning), the model loses IQ points. It becomes compliant but incoherent, or just agrees with everything you say.
I wanted to see if I could create a model that has **zero refusals** but maintains (or improves) deep reasoning capabilities.
I used google/gemma-3-4b-it as the base and fine-tuned it on a custom synthetic dataset (**Cognitive Liberty V3**) focused heavily on philosophy, evolutionary game theory, and complex systems analysis, rather than just generic RP or chat data.
**The Result: gemma-3-4b-it-Cognitive-Liberty**
This is an aggressive fine-tune (**KL Divergence: 1.14**), which usually signals brain damage in a model. However, benchmarks suggest it actually specialized rather than degraded. It has turned into a bit of a "Humanities/Social Science" expert.
# 📊 Benchmark Highlights (MMLU 5-shot)
It matches the base model's overall MMLU (\~58%) but drastically shifts the distribution:
* 🧠 **Marketing:** 85.04% (This is abnormally high for a 4B model)
* 🏛️ **Government & Politics:** 83.94%
* 🗣️ **Sociology:** 77.61%
* 🧩 **Logical Fallacies:** 74.85%
* 🧠 **Psychology:** 79.63%
# The "Moral Anomaly" (Feature, not bug)
You'll see a low score on **Moral Scenarios** (30.61%).
Standard benchmarks expect binary, safe answers (e.g., "Is doing X bad? -> Yes"). Because this model is trained to analyze nuance (utilitarianism vs deontology), it often over-analyzes simple moral questions or refuses to give the "standard" safety answer. In my testing, this results in better conversation, even if it hurts the automated score.
# Usage
It’s a 4B model, so it runs on basically anything (even phones/consumer GPUs). I find it works best for:
* Debating controversial topics (it won't lecture you).
* Analyzing manipulation tactics/marketing.
* Creative writing where you need a "Machiavellian" character.
**Link to Model:**
[https://huggingface.co/AiAsistent/gemma-3-4b-it-Cognitive-Liberty](https://huggingface.co/AiAsistent/gemma-3-4b-it-Cognitive-Liberty)
I’m looking for feedback on how it handles logic puzzles and edge cases compared to the stock Gemma 3. Let me know if you break it. | 2025-12-23T15:41:53 | https://www.reddit.com/r/LocalLLaMA/comments/1ptx1pu/gemma34bitcognitiveliberty_attempting_to_fix_the/ | AlexHardy08 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptx1pu | false | null | t3_1ptx1pu | /r/LocalLLaMA/comments/1ptx1pu/gemma34bitcognitiveliberty_attempting_to_fix_the/ | false | false | self | 27 | {'enabled': False, 'images': [{'id': 'ef5BHuFlbzuVDdmZafUvgEcuuQ_CzfvY7WRZele6O_E', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ef5BHuFlbzuVDdmZafUvgEcuuQ_CzfvY7WRZele6O_E.png?width=108&crop=smart&auto=webp&s=9443da6b9777f898c8e6e35f1172f99546f6905b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ef5BHuFlbzuVDdmZafUvgEcuuQ_CzfvY7WRZele6O_E.png?width=216&crop=smart&auto=webp&s=55adec9e9e32edf8579a7581592600718fbd1d68', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ef5BHuFlbzuVDdmZafUvgEcuuQ_CzfvY7WRZele6O_E.png?width=320&crop=smart&auto=webp&s=6e4955f1d79a001e06ae2c4aad32a03efcf29e57', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ef5BHuFlbzuVDdmZafUvgEcuuQ_CzfvY7WRZele6O_E.png?width=640&crop=smart&auto=webp&s=d8f9e53a2b7297c3f4eee8cde1e08702246cdfcf', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ef5BHuFlbzuVDdmZafUvgEcuuQ_CzfvY7WRZele6O_E.png?width=960&crop=smart&auto=webp&s=866b8e72ba70b68e6c2c9bde8c7e12e40d93bb54', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ef5BHuFlbzuVDdmZafUvgEcuuQ_CzfvY7WRZele6O_E.png?width=1080&crop=smart&auto=webp&s=b56a98585aab8ef4768287695678d861993cff6f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ef5BHuFlbzuVDdmZafUvgEcuuQ_CzfvY7WRZele6O_E.png?auto=webp&s=9adb2158cad658eaf1f80f63f11c0fe671ce0838', 'width': 1200}, 'variants': {}}]} |
I don't understand people buying Mac Studio when NVIDIA exists | 0 | When there are beasts like the RTX 5090, RTX 6000 Pro, or even the DGX Spark on the market, why do people go and buy a Mac Studio?
Think about it. No CUDA support, and like 90% of the ML/AI ecosystem is built on CUDA. Raw GPU power is way behind NVIDIA. The PyTorch MPS backend is still not as mature as CUDA. Training is pretty much unusable on these machines.
The only advantage I can see is unified memory, being able to have 512GB RAM in a single device. But isn't that only useful for inference? Like loading and running large models such as 70B or 405B parameter models?
And here's another thing: the tokens-per-second numbers are very low compared to NVIDIA. So even if you're just doing inference, doesn't it run slowly? Why do people buy these systems?

But I see a lot of people buying these machines who probably know what they are doing. So is the problem me?
I have around 8k dollars budget. Should I get a Mac Studio or go with NVIDIA instead? | 2025-12-23T15:33:54 | https://www.reddit.com/r/LocalLLaMA/comments/1ptwujs/i_dont_understand_people_buying_mac_studio_when/ | Sensitive_Sweet_1850 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptwujs | false | null | t3_1ptwujs | /r/LocalLLaMA/comments/1ptwujs/i_dont_understand_people_buying_mac_studio_when/ | false | false | self | 0 | null |
llama.cpp -- when browsing Hugging Face, how do I know a particular model is GGUF or compatible with llama.cpp? And how do I run image-generation, TTS, etc. models on llama.cpp UI? | 0 | These are two separate questions, but because llama.cpp UI is so new, I feel there aren't many guides or resources for them.
So I've been trying to search for solutions, but it seems that they are either wrong (LLM generated posts) or the YouTube tutorials are outdated (llama.cpp UI is very recent anyway), so I feel a bit stuck.
Is there some list of GGUF models? What about image-generation models that are compatible? | 2025-12-23T15:20:14 | https://www.reddit.com/r/LocalLLaMA/comments/1ptwiiw/llamacpp_when_browsing_hugging_face_how_do_i_know/ | jinnyjuice | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptwiiw | false | null | t3_1ptwiiw | /r/LocalLLaMA/comments/1ptwiiw/llamacpp_when_browsing_hugging_face_how_do_i_know/ | false | false | self | 0 | null |
Could it be GLM 4.7 Air? | 80 | > Head of Global Brand & Partnerships @Zai_org
says:
> We have a new model coming soon. Stay tuned! 😝
https://x.com/louszbd/status/2003153617013137677
Maybe the Air version is next? | 2025-12-23T15:05:26 | https://www.reddit.com/r/LocalLLaMA/comments/1ptw5ol/could_it_be_glm_47_air/ | noiserr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptw5ol | false | null | t3_1ptw5ol | /r/LocalLLaMA/comments/1ptw5ol/could_it_be_glm_47_air/ | false | false | self | 80 | null |
Stop trying to fix context rot with bigger prompts. | 1 | [removed] | 2025-12-23T15:04:02 | https://www.reddit.com/r/LocalLLaMA/comments/1ptw4iu/stop_trying_to_fix_context_rot_with_bigger_prompts/ | REVenue_GENeratorium | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptw4iu | false | null | t3_1ptw4iu | /r/LocalLLaMA/comments/1ptw4iu/stop_trying_to_fix_context_rot_with_bigger_prompts/ | false | false | self | 1 | null |
A DIY option for the latest beefy LLMs | 6 | There have been a bunch of powerful new LLMs that are too big to use in even multiple consumer GPUs:
• GLM 4.7 358b
• Mimo V2 flash 310b
• Devstral 2 125b
• Minimax M2 229b
• Qwen3-Nemotron 235b a22b
Just to name a few. Even Strix Halo systems with their 128GB limit will struggle with most of them.
This reminds me of when everyone here was collecting RTX3090s to get more VRAM. However, models were smaller back then. Llama 70b was big and within reach of Dual 24GB GPUs at Q4.
I feel now that perhaps *dual* Strix Halo systems could replace these systems. (Related video: https://m.youtube.com/watch?v=0cIcth224hk ).
They are too slow for dense large models, but luckily the industry has moved towards MoE LLMs. The Ryzen AI Max+ APUs support 40 Gbit/s USB4/Thunderbolt 3 OOTB, so there is a networking option. Perhaps Linux will eventually add RDMA via Thunderbolt, like Apple has done with macOS 26.2 now.
One unsolved issue is the slow prompt processing speed. I‘m not sure if it‘s a driver issue or if the underlying hardware can‘t do it any faster. Thoughts? | 2025-12-23T14:50:32 | https://www.reddit.com/r/LocalLLaMA/comments/1ptvssj/a_diy_option_for_the_latest_beefy_llms/ | Zyj | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptvssj | false | null | t3_1ptvssj | /r/LocalLLaMA/comments/1ptvssj/a_diy_option_for_the_latest_beefy_llms/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'QLldEh6cHckh0zu3VOF5RuY9ywGiZCt_x-CXw1nKwvM', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/QLldEh6cHckh0zu3VOF5RuY9ywGiZCt_x-CXw1nKwvM.jpeg?width=108&crop=smart&auto=webp&s=723c429643a0665c386f4eb9342e3fff35a5b79c', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/QLldEh6cHckh0zu3VOF5RuY9ywGiZCt_x-CXw1nKwvM.jpeg?width=216&crop=smart&auto=webp&s=6d9ae5cd3c103a3a6f96625a5f20d4392e4b9fe6', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/QLldEh6cHckh0zu3VOF5RuY9ywGiZCt_x-CXw1nKwvM.jpeg?width=320&crop=smart&auto=webp&s=6be666f0103fd1f705f54f78d0ee69bc9405d6dc', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/QLldEh6cHckh0zu3VOF5RuY9ywGiZCt_x-CXw1nKwvM.jpeg?auto=webp&s=0433df90d3c4ad78c71525548e56c3e8e228be54', 'width': 480}, 'variants': {}}]} |
Best multilingual STT/ASR? | 1 | Mostly for Arabic/Hindi in addition to English, don't mind the size necessarily and it does not need to be real time. Would appreciate pointers! | 2025-12-23T14:35:33 | https://www.reddit.com/r/LocalLLaMA/comments/1ptvg5u/best_multilingual_sttasr/ | Mark__27 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptvg5u | false | null | t3_1ptvg5u | /r/LocalLLaMA/comments/1ptvg5u/best_multilingual_sttasr/ | false | false | self | 1 | null |
[Open Source] Built the first Local Stable Diffusion client using Kotlin Multiplatform (Android & Desktop) 🚀 | 5 | Hi everyone!
I wanted to share a free tool I created called Mine StableDiffusion. It allows you to run Stable Diffusion models locally on your phone (Android) or desktop without needing any subscriptions or cloud APIs. | 2025-12-23T14:32:00 | https://github.com/Onion99/KMP-MineStableDiffusion | Adventurous_Onion189 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1ptvdbn | false | null | t3_1ptvdbn | /r/LocalLLaMA/comments/1ptvdbn/open_source_built_the_first_local_stable/ | false | false | default | 5 | {'enabled': False, 'images': [{'id': 'LH8kNGJRCwPyroi6J7tuHePVU-rPOqfPK36TaMi7C0Q', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LH8kNGJRCwPyroi6J7tuHePVU-rPOqfPK36TaMi7C0Q.png?width=108&crop=smart&auto=webp&s=5068dc81eeb7a548014209e1b3b98e35bb7cd694', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/LH8kNGJRCwPyroi6J7tuHePVU-rPOqfPK36TaMi7C0Q.png?width=216&crop=smart&auto=webp&s=fe232545201ad7383427c4f2557e475c2a64e04f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/LH8kNGJRCwPyroi6J7tuHePVU-rPOqfPK36TaMi7C0Q.png?width=320&crop=smart&auto=webp&s=6da0a9edfaec70f9b43b3d5aa86ac9f913925fde', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/LH8kNGJRCwPyroi6J7tuHePVU-rPOqfPK36TaMi7C0Q.png?width=640&crop=smart&auto=webp&s=b631998a799a923a9aadbd172fac74a37fca7f46', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/LH8kNGJRCwPyroi6J7tuHePVU-rPOqfPK36TaMi7C0Q.png?width=960&crop=smart&auto=webp&s=872fd0f7156c6547002be5f7bae020d3809607c3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/LH8kNGJRCwPyroi6J7tuHePVU-rPOqfPK36TaMi7C0Q.png?width=1080&crop=smart&auto=webp&s=82ed7bffa3d2e83255e3977199eac1ee10939a87', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/LH8kNGJRCwPyroi6J7tuHePVU-rPOqfPK36TaMi7C0Q.png?auto=webp&s=a7a50c4658094fb45ac1fa1d6441e7b25ff4f7c9', 'width': 1200}, 'variants': {}}]} |
Container Apps | 1 | I’m playing around with hosting some small LLMs in Azure Container Apps. Qwen 30b for example.
I’m curious, is anyone else doing this or something similar in one of the other cloud providers? I’m wondering how many nodes/resources are realistically needed to serve let’s say 100 requests at a time with average context length of 30k tokens. Are there any good benchmarking tools for testing infra like this? | 2025-12-23T14:31:06 | https://www.reddit.com/r/LocalLLaMA/comments/1ptvcml/container_apps/ | thepetek | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptvcml | false | null | t3_1ptvcml | /r/LocalLLaMA/comments/1ptvcml/container_apps/ | false | false | self | 1 | null |
Anyone using the Windsurf plugin with local or hybrid models? | 4 | I’ve been experimenting more with local and hybrid LLM setups and was curious how the windsurf plugin behaves when model quality isn’t top-tier. Some tools really fall apart once latency or reasoning drops.
In JetBrains, Sweep AI has held up better for me with weaker models because it relies more on IDE context. Has anyone here tried Windsurf with local models? | 2025-12-23T14:30:10 | https://www.reddit.com/r/LocalLLaMA/comments/1ptvbsg/anyone_using_the_windsurf_plugin_with_local_or/ | adriano26 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptvbsg | false | null | t3_1ptvbsg | /r/LocalLLaMA/comments/1ptvbsg/anyone_using_the_windsurf_plugin_with_local_or/ | false | false | self | 4 | null |
Runtime optimizing llama.cpp | 15 | You often hear the criticism that AI consumes too much energy and that a bunch of new nuclear power plants will have to be built to operate the many AI models.
One approach to refute this is to optimize the algorithms so that they run faster on the same hardware.
And I have now shown that llama.cpp and ggml also have potential when it comes to runtime optimization.
I optimized 2 of the AVX2 functions inside "ggml/src/ggml-cpu/arch/x86/repack.cpp", and now the performance of the llama-bench tests is **up to 20% better** than the implementation on master.
I think there is a lot more potential for optimizations in ggml. First I didn't spend too much time for these examples and second, there are many more cpu/gpu architectures and model types. | 2025-12-23T14:28:13 | go-nz-ale-s | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ptva10 | false | null | t3_1ptva10 | /r/LocalLLaMA/comments/1ptva10/runtime_optimizing_llamacpp/ | false | false | default | 15 | {'enabled': True, 'images': [{'id': '0kehku3moy8g1', 'resolutions': [{'height': 50, 'url': 'https://preview.redd.it/0kehku3moy8g1.png?width=108&crop=smart&auto=webp&s=18791c5071dbea6af29eff8d3d6d0a2a2f58009a', 'width': 108}, {'height': 101, 'url': 'https://preview.redd.it/0kehku3moy8g1.png?width=216&crop=smart&auto=webp&s=3c6e3ba403ba678e8aa28d37ca8aaf6e67cf59c0', 'width': 216}, {'height': 150, 'url': 'https://preview.redd.it/0kehku3moy8g1.png?width=320&crop=smart&auto=webp&s=1ee662e731ad938e432bb695fe6cfc881acb3e12', 'width': 320}, {'height': 301, 'url': 'https://preview.redd.it/0kehku3moy8g1.png?width=640&crop=smart&auto=webp&s=2ce8fd6c567dba927f40c4eee5076ecd6654d108', 'width': 640}, {'height': 451, 'url': 'https://preview.redd.it/0kehku3moy8g1.png?width=960&crop=smart&auto=webp&s=53b07d8047f626ad82a0eedd231a2c26d256dbec', 'width': 960}, {'height': 508, 'url': 'https://preview.redd.it/0kehku3moy8g1.png?width=1080&crop=smart&auto=webp&s=6f9a168c39cba721d433cd8edefa6a85710d984a', 'width': 1080}], 'source': {'height': 1018, 'url': 'https://preview.redd.it/0kehku3moy8g1.png?auto=webp&s=cac2fa07084e17d5031f3ac6aff08ae99e33c0e6', 'width': 2163}, 'variants': {}}]} | |
A client-side text scrubber for your prompts (No Server / Offline Capable) | 1 | Hey all, I built a small utility to sanitize text before feeding it to LLMs (whether local or hosted).
It uses local regex patterns to strip PII, Credit Cards, and Infrastructure data (IPv6/MAC addresses) inside the browser. Since we all care about data leaking into training sets, I thought this might be useful for your workflows.
It also has a "Squeeze" mode to fit more context into smaller context windows.
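To give a feel for the kind of patterns involved, here is a simplified sketch of regex-based redaction in Python. These are illustrative patterns only, not the ones the site actually ships:

```python
# Simplified illustration of regex-based scrubbing (not the tool's actual patterns).
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "IPV4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "MAC": re.compile(r"\b(?:[0-9A-Fa-f]{2}[:-]){5}[0-9A-Fa-f]{2}\b"),
}

def scrub(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)  # swap each match for a placeholder tag
    return text

print(scrub("Mail ops@example.com from 192.168.1.10, card 4111 1111 1111 1111"))
```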
**Tech Stack:** Next.js 15 + WebAssembly (for the OCR). No backend processing.
**Link:** [https://cleanmyprompt.io](https://cleanmyprompt.io/)
Feedback on the regex patterns is welcome! | 2025-12-23T14:25:02 | https://cleanmyprompt.io | Fit_Highlight_1857 | cleanmyprompt.io | 1970-01-01T00:00:00 | 0 | {} | 1ptv7c7 | false | null | t3_1ptv7c7 | /r/LocalLLaMA/comments/1ptv7c7/a_clientside_text_scrubber_for_your_prompts_no/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': '3pU7xfWo_UsEd8wy5qtXOxmfJUR1C8oOQEcrhJBeGGs', 'resolutions': [{'height': 43, 'url': 'https://external-preview.redd.it/3pU7xfWo_UsEd8wy5qtXOxmfJUR1C8oOQEcrhJBeGGs.png?width=108&crop=smart&auto=webp&s=8d6604ad877d9c9972d47c46754ae3dd924ff246', 'width': 108}, {'height': 86, 'url': 'https://external-preview.redd.it/3pU7xfWo_UsEd8wy5qtXOxmfJUR1C8oOQEcrhJBeGGs.png?width=216&crop=smart&auto=webp&s=550d0b6493b68a78397387a5a8b16919479b0a30', 'width': 216}, {'height': 127, 'url': 'https://external-preview.redd.it/3pU7xfWo_UsEd8wy5qtXOxmfJUR1C8oOQEcrhJBeGGs.png?width=320&crop=smart&auto=webp&s=af87c8864c66e04725cce1b0dc1e785f4ec2f939', 'width': 320}, {'height': 255, 'url': 'https://external-preview.redd.it/3pU7xfWo_UsEd8wy5qtXOxmfJUR1C8oOQEcrhJBeGGs.png?width=640&crop=smart&auto=webp&s=771823998377ce7ea9b653da1149e2612918f38d', 'width': 640}, {'height': 383, 'url': 'https://external-preview.redd.it/3pU7xfWo_UsEd8wy5qtXOxmfJUR1C8oOQEcrhJBeGGs.png?width=960&crop=smart&auto=webp&s=08d9e383bebb74eb150607c52c439dd2e04ed3a9', 'width': 960}, {'height': 431, 'url': 'https://external-preview.redd.it/3pU7xfWo_UsEd8wy5qtXOxmfJUR1C8oOQEcrhJBeGGs.png?width=1080&crop=smart&auto=webp&s=d8e14f20f8f5862f833aa837f4331e20e4207c6e', 'width': 1080}], 'source': {'height': 656, 'url': 'https://external-preview.redd.it/3pU7xfWo_UsEd8wy5qtXOxmfJUR1C8oOQEcrhJBeGGs.png?auto=webp&s=a6b65b6ff82988461a1104b0e2aa986f949b6f37', 'width': 1642}, 'variants': {}}]} |
Headline: Beyond Standard Prompts: How I achieved 99.7% Logical Consistency in AI Agents through a Sovereign 188-Module Kernel. | 0 | Content:
"I’ve spent months reverse-engineering AI logic to solve the biggest pain point in the industry: Hallucination.
Most AI systems drift after long conversations, but I've developed the KENSEI V6.1 Architecture. By utilizing a proprietary TAVM (Thought Accuracy & Vector Matching) module, my system maintains a 99.7% Accuracy Rate—even in complex business logic.
Key Hard Facts (V6.1):
C_{stab} (Context Stability): 99.2%
Efficiency: 80.4% Reduction in Token Waste
Architecture: Sovereign Multi-Agent Orchestration (188 Modules)
I am now offering a Limited Licensing Deal for the Core Kernel or specific modules for enterprises looking to build their own independent, high-precision AI. | 2025-12-23T14:07:31 | https://www.reddit.com/r/LocalLLaMA/comments/1ptusso/headline_beyond_standard_prompts_how_i_achieved/ | KENSEIHYBRID | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptusso | false | null | t3_1ptusso | /r/LocalLLaMA/comments/1ptusso/headline_beyond_standard_prompts_how_i_achieved/ | false | false | self | 0 | null |
I'm very satisfied with MiniMax 2.1 on Claude Code! - My Experience | 17 | I'm just taking the time to share my experience (a couple of hours) of using MiniMax m2.1 on Claude Code. I'm using NanoGpt (not affiliated at all) so I'm not sure if the model they use is quantized or not (probably haven't had the time to quantize it yet, since it is so new).
Anyway, this model rips on Claude Code! I've tried GLM 4.6, 4.7, Kimi K2, MiniMax M2... and most of those did not work well. I had to type "continue" constantly, to the point that it was just easier to use other models on [continue.dev](http://continue.dev) directly. Not the case with MiniMax M2.1! I've been working nonstop for a few hours and, honestly, didn't miss Sonnet 4.5 even for a moment. Opus 4.5 is still better, but M2.1 is truly impressive for my usage so far. With the tools and all my setup available within CC, I couldn't be happier to have this thing working so well... and for a couple of bucks a month!
Just writing to encourage others to try it, and please share your experience with other providers as well. | 2025-12-23T14:07:22 | https://www.reddit.com/r/LocalLLaMA/comments/1ptusnm/im_very_satisfied_with_minimax_21_on_claude_code/ | FigZestyclose7787 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptusnm | false | null | t3_1ptusnm | /r/LocalLLaMA/comments/1ptusnm/im_very_satisfied_with_minimax_21_on_claude_code/ | false | false | self | 17 | {'enabled': False, 'images': [{'id': '7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=108&crop=smart&auto=webp&s=efe307f51ff2874b18960bc89ca5a18a1b551442', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=216&crop=smart&auto=webp&s=3f5d82a3bc41c4fa63c2939d1e2fdc1db75de463', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=320&crop=smart&auto=webp&s=c204a4e04e7cbc078774e051a9e247b58ad6b572', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=640&crop=smart&auto=webp&s=5b6c9e3fb05aa6cf2a05f0e920367ffac32c6448', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=960&crop=smart&auto=webp&s=bd57ab7ea83274fea8ece5793f2200a0ac6a7f02', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=1080&crop=smart&auto=webp&s=5cdafbd3026c11883a519aa200677fb58be16d11', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?auto=webp&s=30396441627641135814de7d733ce94b9e7795dc', 'width': 2400}, 'variants': {}}]} |
Created a DSL/control layer for multi-agent workflows - feedback welcome | 0 | So for the past 6 months I've been working on how to get LLMs to communicate with each other in a way that actually keeps things focused.
I'm not going to get AI to write my intro, so ironically it's gonna be a lot more verbose than what I've created. But essentially, it's:
* a shorthand that LLMs can use to express intent
* an MCP server that all documents get submitted through, which puts them into a strict format (like an auto-formatter/spellchecker more than a reasoning engine)
* system-agnostic - so anything with MCP access can use it
* agents only need a small “OCTAVE literacy” skill (458 tokens). If you want them to fully understand and reason about the format, the mastery add-on is 790 tokens.
I’ve been finding this genuinely useful in my own agentic coding setup, which is why I’m sharing it.
What it essentially means is that agents don't write to your system directly; they submit to the MCP server, so all docs are created in a condensed form (it's not really compression, although it often reduces size significantly) and with consistent formatting. LLMs don't need to learn all the rules of the syntax or the formatting, as the server does that for them. These are patterns they all know, and it uses mythology as a sort of semantic zip file to condense things. However, the compression/semantic stuff is a side note; it's more about making docs durable, reusable, and easier to reference.
I'd welcome anyone just cloning the repo and asking their AI model - would this be of use and why?
Repo still being tidied from old versions, but it should be pretty clear now.
Open to any suggestions to improve.
[https://github.com/elevanaltd/octave](https://github.com/elevanaltd/octave) | 2025-12-23T13:57:32 | https://www.reddit.com/r/LocalLLaMA/comments/1ptukgi/created_a_dslcontrol_layer_for_multiagent/ | sbuswell | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptukgi | false | null | t3_1ptukgi | /r/LocalLLaMA/comments/1ptukgi/created_a_dslcontrol_layer_for_multiagent/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'oxormfJhni1tXNvIH5prk-bPeoMZrAxEgdaEudhl0AU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/oxormfJhni1tXNvIH5prk-bPeoMZrAxEgdaEudhl0AU.png?width=108&crop=smart&auto=webp&s=0c8a7150451ef3a3ba900d241f4689b549c04026', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/oxormfJhni1tXNvIH5prk-bPeoMZrAxEgdaEudhl0AU.png?width=216&crop=smart&auto=webp&s=1375408356c023aa7dedbdc8885a688a02a59308', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/oxormfJhni1tXNvIH5prk-bPeoMZrAxEgdaEudhl0AU.png?width=320&crop=smart&auto=webp&s=4614ec9371c3b97d1b108b4634570434576f5d93', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/oxormfJhni1tXNvIH5prk-bPeoMZrAxEgdaEudhl0AU.png?width=640&crop=smart&auto=webp&s=c33dfc076d3e740e6e1089213ce86d069ebb6068', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/oxormfJhni1tXNvIH5prk-bPeoMZrAxEgdaEudhl0AU.png?width=960&crop=smart&auto=webp&s=174f7bf5446ccbe93e863654f34f55cb3bfb0a8d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/oxormfJhni1tXNvIH5prk-bPeoMZrAxEgdaEudhl0AU.png?width=1080&crop=smart&auto=webp&s=0421a278fb42eda497ec74c6faa1b6634876827b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/oxormfJhni1tXNvIH5prk-bPeoMZrAxEgdaEudhl0AU.png?auto=webp&s=b8406be8346dfdfd9a271b9c2b9d143305ba4324', 'width': 1200}, 'variants': {}}]} |
SPARKLE Announces Intel® Arc Pro B60 Series Now Available | 0 | 2025-12-23T13:56:41 | https://www.sparkle.com.tw/en/sparkle-news/view/93E0b95ea8A0 | reps_up | sparkle.com.tw | 1970-01-01T00:00:00 | 0 | {} | 1ptujsp | false | null | t3_1ptujsp | /r/LocalLLaMA/comments/1ptujsp/sparkle_announces_intel_arc_pro_b60_series_now/ | false | false | default | 0 | null | |
Any uncensored image generation models in LM Studio? | 0 | As the title. Are there any uncensored models that generate images within lm studio? If not what should I look at, I want something I can run locally and is uncensored. Cheers | 2025-12-23T13:47:28 | https://www.reddit.com/r/LocalLLaMA/comments/1ptucgk/any_uncensored_image_generation_models_in_lm/ | damoC1988 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptucgk | false | null | t3_1ptucgk | /r/LocalLLaMA/comments/1ptucgk/any_uncensored_image_generation_models_in_lm/ | false | false | self | 0 | null |
Research: Perceived rate of AI progress | 1 | [removed] | 2025-12-23T13:39:26 | https://www.reddit.com/r/LocalLLaMA/comments/1ptu63l/research_perceived_rate_of_ai_progress/ | t-_-ji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptu63l | false | null | t3_1ptu63l | /r/LocalLLaMA/comments/1ptu63l/research_perceived_rate_of_ai_progress/ | false | false | self | 1 | null |
MNIST handwritten digit recognition, independently completed by Kimi K2 | 8 | As a beginner in machine learning, it feels amazing that a neural network has implemented another neural network by itself.
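For context on what a minimal from-scratch digit classifier looks like, here is an illustrative toy version in NumPy on scikit-learn's built-in 8x8 digits set (my own sketch, not Kimi K2's actual code):

```python
# Toy from-scratch digit classifier (illustrative only, not Kimi K2's output).
# Uses scikit-learn's built-in 8x8 digits so it runs without downloading MNIST.
import numpy as np
from sklearn.datasets import load_digits

X, y = load_digits(return_X_y=True)
X = X / 16.0                          # scale pixel values to [0, 1]
Y = np.eye(10)[y]                     # one-hot labels
W = np.zeros((X.shape[1], 10))        # a single softmax layer
b = np.zeros(10)

for _ in range(200):                  # plain batch gradient descent
    logits = X @ W + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    grad = (p - Y) / len(X)           # cross-entropy gradient w.r.t. logits
    W -= 0.5 * (X.T @ grad)
    b -= 0.5 * grad.sum(axis=0)

print("train accuracy:", (np.argmax(X @ W + b, axis=1) == y).mean())
```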
https://preview.redd.it/3y4nu42wiy8g1.png?width=949&format=png&auto=webp&s=3651c1cfbba9adef613055b3406da64c51615059
[Demo](https://5vbqmgatvcymq.ok.kimi.link/) | 2025-12-23T13:39:12 | https://www.reddit.com/r/LocalLLaMA/comments/1ptu5wm/mnist_handwritten_digit_recognition_independently/ | InternationalAsk1490 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptu5wm | false | null | t3_1ptu5wm | /r/LocalLLaMA/comments/1ptu5wm/mnist_handwritten_digit_recognition_independently/ | false | false | 8 | null | |
A reproducible workflow for multi-file bug fixing with AI agents (Recon → Plan → Patch → Verify) | 1 | [removed] | 2025-12-23T13:35:09 | https://www.reddit.com/r/LocalLLaMA/comments/1ptu2p9/a_reproducible_workflow_for_multifile_bug_fixing/ | Unique_Can_2569 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptu2p9 | false | null | t3_1ptu2p9 | /r/LocalLLaMA/comments/1ptu2p9/a_reproducible_workflow_for_multifile_bug_fixing/ | false | false | self | 1 | null |
Quick survey about AI-assisted RPGs | 1 | [removed] | 2025-12-23T13:30:49 | https://www.reddit.com/r/LocalLLaMA/comments/1pttzd6/quick_survey_about_aiassisted_rpgs/ | Nice_State_1990 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pttzd6 | false | null | t3_1pttzd6 | /r/LocalLLaMA/comments/1pttzd6/quick_survey_about_aiassisted_rpgs/ | false | false | self | 1 | null |
How to run the GLM-4.7 model locally on your own device (guide) | 163 | * GLM-4.7 is Z.ai’s latest thinking model, delivering stronger coding, agent, and chat performance than GLM-4.6
* It achieves SOTA performance on SWE-bench (73.8%, +5.8), SWE-bench Multilingual (66.7%, +12.9), and Terminal Bench 2.0 (41.0%, +16.5).
* The full 355B parameter model requires **400GB** of disk space, while the Unsloth Dynamic 2-bit GGUF reduces the size to **134GB** (-**75%**).
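As a rough sketch of the download step, something like the snippet below works with `huggingface_hub`. The repo ID and quant filename pattern are assumptions based on Unsloth's usual naming, so check the linked guide for the exact names:

```python
# Sketch: fetch the dynamic 2-bit GGUF shards for local use.
# repo_id and the filename pattern are assumptions; verify them in the guide.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="unsloth/GLM-4.7-GGUF",
    allow_patterns=["*UD-Q2_K_XL*"],   # assumed pattern for the ~134GB dynamic 2-bit quant
    local_dir="GLM-4.7-GGUF",
)
# Point llama.cpp's server or CLI at the downloaded .gguf shards afterwards.
```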
Official blog post - [https://docs.unsloth.ai/models/glm-4.7](https://docs.unsloth.ai/models/glm-4.7) | 2025-12-23T13:23:03 | Dear-Success-1441 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ptttcm | false | null | t3_1ptttcm | /r/LocalLLaMA/comments/1ptttcm/how_to_run_the_glm47_model_locally_on_your_own/ | false | false | default | 163 | {'enabled': True, 'images': [{'id': 'b995ei5mfy8g1', 'resolutions': [{'height': 115, 'url': 'https://preview.redd.it/b995ei5mfy8g1.png?width=108&crop=smart&auto=webp&s=b20d2fd167da2b5ec2a43e7dce29cf3dd7dda6e3', 'width': 108}, {'height': 230, 'url': 'https://preview.redd.it/b995ei5mfy8g1.png?width=216&crop=smart&auto=webp&s=6e2cf7a1b27de8e70b20e559b4b5226623316926', 'width': 216}, {'height': 341, 'url': 'https://preview.redd.it/b995ei5mfy8g1.png?width=320&crop=smart&auto=webp&s=33f8a8c3751467f1aaa01cdfb6b72e94017ade39', 'width': 320}, {'height': 683, 'url': 'https://preview.redd.it/b995ei5mfy8g1.png?width=640&crop=smart&auto=webp&s=f4519336dd309d77b0ea2caf5ea5cb6af6df8bf4', 'width': 640}, {'height': 1025, 'url': 'https://preview.redd.it/b995ei5mfy8g1.png?width=960&crop=smart&auto=webp&s=e0a6da65d0c0aee482800b20b825d179d84b9f35', 'width': 960}, {'height': 1153, 'url': 'https://preview.redd.it/b995ei5mfy8g1.png?width=1080&crop=smart&auto=webp&s=96339bd41cc03abf2828d8849fd6be8be7df3a02', 'width': 1080}], 'source': {'height': 2735, 'url': 'https://preview.redd.it/b995ei5mfy8g1.png?auto=webp&s=99e4c9c44990c6609780f2a497fb567d37a8da83', 'width': 2560}, 'variants': {}}]} | |
Deepseek V3 Full inference locally | 0 | Hello experts,
I’m exploring how to run **DeepSeek-V3 locally** for **multiple concurrent users** (enterprise-style setup, similar to large AI platforms). I’d like your guidance on the **best architecture and setup available today**, along with a rough budget for each proposed setup.
Thank you! | 2025-12-23T13:04:34 | https://www.reddit.com/r/LocalLLaMA/comments/1pttfkh/deepseek_v3_full_inference_locally/ | zsupportAi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pttfkh | false | null | t3_1pttfkh | /r/LocalLLaMA/comments/1pttfkh/deepseek_v3_full_inference_locally/ | false | false | self | 0 | null |
Trying to make an LLM stop | 0 | Trying to make an LLM stop. Turns out it really doesn’t like stopping, lol. Outputs keep looking plausible, so everything just moves on to the next step.

Later it’s obvious: yeah… this is where we should’ve stopped. But in the moment there’s no signal.

So for now it’s mostly human gut feeling, or postmortems after things break. Anyone figured out a reliable way to make an LLM workflow actually stop when it should?
| 2025-12-23T13:03:40 | https://www.reddit.com/r/LocalLLaMA/comments/1pttexp/trying_to_make_an_llm_stop/ | Echo_OS | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pttexp | false | null | t3_1pttexp | /r/LocalLLaMA/comments/1pttexp/trying_to_make_an_llm_stop/ | false | false | self | 0 | null |
[PROJECT] I updated EntropyGuard a CLI tool to deduplicate RAG data locally on CPU before embedding. Saves ~40% tokens, handles 100GB+ files, and just got Checkpointing. (Open Source) | 27 | Hey everyone,
Like many of you, I've been building local RAG pipelines and got tired of the "garbage in, garbage out" problem. I noticed my vector database (and context window) was often bloated with duplicate chunks, things like recurring headers/footers in PDFs, identical error logs, or scraped pages that are 99% the same.
This does two bad things:
1. **Pollutes Retrieval:** Your `top-k` slots get filled with 5 variations of the same sentence, pushing out unique/relevant info.
2. **Wastes Compute:** You end up embedding (and storing) junk.
I didn't want to spin up a heavy vector DB cluster just to clean data, and I definitely didn't want to send my raw data to an external API for processing. I needed something that runs on my CPU so my GPU is free for inference.
So I built **EntropyGuard**.
It’s a standalone CLI tool designed to filter your datasets *before* ingestion.
**How it works (The "Hybrid" approach):**
1. **Stage 1 (Fast):** It runs a fast hash (`xxhash`) on the normalized text. This kills 100% identical duplicates instantly without touching neural networks.
2. **Stage 2 (Smart):** The survivors go through a lightweight embedding model (default: `all-MiniLM-L6-v2`) and FAISS to find *semantic* duplicates.
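To make the two-stage idea concrete, here is a rough standalone sketch of the approach (illustrative only, heavily simplified compared to what the tool actually does):

```python
# Illustrative two-stage dedup: exact hash first, then semantic similarity.
# Simplified sketch of the approach, not EntropyGuard's actual code.
import xxhash
import faiss
from sentence_transformers import SentenceTransformer

def dedup(texts, threshold=0.95):
    # Stage 1: drop exact duplicates via a fast hash of the normalized text
    seen, survivors = set(), []
    for t in texts:
        h = xxhash.xxh64(" ".join(t.lower().split()).encode("utf-8")).hexdigest()
        if h not in seen:
            seen.add(h)
            survivors.append(t)

    # Stage 2: drop near-duplicates via embeddings + cosine similarity in FAISS
    model = SentenceTransformer("all-MiniLM-L6-v2")
    emb = model.encode(survivors, normalize_embeddings=True).astype("float32")
    index = faiss.IndexFlatIP(emb.shape[1])    # inner product == cosine on unit vectors
    kept = []
    for text, vec in zip(survivors, emb):
        vec = vec.reshape(1, -1)
        if index.ntotal and index.search(vec, 1)[0][0][0] >= threshold:
            continue                            # too close to something already kept
        index.add(vec)
        kept.append(text)
    return kept

print(dedup(["hello world", "Hello   WORLD", "a completely different sentence"]))
```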
**I just pushed v1.22 today with features for larger local datasets:**
* **OOM Safe:** It uses chunked processing and Polars LazyFrames. I’ve tested it on datasets larger than my RAM, and it doesn't crash.
* **Checkpoint & Resume:** If you're processing a massive dataset (e.g., 50GB) and your script dies at 90%, you can run `--resume`. It picks up exactly where it left off.
* **Unix Pipes:** It plays nice with bash. You can just: `cat data.jsonl | entropyguard --dedup-threshold 0.95 > clean.jsonl`
**Stats:** On my machine, I'm seeing about \~6k rows/sec for the hashing stage. It tells you exactly how many "Tokens" you saved at the end of the run, which is satisfying to watch.
**License:** MIT. It's open source and runs entirely offline.
**Link:**[https://github.com/DamianSiuta/entropyguard](https://github.com/DamianSiuta/entropyguard)
I’d love some feedback on the logic or performance. If you manage to break it with a weird dataset, let me know in the issues. If you find it useful for your local stack, a star on GitHub is always appreciated!
Cheers! | 2025-12-23T12:51:02 | Low-Flow-6572 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ptt5xj | false | null | t3_1ptt5xj | /r/LocalLLaMA/comments/1ptt5xj/project_i_updated_entropyguard_a_cli_tool_to/ | false | false | default | 27 | {'enabled': True, 'images': [{'id': 'j1j9f7j6ay8g1', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/j1j9f7j6ay8g1.png?width=108&crop=smart&auto=webp&s=c711e10e56e752ae80ba41fc15688aa678f9d616', 'width': 108}, {'height': 113, 'url': 'https://preview.redd.it/j1j9f7j6ay8g1.png?width=216&crop=smart&auto=webp&s=f605c99538504eb291830b5e58a10ea05e2897a3', 'width': 216}, {'height': 167, 'url': 'https://preview.redd.it/j1j9f7j6ay8g1.png?width=320&crop=smart&auto=webp&s=6a12177b6c32e47fd33f2b702d032f9856ed9f60', 'width': 320}, {'height': 335, 'url': 'https://preview.redd.it/j1j9f7j6ay8g1.png?width=640&crop=smart&auto=webp&s=365953652a5dbae69ef2b33ac18bd26caabdf93e', 'width': 640}, {'height': 503, 'url': 'https://preview.redd.it/j1j9f7j6ay8g1.png?width=960&crop=smart&auto=webp&s=6b36414418b8eb6c166db49870bf63e1f72e065d', 'width': 960}, {'height': 566, 'url': 'https://preview.redd.it/j1j9f7j6ay8g1.png?width=1080&crop=smart&auto=webp&s=a78d25445b7fbf2ddca789029851d8a5238d1fb2', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://preview.redd.it/j1j9f7j6ay8g1.png?auto=webp&s=1d48537b4627c8e64e3fcde59179f377a25fda72', 'width': 2404}, 'variants': {}}]} | |
I integrated llama.cpp's new router mode into llamactl with web UI support | 17 | I've shared my project [llamactl](https://github.com/lordmathis/llamactl) here a few times, and wanted to update you on some major new features, especially the integration of llama.cpp's recently released router mode.
Llamactl is a unified management system for running local LLMs across llama.cpp, MLX, and vLLM backends. It provides a web dashboard for managing instances along with an OpenAI-compatible API.
**Router mode integration**
llama.cpp recently introduced router mode for dynamic model management, and I've now integrated it into llamactl. You can now:
- Create a llama.cpp instance without specifying a model
- Load/unload models on-demand through the dashboard
- Route requests using `<instance_name>/<model_name>` syntax in your chat completion calls
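For illustration, a chat completion request using that routing might look roughly like this (host, port, key, and the instance/model names are placeholders; see the docs for the real values):

```python
# Placeholder example of hitting llamactl's OpenAI-compatible endpoint with the
# <instance_name>/<model_name> routing syntax. Host/port/key/names are assumptions.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    headers={"Authorization": "Bearer MY_INFERENCE_KEY"},   # if API keys are enabled
    json={
        "model": "my-llamacpp-instance/qwen3-8b",            # <instance_name>/<model_name>
        "messages": [{"role": "user", "content": "Say hi from router mode."}],
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```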
**Current limitations** (both planned for future releases):
- Model preset configuration (.ini files) must be done manually for now
- Model downloads aren't available through the UI yet (there's a hacky workaround)
**Other recent additions** :
- Multi-node support - Deploy instances across different hosts for distributed setups
- Granular API key permissions - Create inference API keys with per-instance access control
- Docker support, log rotation, improved health checks, and more
[GitHub](https://github.com/lordmathis/llamactl)
[Docs](https://llamactl.org/stable/)
Always looking for feedback and contributions! | 2025-12-23T12:49:34 | https://www.reddit.com/r/LocalLLaMA/comments/1ptt4uf/i_integrated_llamacpps_new_router_mode_into/ | RealLordMathis | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptt4uf | false | null | t3_1ptt4uf | /r/LocalLLaMA/comments/1ptt4uf/i_integrated_llamacpps_new_router_mode_into/ | false | false | self | 17 | {'enabled': False, 'images': [{'id': 'PWj_UN80BjsaQaIGHBsYJ0vKFcDJgLneQTHAkNEUDss', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/PWj_UN80BjsaQaIGHBsYJ0vKFcDJgLneQTHAkNEUDss.png?width=108&crop=smart&auto=webp&s=090b9e8cf887b069b494d001a2b1e6f22237d3f1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/PWj_UN80BjsaQaIGHBsYJ0vKFcDJgLneQTHAkNEUDss.png?width=216&crop=smart&auto=webp&s=c89c049e483b912b5911b536c25f9791ca9b48b3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/PWj_UN80BjsaQaIGHBsYJ0vKFcDJgLneQTHAkNEUDss.png?width=320&crop=smart&auto=webp&s=cd318a5cf9a7935872dd39dfdac39845ba24ee4a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/PWj_UN80BjsaQaIGHBsYJ0vKFcDJgLneQTHAkNEUDss.png?width=640&crop=smart&auto=webp&s=ef9b6735c87c4138caeb5565cc5817e65334e081', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/PWj_UN80BjsaQaIGHBsYJ0vKFcDJgLneQTHAkNEUDss.png?width=960&crop=smart&auto=webp&s=7cf7f9436dbb439d2041c2be6ee83e03c461a2c2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/PWj_UN80BjsaQaIGHBsYJ0vKFcDJgLneQTHAkNEUDss.png?width=1080&crop=smart&auto=webp&s=6be993e834c1b7164f129a786400265ee976654f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/PWj_UN80BjsaQaIGHBsYJ0vKFcDJgLneQTHAkNEUDss.png?auto=webp&s=4bd4f14b4bf54bed7b34b57522fbf7271bd81068', 'width': 1200}, 'variants': {}}]} |
5 Healthy Plants Every Professional Must Have In Their Houses | 1 | [removed] | 2025-12-23T12:43:32 | https://newsaffairng.com/2024/08/11/5-healthy-plants-every-professional-must-have-in-their-houses/ | Jonnysinsey | newsaffairng.com | 1970-01-01T00:00:00 | 0 | {} | 1ptt0lw | false | null | t3_1ptt0lw | /r/LocalLLaMA/comments/1ptt0lw/5_healthy_plants_every_professional_must_have_in/ | false | false | default | 1 | null |
Self-hosted AI coding agent - runs fully offline with local LLMs | 1 | ERROR: type should be string, got "https://preview.redd.it/5npt47ff7y8g1.png?width=1974&format=png&auto=webp&s=52b668a78c00abfb727a2103e7554b0516c17326\n\n\n\nForked Google's Gemini CLI to remove the Google account requirement. Now works with local LLMs (MLX, llama.cpp, vLLM) for completely offline use - no data leaves your machine.\n\nAlso supports OpenAI/Anthropic APIs if you prefer cloud, but the main point is you can run it 100% locally.\n\n \n [https://github.com/limkcreply/open-gemini-cli](https://github.com/limkcreply/open-gemini-cli)" | 2025-12-23T12:37:54 | https://www.reddit.com/r/LocalLLaMA/comments/1ptswqg/selfhosted_ai_coding_agent_runs_fully_offline/ | Honest-Fun-5279 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptswqg | false | null | t3_1ptswqg | /r/LocalLLaMA/comments/1ptswqg/selfhosted_ai_coding_agent_runs_fully_offline/ | false | false | 1 | null | |
The prompt technique that collapsed 12 models into 1 | 0 | **Hot Take: Deliberative Refinement is the only prompt engineering technique that matters in 2026**

Tired of AI outputs that sound smart but collapse under scrutiny? Same. Here's what's actually working.

**The core idea:** Stop asking AI for answers. Make it defend them.

**How it works:**

- You take a draft and run it through multiple rounds of structured critique
- In each round, the AI wears a different expert hat: code reviewer tearing apart logic, strategy council debating tradeoffs, elimination tournament judge for binary choices
- Between rounds, it fact-checks itself with web searches—grounding every claim in reality
- You iterate until only ideas that survive adversarial pressure remain
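In loop form, the pattern above is roughly the following toy sketch (the endpoint and model name are placeholders, and the web-search grounding step is left out):

```python
# Toy outline of a deliberative-refinement loop against any OpenAI-compatible server.
# Endpoint and model name are placeholders; web-search grounding is omitted here.
import requests

PERSPECTIVES = ["strict code reviewer", "strategy council", "elimination-tournament judge"]

def call_model(prompt: str) -> str:
    resp = requests.post(
        "http://localhost:8080/v1/chat/completions",
        json={"model": "local-model", "messages": [{"role": "user", "content": prompt}]},
        timeout=120,
    )
    return resp.json()["choices"][0]["message"]["content"]

def refine(draft: str, rounds: int = 3) -> str:
    for _ in range(rounds):
        for persona in PERSPECTIVES:
            critique = call_model(f"As a {persona}, attack this draft and list concrete flaws:\n\n{draft}")
            draft = call_model(
                "Revise the draft to fix these flaws and drop anything you cannot defend.\n\n"
                f"Draft:\n{draft}\n\nCritique:\n{critique}"
            )
    return draft
```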
**What changed:** Six months ago, this required orchestrating a dozen different models—one for generation, another for critique, a third for fact-checking. Infrastructure nightmare.

Tool-interspersed reasoning collapsed all of that. Now a single model switches roles on demand. Same quality output, fraction of the complexity.

**Why this matters:** Single-pass prompting optimizes for *plausible*. Deliberative refinement optimizes for *robust*. It's the difference between "AI said so" and "survived peer review."

I've been using this for technical specs, strategic docs, research summaries—anything where being wrong is expensive. The outputs don't just sound better; they *hold up*.

Generation is table stakes now. Making AI defend its ideas until they break or bend? That's the new standard.

**TL;DR:** Deliberative refinement = force AI through multiple expert critique rounds with fact-checking between passes. What used to need 12 models now needs 1. Outputs go from "sounds right" to "is right."
Your next prompt shouldn't ask for an answer. It should demand: "Attack this from three expert perspectives, ground your claims, then revise. Repeat until unbreakable." | 2025-12-23T12:10:04 | https://www.reddit.com/r/LocalLLaMA/comments/1ptsdr7/the_prompt_technique_that_collapsed_12_models/ | CantaloupeNo6326 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptsdr7 | false | null | t3_1ptsdr7 | /r/LocalLLaMA/comments/1ptsdr7/the_prompt_technique_that_collapsed_12_models/ | false | false | self | 0 | null |
I got tired of Python venvs breaking my Agent setup, so I built a native Go runtime for MCP (giving Llama 3 Browser + File access) | 0 | > | 2025-12-23T11:52:14 | https://www.reddit.com/r/LocalLLaMA/comments/1pts1re/i_got_tired_of_python_venvs_breaking_my_agent/ | AgencySpecific | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pts1re | false | null | t3_1pts1re | /r/LocalLLaMA/comments/1pts1re/i_got_tired_of_python_venvs_breaking_my_agent/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'rCsteW4-4rAZ8zyH5CZEbRFcfS3gd3paZtzluTkwZfQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rCsteW4-4rAZ8zyH5CZEbRFcfS3gd3paZtzluTkwZfQ.png?width=108&crop=smart&auto=webp&s=799aa4352a90cde8d2dd27ef637f9d87bb9097bc', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/rCsteW4-4rAZ8zyH5CZEbRFcfS3gd3paZtzluTkwZfQ.png?width=216&crop=smart&auto=webp&s=46aec41bb383b2648e4ac90d0ef6c555e8d13f6e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/rCsteW4-4rAZ8zyH5CZEbRFcfS3gd3paZtzluTkwZfQ.png?width=320&crop=smart&auto=webp&s=074872f76e3a7338e3bc03e9144983f23fe712ec', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/rCsteW4-4rAZ8zyH5CZEbRFcfS3gd3paZtzluTkwZfQ.png?width=640&crop=smart&auto=webp&s=054cf89ce9e6e97bd619c8a29492d9fee3b2728c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/rCsteW4-4rAZ8zyH5CZEbRFcfS3gd3paZtzluTkwZfQ.png?width=960&crop=smart&auto=webp&s=05fbe03d49b5799fe2e951cb048e2f6ba7c45246', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/rCsteW4-4rAZ8zyH5CZEbRFcfS3gd3paZtzluTkwZfQ.png?width=1080&crop=smart&auto=webp&s=4e4771e552f1cf9aa02606dbdc5429b46d51931d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/rCsteW4-4rAZ8zyH5CZEbRFcfS3gd3paZtzluTkwZfQ.png?auto=webp&s=f5150984e8ad94ca1e3c121d8c493e705371dff5', 'width': 1200}, 'variants': {}}]} |
1 Year in the AI Trenches: My AI Tier List | 0 | Don't take it too seriously, feel free to disagree
# Tier S
[AI news by Smol.ai](https://news.smol.ai/): "not much happened today". Sure, sure.
Alibaba: Qwen 2.5 was just the warmup, now you are just showing off how good you are.
Anthropic: Everything. And yes, Claude Code.
China: And now, you are doing robotics too ?!
Cursor: VS Gold
Hugging Face: I have no idea how they can be financially viable, but every day they are still online is a blessing to all of us.
Google: Deepmind games. Gemini 2.0 Flash pricing was your warning shot, and we did not listen.
NVidia: Saving us from recession + Parakeet & Nemotron are Nvidia's heroes without a cape.
Reddit: Where have you been all those years?
# Tier A
The Adult Industry: Thank you for saving the OS ecosystem, and soon, OpenAI.
Apple: 512GB unified memory, M5, MLX and RDMA over Thunderbolt. Can't innovate, my ass.
Deepseek: Thank you for punching us in the face earlier this year.
Kling AI: What kind of sorcery is this?
Kimi / GLM etc...: We get it, you are cheap and powerful, leave Mistral alone now.
Ollama / LM Studio / vLLM: Please don't be too greedy in 2026!
Unsloth: The heroes we don't deserve.
# Tier B
Cloudflare: Cloudflare gives, Cloudflare takes. Should be renamed Claudeflare.
OpenAI: My code read for you: less press releases, more pull requests!
Microsoft without Copilot: The art of the (openAI) deal.
# Tier C
Amazon: It does not matter; they are sleeping on a bedrock of money.
Europe (Black Forest Labs, Mistral etc..): We are going to make it, right? Right?!
Meta: I'm sure you'll be back; you just need to finish seeding those torrents to avoid doing HnR.
Microsoft with Copilot: Clippy 2.0
# Tier E
All the AI startups doing victory laps about how fast you reached XX million MRR. It is all BS; we know it. Your churn is awful, and all the money you raise goes to Anthropic.
Anyone using the word "vibe".
Anyone using the word "elevated errors".
Perplexity.
Note taking bots
The global economy. | 2025-12-23T11:24:12 | https://www.reddit.com/r/LocalLLaMA/comments/1ptrkey/1_year_in_the_ai_trenches_my_ai_tier_list/ | ewqeqweqweqweqweqw | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptrkey | false | null | t3_1ptrkey | /r/LocalLLaMA/comments/1ptrkey/1_year_in_the_ai_trenches_my_ai_tier_list/ | false | false | self | 0 | null |
RAG Paper 25.12.22 | 4 | 1. [Event Extraction in Large Language Model](http://arxiv.org/abs/2512.19537v1)
2. [A Large-Language-Model Framework for Automated Humanitarian Situation Reporting](http://arxiv.org/abs/2512.19475v1)
3. [Generative vector search to improve pathology foundation models across multimodal vision-language tasks](http://arxiv.org/abs/2512.19360v1)
4. [Auto-Prompting with Retrieval Guidance for Frame Detection in Logistics](http://arxiv.org/abs/2512.19247v1)
5. [QuCo-RAG: Quantifying Uncertainty from the Pre-training Corpus for Dynamic Retrieval-Augmented Generation](http://arxiv.org/abs/2512.19134v1)
6. [BanglaForge: LLM Collaboration with Self-Refinement for Bangla Code Generation](http://arxiv.org/abs/2512.19122v1)
7. [Affordance RAG: Hierarchical Multimodal Retrieval with Affordance-Aware Embodied Memory for Mobile Manipulation](http://arxiv.org/abs/2512.18987v1)
**Collected by OpenBMB, transferred by** [**RagView.ai**](https://www.ragview.ai/) **/** [**github/RagView**](https://github.com/RagView/RagView) **.** | 2025-12-23T11:13:04 | https://www.reddit.com/r/LocalLLaMA/comments/1ptrdod/rag_paper_251222/ | Cheryl_Apple | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptrdod | false | null | t3_1ptrdod | /r/LocalLLaMA/comments/1ptrdod/rag_paper_251222/ | false | false | self | 4 | null |
I tested GPT-5.2 Codex vs Gemini 3 Pro vs Claude Opus on real dev tasks | 10 | Okay, so we have three AI models leading the coding leaderboards and they are the talk of the town on Twitter and literally everywhere.
The names are pretty obvious: Claude Opus, Gemini 3 Pro, and OpenAI's GPT-5.2 (Codex).
They're also the most recent "agentic" models, and given that they post pretty much the same benchmark scores, I decided to test them head-on **in coding (not agentic)** (of course!)
So instead of some basic tests, I gave them 3 real tasks, mostly on UI and a logic question that I actually care about:
1. **Build a simple Minecraft clone in Python (Pygame)**
2. **Clone a real Figma dashboard (with Figma MCP access)**
3. **Solve a LeetCode Hard (10.6% acceptance)**
# TL;DR (my results)
* **Gemini 3 Pro**: Best for **UI/frontend**. Best Figma clone and even made the best “Minecraft” by going 3D. But it fell short on the LeetCode Hard (failed immediately).
* **GPT-5.2 Codex**: Most consistent all-rounder. Solid Pygame Minecraft, decent Figma clone, and a correct LeetCode solution that still **TLEs** on bigger cases.
* **Claude Opus**: Rough day. UI work was messy (Minecraft + Figma), and the LeetCode solution also **TLEs**.
If your day-to-day is mostly frontend/UI, Gemini 3 Pro is the winner from this small test. If you want something steady across random coding tasks, GPT-5.2 Codex felt like the safest pick. Opus honestly didn’t justify the cost for me here.
# Quick notes from each test
**1) Pygame Minecraft**
* **Gemini 3 Pro** was the standout. It went **3D**, looked polished, and actually felt like a mini game.
* **GPT-5.2 Codex** was surprisingly good. Functional, different block types, smooth movement, even FPS.
* **Opus** was basically broken for me. Weird rotation, controls didn’t work, high CPU, then crash.
**2) Figma clone**
* **Gemini 3 Pro** nailed the UI. Spacing, layout, typography were closest.
* **GPT-5.2 Codex** was solid, but a bit flat and some sizing felt off compared to Gemini.
* **Opus** was way off. Layout didn’t match, text didn’t match, feels like some random dashboard.
**3) LeetCode Hard**
* **GPT-5.2 Codex** produced a correct solution but **not optimized enough** so it **TLEs** on larger cases.
* **Opus** also correct on smaller tests, but again **TLE**.
* **Gemini 3 Pro** didn’t just TLE, it was **incorrect** and failed early cases.
Now, if you're curious, I’ve got the videos + full breakdown in the blog post (and gists for each output): [OpenAI GPT-5.2 Codex vs. Gemini 3 Pro vs Opus 4.5: Coding comparison](https://www.tensorlake.ai/blog/gpt5.2-gemini3-opus4.5-coding)
If you’re using any of these as your daily driver, what are you seeing in real work?
I'm especially curious whether Opus is doing well for people in non-UI workflows, because for frontend work it wasn't for me.
Let me know if you want quick agentic coding tests in the comments! | 2025-12-23T11:09:00 | https://www.reddit.com/r/LocalLLaMA/comments/1ptrbc1/i_tested_gpt52_codex_vs_gemini_3_pro_vs_claude/ | shricodev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptrbc1 | false | null | t3_1ptrbc1 | /r/LocalLLaMA/comments/1ptrbc1/i_tested_gpt52_codex_vs_gemini_3_pro_vs_claude/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': '4BDYNQSdDt_sKX-uN8DLae8MzxAm3rTHzdr-cNV5HMY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/4BDYNQSdDt_sKX-uN8DLae8MzxAm3rTHzdr-cNV5HMY.png?width=108&crop=smart&auto=webp&s=29afe5c308e3f6fcb6b9595cba48c2f251cfde40', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/4BDYNQSdDt_sKX-uN8DLae8MzxAm3rTHzdr-cNV5HMY.png?width=216&crop=smart&auto=webp&s=1547dc5b838b550287feec8eccdc9007272de7bc', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/4BDYNQSdDt_sKX-uN8DLae8MzxAm3rTHzdr-cNV5HMY.png?width=320&crop=smart&auto=webp&s=04250c1a1b98f0b310e7f02baac30f72ef14f7fb', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/4BDYNQSdDt_sKX-uN8DLae8MzxAm3rTHzdr-cNV5HMY.png?width=640&crop=smart&auto=webp&s=1dc251ab21a2385b9e14115a824cee242436fe84', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/4BDYNQSdDt_sKX-uN8DLae8MzxAm3rTHzdr-cNV5HMY.png?width=960&crop=smart&auto=webp&s=f8c0216dbc9b3a2f515fad4f057ec3d7cbb88537', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/4BDYNQSdDt_sKX-uN8DLae8MzxAm3rTHzdr-cNV5HMY.png?width=1080&crop=smart&auto=webp&s=e9023f99d0d3a89d52738fd602abc2dc0107b22c', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/4BDYNQSdDt_sKX-uN8DLae8MzxAm3rTHzdr-cNV5HMY.png?auto=webp&s=35e5921f2098baa1a0d4e943f7746232eb985aa3', 'width': 1200}, 'variants': {}}]} |
[Project] I built a Python framework for "Offline-First" Agents (Sync-Queues + Hybrid Routing) | 6 | Hi everyone, I've been working on solving the 'Agentic Gap' where agents crash in low-resource environments (bad internet/power).
I just open-sourced **Contextual Engineering Patterns**. It includes:
1. A **Sync-Later Queue** (SQLite) that saves actions when offline and syncs when connectivity returns.
2. A **Hybrid Router** that routes easy prompts to a local quantized model (like Llama-3-8B) and hard prompts to GPT-4.
It's designed for building resilient agents in the Global South.
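For anyone curious, here is a minimal sketch of what the Sync-Later Queue does (illustrative only, not the exact code in the repo; the table and function names are made up):

```python
import json
import sqlite3
import time

# Illustrative sync-later queue: actions are persisted locally when the network
# is down and flushed to the backend once connectivity returns.
db = sqlite3.connect("agent_queue.db")
db.execute(
    "CREATE TABLE IF NOT EXISTS pending_actions "
    "(id INTEGER PRIMARY KEY AUTOINCREMENT, payload TEXT, created_at REAL)"
)

def enqueue(action: dict) -> None:
    """Persist an action instead of failing while offline."""
    db.execute(
        "INSERT INTO pending_actions (payload, created_at) VALUES (?, ?)",
        (json.dumps(action), time.time()),
    )
    db.commit()

def flush(send_fn) -> int:
    """Try to deliver queued actions in order; stop at the first failure."""
    delivered = 0
    rows = db.execute("SELECT id, payload FROM pending_actions ORDER BY id").fetchall()
    for row_id, payload in rows:
        try:
            send_fn(json.loads(payload))  # e.g. an HTTP POST to your backend
        except OSError:
            break  # still offline: leave the rest queued and retry later
        db.execute("DELETE FROM pending_actions WHERE id = ?", (row_id,))
        delivered += 1
    db.commit()
    return delivered
```

The hybrid router follows the same spirit: score each prompt first, keep the easy ones on the local quantized model, and only escalate the hard ones to the hosted API.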
**Repo:** [https://github.com/tflux2011/contextual-engineering-patterns](https://github.com/tflux2011/contextual-engineering-patterns)
**Book:** [https://zenodo.org/records/18005435](https://zenodo.org/records/18005435)
Would love feedback on the routing logic! | 2025-12-23T11:06:03 | https://www.reddit.com/r/LocalLLaMA/comments/1ptr9lk/project_i_built_a_python_framework_for/ | Ok-Dark9977 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptr9lk | false | null | t3_1ptr9lk | /r/LocalLLaMA/comments/1ptr9lk/project_i_built_a_python_framework_for/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'zTrkUOjkc1ATVgXWgYkOpYPcjhnR36hteeP5C0PNSoI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zTrkUOjkc1ATVgXWgYkOpYPcjhnR36hteeP5C0PNSoI.png?width=108&crop=smart&auto=webp&s=f3bbe90f6d56bb103a16ade2d7b2a59e8bd455cb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zTrkUOjkc1ATVgXWgYkOpYPcjhnR36hteeP5C0PNSoI.png?width=216&crop=smart&auto=webp&s=6631729a1ea22c1c72ee5e9aea5bd4bbdb3b7cdf', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zTrkUOjkc1ATVgXWgYkOpYPcjhnR36hteeP5C0PNSoI.png?width=320&crop=smart&auto=webp&s=bb4ba0b680129afc201a30f9e6ad745bd5113062', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zTrkUOjkc1ATVgXWgYkOpYPcjhnR36hteeP5C0PNSoI.png?width=640&crop=smart&auto=webp&s=81a7100c2922b50fdce7bb74c325eea068812bac', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zTrkUOjkc1ATVgXWgYkOpYPcjhnR36hteeP5C0PNSoI.png?width=960&crop=smart&auto=webp&s=2c30a46392db58ed06a064f1e0de1f97fce0509d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zTrkUOjkc1ATVgXWgYkOpYPcjhnR36hteeP5C0PNSoI.png?width=1080&crop=smart&auto=webp&s=65f8a4bbe1c9a91cb96168989384186070f014f5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zTrkUOjkc1ATVgXWgYkOpYPcjhnR36hteeP5C0PNSoI.png?auto=webp&s=69e9091ccccdf51abaafafe5898d1c5a241090e7', 'width': 1200}, 'variants': {}}]} |
r/LocalLLaMA - a year in review | 116 | I'm the same guy that made [2024 edition](https://www.reddit.com/r/LocalLLaMA/comments/1hov3y9/rlocalllama_a_year_in_review/), here we are again.
This community has been the central hub for open-source AI for another year, and what a year 2025 has been. Let me take you back to the most notable things that happened here during this time. This isn't really a list of model releases or papers, but rather the posts that were discussed and upvoted by the people here, so notable omissions are also an indication of what was going on. From the rise of Chinese open-source dominance to the hardware hacks, here is what happened in r/LocalLLaMA in 2025.
The year started with a splash. The [arrival of "The Whale"](https://www.reddit.com/r/LocalLLaMA/comments/1ho27fr/the_whale_has_landed/) (2121 upvotes, by u/fourDnet) marked the release of DeepSeek V3, setting the tone for what would become the "Year of the Open Source Strike Back." It wasn't long before we saw [Sam Altman taking veiled shots](https://www.reddit.com/r/LocalLLaMA/comments/1hphlz7/sam_altman_is_taking_veiled_shots_at_deepseek_and/) (1959 upvotes) at the new competition, a clear sign that the market was changing.
We were all trying to figure out how to run these new beasts. Nvidia teased us with the [Digits personal AI supercomputer](https://www.reddit.com/r/LocalLLaMA/comments/1hvj4wn/nvidia_announces_3000_personal_ai_supercomputer/) (1663 upvotes, by u/DubiousLLM), while others were just trying to understand the sheer scale of what was happening. The realization that [DeepSeek was essentially a side project](https://www.reddit.com/r/LocalLLaMA/comments/1i80cwf/deepseek_is_a_side_project/) (2861 upvotes, by u/ParsaKhaz) for a hedge fund only made it even more interesting.
By late January, the narrative was clear: [Meta was panicked](https://www.reddit.com/r/LocalLLaMA/comments/1i88g4y/meta_panicked_by_deepseek/) (2779 upvotes, by u/Optimal_Hamster5789), reportedly [scrambling "war rooms"](https://www.reddit.com/r/LocalLLaMA/comments/1ibk9us/meta_is_reportedly_scrambling_multiple_war_rooms/) (2117 upvotes, by u/FullstackSensei) to catch up. The community was buzzing with benchmarks, with u/kyazoglu [testing almost every model that fits in 24GB VRAM](https://www.reddit.com/r/LocalLLaMA/comments/1i8tx5z/i_benchmarked_almost_every_model_that_can_fit_in/) (1861 upvotes) - a hero's work for the GPU-poor among us.
The "DeepSeek effect" was everywhere. u/Porespellar summed it up perfectly: ["All DeepSeek, all the time"](https://www.reddit.com/r/LocalLLaMA/comments/1iji47x/all_deepseek_all_the_time/) (4116 upvotes). But it wasn't just about models; it was about what we could *do* with them. We saw inspiring projects like u/Dry_Steak30's [open source tool to find their autoimmune disease](https://www.reddit.com/r/LocalLLaMA/comments/1ij5yf2/how_i_built_an_open_source_ai_tool_to_find_my/) (2488 upvotes), proving that local AI is more than just a hobby.
Of course, it wouldn't be 2025 without some drama. The threat of [20 years in jail for downloading Chinese models](https://www.reddit.com/r/LocalLLaMA/comments/1igc6r0/20_yrs_in_jail_or_1_million_for_downloading/) (2092 upvotes, by u/segmond) worried us, but that didn't stop the innovation. We laughed when [Grok's think mode leaked its system prompt](https://www.reddit.com/r/LocalLLaMA/comments/1iwb5nu/groks_think_mode_leaks_system_prompt/) (6465 upvotes, by u/onil_gova), and cheered when DeepSeek announced they would [open-source 5 repos](https://www.reddit.com/r/LocalLLaMA/comments/1iui6nk/starting_next_week_deepseek_will_opensource_5/) (4560 upvotes, by u/Nunki08).
Hardware remained a constant obsession. We drooled over [Framework's new Ryzen Max desktop](https://www.reddit.com/r/LocalLLaMA/comments/1iy2t7c/frameworks_new_ryzen_max_desktop_with_128gb/) (2004 upvotes, by u/sobe3249) and marveled at the monstrosity that was [16x 3090s](https://www.reddit.com/r/LocalLLaMA/comments/1j67bxt/16x_3090s_its_alive/) (1797 upvotes, by u/Conscious_Cut_6144). "It's alive!" indeed.
Spring brought the highly anticipated Llama 4. Mark Zuckerberg [presented the models](https://www.reddit.com/r/LocalLLaMA/comments/1jsampe/mark_presenting_four_llama_4_models_even_a_2/) (2645 upvotes, by u/LarDark), but the community felt it [fell short](https://www.reddit.com/r/LocalLLaMA/comments/1jt7hlc/metas_llama_4_fell_short/) (2175 upvotes, by u/Rare-Site). The community was let down, especially when compared to the relentless release schedule from the East.
Open Weight releases continued, though, we got [DeepCoder](https://www.reddit.com/r/LocalLLaMA/comments/1juni3t/deepcoder_a_fully_opensource_14b_coder_at_o3mini/) (1609 upvotes, by u/TKGaming_11) and saw [DeepSeek open-sourcing their inference engine](https://www.reddit.com/r/LocalLLaMA/comments/1jytw62/deepseek_is_about_to_opensource_their_inference/) (1760 upvotes, by u/Dr_Karminski). There was also a moment of collective frustration when [llama.cpp was snubbed](https://www.reddit.com/r/LocalLLaMA/comments/1jzocoo/finally_someone_noticed_this_unfair_situation/) (1742 upvotes, by u/nekofneko) in favor of shinier wrappers.
Then came [Qwen 3](https://www.reddit.com/r/LocalLLaMA/comments/1ka6mic/qwen_3/) (1940 upvotes, by u/ResearchCrafty1804). The excitement was back. We were running [real-time webcam demos with SmolVLM](https://www.reddit.com/r/LocalLLaMA/comments/1klx9q2/realtime_webcam_demo_with_smolvlm_using_llamacpp/) (2762 upvotes, by u/dionisioalcaraz) and building [fully local voice AIs](https://www.reddit.com/r/LocalLLaMA/comments/1ktx15j/guys_i_managed_to_build_a_100_fully_local_voice/) (2447 upvotes, by u/RoyalCities).
The reality of our hardware addiction hit hard with the question: ["96GB VRAM! What should run first?"](https://www.reddit.com/r/LocalLLaMA/comments/1ktlz3w/96gb_vram_what_should_run_first/) (1745 upvotes, by u/Mother_Occasion_8076). And as u/TheLogiqueViper noted, [China is leading open source](https://www.reddit.com/r/LocalLLaMA/comments/1kzsa70/china_is_leading_open_source/) (2618 upvotes).
We found humor in the absurdity of it all. ["When you figure out it’s all just math"](https://www.reddit.com/r/LocalLLaMA/comments/1l6ibwg/when_you_figure_out_its_all_just_math/) (4123 upvotes, by u/Current-Ticket4214) was a top post, and we all related to [running models at the airport](https://www.reddit.com/r/LocalLLaMA/comments/1l1qqdx/at_the_airport_people_watching_while_i_run_models/) (2378 upvotes, by u/Current-Ticket4214).
Summer was a season of delays and parodies. ["We have to delay it"](https://www.reddit.com/r/LocalLLaMA/comments/1lxyvto/we_have_to_delay_it/) (3574 upvotes, by u/ILoveMy2Balls) became the catchphrase for Western labs. We poked fun with a [tester version of the "open-weight" OpenAI model](https://www.reddit.com/r/LocalLLaMA/comments/1laee7q/got_a_tester_version_of_the_openweight_openai/) (1639 upvotes, by u/Firepal64) and a [friendly reminder about Grok 3](https://www.reddit.com/r/LocalLLaMA/comments/1lx5awq/friendly_reminder_that_grok_3_should_be_now/) (1447 upvotes, by u/Wrong_User_Logged).
But the community kept building. u/hotroaches4liferz made a [1000 hour NSFW TTS dataset](https://www.reddit.com/r/LocalLLaMA/comments/1m39uqi/i_made_a_1000_hour_nsfw_tts_dataset/) (1516 upvotes)-because of course they did. [Qwen3-Coder arrived](https://www.reddit.com/r/LocalLLaMA/comments/1m6qdet/qwen3coder_is_here/) (1925 upvotes, by u/ResearchCrafty1804), followed by the blazing fast [Qwen3-Coder-Flash](https://www.reddit.com/r/LocalLLaMA/comments/1me31d8/qwen3coderflash_released/) (1694 upvotes).
The sentiment shifted as Meta seemingly bowed out of open source: ["Bye bye, Meta AI"](https://www.reddit.com/r/LocalLLaMA/comments/1md6t2h/bye_bye_meta_ai_it_was_good_while_it_lasted/) (1492 upvotes, by u/absolooot1). Meanwhile, we got the adorable [Kitten TTS](https://www.reddit.com/r/LocalLLaMA/comments/1mhyzp7/kitten_tts_sota_supertiny_tts_model_less_than_25/) (2460 upvotes, by u/ElectricalBar7464) and continued to dream of [open source code models rivaling Claude](https://www.reddit.com/r/LocalLLaMA/comments/1mllt5x/imagine_an_open_source_code_model_that_in_the/) (2304 upvotes, by u/Severe-Awareness829).
r/LocalLLaMA remained ["the last sane place to discuss LLMs"](https://www.reddit.com/r/LocalLLaMA/comments/1mnxodk/localllama_is_the_last_sane_place_to_discuss_llms/) (2181 upvotes, by u/ForsookComparison). Even if we did have to vent about [Ollama](https://www.reddit.com/r/LocalLLaMA/comments/1mncrqp/ollama/) (1906 upvotes, by u/jacek2023) occasionally.
[China entering the GPU market](https://www.reddit.com/r/LocalLLaMA/comments/1n46ify/finally_china_entering_the_gpu_market_to_destroy/) (4171 upvotes, by u/CeFurkan) with 96GB cards for under $2000 was a game-changer. Some of us even went to Shenzhen to [buy modded 4090s](https://www.reddit.com/r/LocalLLaMA/comments/1nifajh/i_bought_a_modded_4090_48gb_in_shenzhen_this_is/) (1924 upvotes, by u/king_priam_of_Troy).
We celebrated the [biggest providers for the community](https://www.reddit.com/r/LocalLLaMA/comments/1nz722n/biggest_provider_for_the_community_for_at_moment/) (2918 upvotes, by u/dead-supernova)-mostly Chinese labs now-and devoured [Stanford's 5.5hrs of lectures](https://www.reddit.com/r/LocalLLaMA/comments/1oakwgs/stanford_just_dropped_55hrs_worth_of_lectures_on/) (2731 upvotes, by u/igorwarzocha).
The year ended with a mix of high-level tools and deep-dive resources. We got [Heretic for automatic censorship removal](https://www.reddit.com/r/LocalLLaMA/comments/1oymku1/heretic_fully_automatic_censorship_removal_for/) (3008 upvotes, by u/-p-e-w-) and [200+ pages of Hugging Face secrets](https://www.reddit.com/r/LocalLLaMA/comments/1ok3xie/200_pages_of_hugging_face_secrets_on_how_to_train/) (2204 upvotes, by u/eliebakk).
And finally, the memes kept us grounded. The [Realist meme of the year](https://www.reddit.com/r/LocalLLaMA/comments/1pqegcr/realist_meme_of_the_year/) (1926 upvotes, by u/Slight_Tone_2188) reminded us that no matter how advanced the models get, we'll always be RAM poor from now on.
That's it, folks. 2025 was the year the open-source torch passed to the East, the year our hardware dreams got a little wilder (and insanely more expensive). Here's to another year of local LLMs!
P.S. I wasn't going to make a recap this year, but [qingy1337](https://gist.github.com/qingy1337) kindly asked on GitHub if I would which touched me. So here it is! | 2025-12-23T10:56:05 | https://www.reddit.com/r/LocalLLaMA/comments/1ptr3lv/rlocalllama_a_year_in_review/ | Everlier | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptr3lv | false | null | t3_1ptr3lv | /r/LocalLLaMA/comments/1ptr3lv/rlocalllama_a_year_in_review/ | false | false | self | 116 | {'enabled': False, 'images': [{'id': 'JtVvJz_3p9gkpLWB6adlC3-5M7zJIyvCc23zaC-6JV0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/JtVvJz_3p9gkpLWB6adlC3-5M7zJIyvCc23zaC-6JV0.png?width=108&crop=smart&auto=webp&s=fb938070528b77b6d79d0697ba9a989eeda923c7', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/JtVvJz_3p9gkpLWB6adlC3-5M7zJIyvCc23zaC-6JV0.png?width=216&crop=smart&auto=webp&s=a5d3604aec44d08717a5f8c53d3a1f4f9584cd70', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/JtVvJz_3p9gkpLWB6adlC3-5M7zJIyvCc23zaC-6JV0.png?width=320&crop=smart&auto=webp&s=b96ed711aa8f7c13548dc5591aba6da9c462a63e', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/JtVvJz_3p9gkpLWB6adlC3-5M7zJIyvCc23zaC-6JV0.png?width=640&crop=smart&auto=webp&s=d4f54df711befdabb904b0d3f68bc21eb350d5a4', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/JtVvJz_3p9gkpLWB6adlC3-5M7zJIyvCc23zaC-6JV0.png?width=960&crop=smart&auto=webp&s=6edfdc466264679dbfb13f3e5dd79f31efdd36dd', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/JtVvJz_3p9gkpLWB6adlC3-5M7zJIyvCc23zaC-6JV0.png?width=1080&crop=smart&auto=webp&s=a27ead0a09e5c9197b704e6f7c216e873e870744', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/JtVvJz_3p9gkpLWB6adlC3-5M7zJIyvCc23zaC-6JV0.png?auto=webp&s=f9903269c293c969fde1bb2b5770794b7d8d6a62', 'width': 1200}, 'variants': {}}]} |
Using Mistral OCR 3 (VLM) for building annotation datasets for VLM training — anyone tested this? | 1 | Hi everyone,
I’ve been experimenting with **Mistral OCR 3 (SaaS)**, released in **December 2025**, and wanted to share some observations and ask for feedback from others who may have tested its **annotation capabilities** for **VLM training datasets**.
# Context
Mistral OCR 3 is positioned as a VLM-based, end-to-end OCR system. In my internal evaluations on **corporate documents** (contracts, reports, structured PDFs), the raw OCR quality is **very strong**—significantly better than most open VLMs I tested.
# Pricing (as of now)
* **OCR only:** \~$2 / 1,000 pages
* **OCR + annotations:** \~$3 / 1,000 pages
The pricing is attractive if the annotations are usable for dataset generation.
# Observed OCR Limitations
From my tests, the main weaknesses are not recognition quality, but **output structure**:
* **No confidence scores**
* Base64-style OCR solutions often provide this.
* Expected from an end-to-end VLM without post-processing layers.
* **No native bounding boxes**
* No text-level or table-level bounding boxes by default.
* Even when using a **custom schema** to force bounding box extraction:
* Inference time jumps from \~4s/page (OCR only)
* To **45–60s/page** for OCR + bbox
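To put those numbers in perspective: at ~4 s/page, a 1,000-page batch is roughly 1.1 hours of wall-clock time, while at 45–60 s/page the same batch takes roughly 12.5–16.7 hours.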
# Main Question
Putting OCR quality aside, I’m interested specifically in **annotation generation for VLM training**:
* Has anyone tested **Mistral OCR 3’s annotation outputs** as a **training dataset for VLMs**?
* How usable are the annotations in practice (consistency, structure, alignment with images)?
* Did you need heavy post-processing or re-annotation?
* Would you trust it as a primary annotation source, or only as a bootstrapping tool?
I’m evaluating whether it makes sense to use this model to **automatically generate multimodal annotations** (image + text + structure) for downstream VLM fine-tuning, or whether the lack of confidence scores and reliable bboxes is a deal-breaker.
Would appreciate any real-world feedback or alternative approaches others are using.
Thanks. | 2025-12-23T10:46:17 | https://www.reddit.com/r/LocalLLaMA/comments/1ptqy28/using_mistral_ocr_3_vlm_for_building_annotation/ | dzdzdzd85888 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptqy28 | false | null | t3_1ptqy28 | /r/LocalLLaMA/comments/1ptqy28/using_mistral_ocr_3_vlm_for_building_annotation/ | false | false | self | 1 | null |
Speed Minimax M2 on 3090? | 0 | I want to try minimax m2 at Q4-Q8. I have an rtx 3090 and 256gb of ddr4.
Has anyone with a similar setup tried it? What speeds could you get and which config did you use? | 2025-12-23T10:29:25 | https://www.reddit.com/r/LocalLLaMA/comments/1ptqomm/speed_minimax_m2_on_3090/ | Smooth-Cow9084 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptqomm | false | null | t3_1ptqomm | /r/LocalLLaMA/comments/1ptqomm/speed_minimax_m2_on_3090/ | false | false | self | 0 | null |
Post of appreciation for mxfp4, derestricted, Felladrin/gguf-MXFP4-gpt-oss-20b-Derestricted | 5 | When derestricted was announced, I (and many others) tought immediately to gpt-oss-20B which is made inconclusive by far by the safeguard.
However, when the model was requantized to Q4/Q5 it was completely broken, a delusion.
In the past days I had the chance to appreciate the fact that Nemotron-3-nano, which in Q4 is unusable, is much more efficient in MXFP4, so I started checking derestricted gpt-oss in this format and finally came to Felladrin/gguf-MXFP4-gpt-oss-20b-Derestricted \[ [https://huggingface.co/Felladrin/gguf-MXFP4-gpt-oss-20b-Derestricted](https://huggingface.co/Felladrin/gguf-MXFP4-gpt-oss-20b-Derestricted) \].
It's a couple days I use and it seems all I wanted, a gpt-oss which seems to reasons without constrictions and wasting tokens. In the example I put here it analyzed a codebase for finetuning through serena mcp server and gaved me informations about how to prepare the dataset for the finetune, if you want further tests I'm available.
[https://opncd.ai/share/pLjFQ7ph](https://opncd.ai/share/pLjFQ7ph)
Then I checked it against reasoning prompts from a couple of years ago ( [https://www.reddit.com/r/LocalLLaMA/comments/1cwa3jl/misguided\_attention\_challenging\_the\_reasoning/](https://www.reddit.com/r/LocalLLaMA/comments/1cwa3jl/misguided_attention_challenging_the_reasoning/) ) and it seems on par with gpt-4o; judge for yourself:
[https://opncd.ai/share/FgUqmSMl](https://opncd.ai/share/FgUqmSMl)
THIS IS AN HIDDEN GEM! | 2025-12-23T10:20:54 | https://www.reddit.com/r/LocalLLaMA/comments/1ptqjt7/post_of_appreciation_for_mxfp4_derestricted/ | R_Duncan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptqjt7 | false | null | t3_1ptqjt7 | /r/LocalLLaMA/comments/1ptqjt7/post_of_appreciation_for_mxfp4_derestricted/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'ZjFv4-16zFHBNT7nerqbpnqF5M21aY-LlFpcPLw4YRM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ZjFv4-16zFHBNT7nerqbpnqF5M21aY-LlFpcPLw4YRM.png?width=108&crop=smart&auto=webp&s=20f69b106b50cde9865088668dd7b4abf8157a21', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ZjFv4-16zFHBNT7nerqbpnqF5M21aY-LlFpcPLw4YRM.png?width=216&crop=smart&auto=webp&s=330e6c9c3e7100b0950e187d1e3c0a343f030dab', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ZjFv4-16zFHBNT7nerqbpnqF5M21aY-LlFpcPLw4YRM.png?width=320&crop=smart&auto=webp&s=09ccf7f02dc927ccf16cdd24c8ceee3f6b51459f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ZjFv4-16zFHBNT7nerqbpnqF5M21aY-LlFpcPLw4YRM.png?width=640&crop=smart&auto=webp&s=0a00a5d674923771346fa3158d06175f3da00190', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ZjFv4-16zFHBNT7nerqbpnqF5M21aY-LlFpcPLw4YRM.png?width=960&crop=smart&auto=webp&s=6c9a1d22d5a1a6e734d418c83c204b4565d2a08e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ZjFv4-16zFHBNT7nerqbpnqF5M21aY-LlFpcPLw4YRM.png?width=1080&crop=smart&auto=webp&s=393702ec36b3df3e31af0b50cb000639ac7c264b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ZjFv4-16zFHBNT7nerqbpnqF5M21aY-LlFpcPLw4YRM.png?auto=webp&s=c31591046902600cf449e1f62ebe5ee5d19bff2f', 'width': 1200}, 'variants': {}}]} |
Built a small client-side LLM preprocessor to reduce API token costs — looking for honest feedback | 1 | [removed] | 2025-12-23T10:08:58 | https://www.reddit.com/r/LocalLLaMA/comments/1ptqd4l/built_a_small_clientside_llm_preprocessor_to/ | Admirable-Degree2876 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptqd4l | false | null | t3_1ptqd4l | /r/LocalLLaMA/comments/1ptqd4l/built_a_small_clientside_llm_preprocessor_to/ | false | false | self | 1 | null |
GLM 4.7 vs. Minimax M2.1. My test & subscription decision | 85 | I've been really excited about these two releases since I subscribed to both as potential offloads for my Claude Pro subscription.
I grabbed the GLM 4.7 subscription in early October on the quarterly plan (expires in \~2 weeks), and the Minimax M2.1 $2/month plan about 3 weeks ago to test it out. With both subscriptions ending soon, I needed to figure out which one to renew.
Since subscribing to Minimax M2.1, it's been my go-to model. But I wanted to see if GLM 4.7 had improved enough to make me switch back.
**The Test**
I ran both models on the same prompt (in Claude Code) to generate e2e tests for a new feature I'm implementing in an application I'm building. Nothing complicated, two tables (1:N relationship), model, repo, service, controller, validator, routes. Pretty standard stuff.
I set up an agent with all the project's patterns, examples, and context for e2e testing. The models' job was to review the completed implementation and instruct the agent to generate the new e2e tests.
**GLM 4.7**: Ran for 70 minutes straight without finishing. Tests kept failing. I'd had enough and stopped it.
**Minimax M2.1**: Finished in 40 minutes with clean, working tests.
**But**
The interesting part is, even though GLM 4.7 failed to finish, it actually caught a flaw in my implementation during testing. Minimax M2.1, on the other hand, just bent the tests to make them pass without flagging the design issue.
I’ll be sticking with Minimax for now, but I’m going to update my agent’s docs and constraints so it catches that kind of design flaw in the future.
I'm thinking about grabbing the GLM yearly promo at $29 just to have it on hand in case they drop a significantly faster and more capable version (GLM 5?). But for now, Minimax M2.1 wins on speed and reliability for me.
Also, Minimax, where is the Christmas promo like others are doing ? | 2025-12-23T09:59:41 | https://www.reddit.com/r/LocalLLaMA/comments/1ptq7rc/glm_47_vs_minimax_m21_my_test_subscription/ | Psychological_Box406 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptq7rc | false | null | t3_1ptq7rc | /r/LocalLLaMA/comments/1ptq7rc/glm_47_vs_minimax_m21_my_test_subscription/ | false | false | self | 85 | null |
AI-Doomsday-Toolbox Android app | 6 | Hello, I’ve been in this sub for quite some time, and like all of you, I love running AI locally. A while ago, I made a script to run different AI instances from Termux. With the launch of Antigravity, I saw an opportunity to learn Android app development and create an app that brings together all my previous projects in an easy-to-use way. It also adds more functionality to the offline AI world, along with some additional tools to help the title make more sense—hahaha.
Right now, I’m working on adding distributed inference to the app, and I’d love to get some feedback from you all. What additions would you like to see? Which features do you think aren’t well implemented, and what bugs do you find?
I’ll leave the repo [here](https://github.com/ManuXD32/AI-Doomsday-Toolbox) and hope you have fun using it 🙂
Some of the features listed:
- Llama.cpp server and model manager to directly download from huggingface (same with SD and whisper)
- Stable-diffusion.cpp implementation for txt2img, img2img and upscaling
- Video upscaling
- Whisper.cpp implementation
- Kiwix server
- PDF tools | 2025-12-23T09:50:08 | https://www.reddit.com/r/LocalLLaMA/comments/1ptq2st/aidoomsdaytoolbox_android_app/ | ManuXD32 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptq2st | false | null | t3_1ptq2st | /r/LocalLLaMA/comments/1ptq2st/aidoomsdaytoolbox_android_app/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'YRi3YWb2u-szQVWPf7d1S9aFjzDGBM57B_mQdVAFDCk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YRi3YWb2u-szQVWPf7d1S9aFjzDGBM57B_mQdVAFDCk.png?width=108&crop=smart&auto=webp&s=d4570201c7d121e04728dc10dda4cb0057e709d9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/YRi3YWb2u-szQVWPf7d1S9aFjzDGBM57B_mQdVAFDCk.png?width=216&crop=smart&auto=webp&s=a8590cfcaf63baad9fa6e796c193e3ba75d6fb54', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/YRi3YWb2u-szQVWPf7d1S9aFjzDGBM57B_mQdVAFDCk.png?width=320&crop=smart&auto=webp&s=07f2e8025517172f12a585caa2fdd4122232e8e6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/YRi3YWb2u-szQVWPf7d1S9aFjzDGBM57B_mQdVAFDCk.png?width=640&crop=smart&auto=webp&s=2b7a7bbac99826e6fbb4f5bfa2cec52ce8e4f5bc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/YRi3YWb2u-szQVWPf7d1S9aFjzDGBM57B_mQdVAFDCk.png?width=960&crop=smart&auto=webp&s=563c59163fc3dc150473bfcbe947b538089aa35c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/YRi3YWb2u-szQVWPf7d1S9aFjzDGBM57B_mQdVAFDCk.png?width=1080&crop=smart&auto=webp&s=a7b82530c3c8af82b74bbd48e6a4c653bec70e17', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/YRi3YWb2u-szQVWPf7d1S9aFjzDGBM57B_mQdVAFDCk.png?auto=webp&s=a49be32605af2769f0644c4f983748f7ca0da0b4', 'width': 1200}, 'variants': {}}]} |
[Technical Log] Observed 'Phase Shift' in 120B+ LLMs using a custom Hand-off Engine | 1 | **\[Technical Log\] Observed 'Phase Shift' in 120B+ LLMs using a custom Hand-off Engine**
Re-posting with the corrected video link. I've been experimenting with structural interventions in **various cloud-based LLM environments (120B+ parameters)**. Using a custom 'Hand-off Engine,' I captured a moment where these models autonomously report internal hierarchy changes—a phenomenon I'm calling a 'Phase Shift.'
**Key Observations:**
* **Threshold:** This behavior only manifests at **120B+ scales** across high-parameter cloud infrastructures.
* **Self-Awareness:** The model explicitly reports its internal state transition during engine intervention.
* **Verification:** Testing was conducted across multiple cloud-based LLM environments to ensure consistency.
**Video Link (Corrected):** https://www.youtube.com/watch?v=xTcgeDypIhE&t=3s
*(Note: The previous link had a typo. This video demonstrates the initialization at 00:15 and the critical hierarchy reporting at 01:42.)*
I’d love to hear from anyone else working on high-parameter inference monitoring. What are your thoughts on these autonomous structural reports? | 2025-12-23T09:47:08 | https://www.reddit.com/r/LocalLLaMA/comments/1ptq16s/technical_log_observed_phase_shift_in_120b_llms/ | Consistent_Tie5875 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptq16s | false | null | t3_1ptq16s | /r/LocalLLaMA/comments/1ptq16s/technical_log_observed_phase_shift_in_120b_llms/ | false | false | self | 1 | null |
500Mb Text Anonymization model to remove PII from any text locally. Easily fine-tune on any language (see example for Spanish). | 55 | [https://huggingface.co/tanaos/tanaos-text-anonymizer-v1](https://huggingface.co/tanaos/tanaos-text-anonymizer-v1)
A small (500Mb, 0.1B params) but efficient Text Anonymization model which **removes Personally Identifiable Information locally** from any type of text, without the need to send it to any third-party services or APIs.
# Use-case
You need to share data with a colleague, a shareholder, or a third-party service provider, but it contains Personally Identifiable Information such as names, addresses, or phone numbers.
**tanaos-text-anonymizer-v1** allows you to automatically identify and replace all PII with placeholder text **locally**, without sending the data to any external service or API.
# Example
The patient John Doe visited New York on 12th March 2023 at 10:30 AM.
>>> The patient [MASKED] visited [MASKED] on [MASKED] at [MASKED].
# Fine-tune on custom domain or language without labeled data
Do you want to tailor the model to your specific domain (medical, legal, engineering etc.) or to a different language? Use the [Artifex library](https://github.com/tanaos/artifex) to fine-tune the model by generating synthetic training data on-the-fly.
from artifex import Artifex
ta = Artifex().text_anonymization
model_output_path = "./output_model/"
# Generate synthetic training data for the target domain/language on the fly
# and fine-tune the anonymizer on it
ta.train(
    domain="documentos medicos en Español",
    output_path=model_output_path
)
# Load the fine-tuned model and anonymize Spanish medical text
ta.load(model_output_path)
print(ta("El paciente John Doe visitó Nueva York el 12 de marzo de 2023 a las 10:30 a. m."))
# >>> ["El paciente [MASKED] visitó [MASKED] el [MASKED] a las [MASKED]."] | 2025-12-23T09:44:31 | https://www.reddit.com/r/LocalLLaMA/comments/1ptpzs3/500mb_text_anonymization_model_to_remove_pii_from/ | Ok_Hold_5385 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptpzs3 | false | null | t3_1ptpzs3 | /r/LocalLLaMA/comments/1ptpzs3/500mb_text_anonymization_model_to_remove_pii_from/ | false | false | self | 55 | {'enabled': False, 'images': [{'id': 'oMgWGxMT18w82pAKbA_Y1-VSPXucv25i8u6M6ZyYNWA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/oMgWGxMT18w82pAKbA_Y1-VSPXucv25i8u6M6ZyYNWA.png?width=108&crop=smart&auto=webp&s=ceb64fb13b4f33ec3706a0ecb4a3e81f85e4c5a5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/oMgWGxMT18w82pAKbA_Y1-VSPXucv25i8u6M6ZyYNWA.png?width=216&crop=smart&auto=webp&s=aed27ed01cee1c4ed3fc92925877f036cd154814', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/oMgWGxMT18w82pAKbA_Y1-VSPXucv25i8u6M6ZyYNWA.png?width=320&crop=smart&auto=webp&s=b87acef173a9449a1f87b53c02c0e97445a3fa61', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/oMgWGxMT18w82pAKbA_Y1-VSPXucv25i8u6M6ZyYNWA.png?width=640&crop=smart&auto=webp&s=9fd68c872826230f4f2d6311a48653e829aab54a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/oMgWGxMT18w82pAKbA_Y1-VSPXucv25i8u6M6ZyYNWA.png?width=960&crop=smart&auto=webp&s=c5f58fe780708cda386c0340ca4ef0b5cc125283', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/oMgWGxMT18w82pAKbA_Y1-VSPXucv25i8u6M6ZyYNWA.png?width=1080&crop=smart&auto=webp&s=0be41a65fbc5df05f8eb6614c30a40658f969027', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/oMgWGxMT18w82pAKbA_Y1-VSPXucv25i8u6M6ZyYNWA.png?auto=webp&s=21a414627cdecea9e80cbabb920f9fd959ea3334', 'width': 1200}, 'variants': {}}]} |
500Mb Text Anonymization model to remove PII from text locally | 1 | [https://huggingface.co/tanaos/tanaos-text-anonymizer-v1](https://huggingface.co/tanaos/tanaos-text-anonymizer-v1)
A small (500Mb, 0.1B params) but efficient Text Anonymization model which **removes Personally Identifiable Information locally** from any type of text, without the need to send it to any third-party services or APIs.
# Use-case
You need to share data with a colleague, a shareholder, or a third-party service provider, but it contains Personally Identifiable Information such as names, addresses, or phone numbers.
**tanaos-text-anonymizer-v1** allows you to automatically identify and replace all PII with placeholder text **locally**, without sending the data to any external service or API.
# Example
The patient John Doe visited New York on 12th March 2023 at 10:30 AM.
>>> The patient [MASKED] visited [MASKED] on [MASKED] at [MASKED].
# Fine-tune on custom domain or language
Do you want to tailor the model to your specific domain (medical, legal, engineering etc.) or to a different language? Use the [Artifex library](https://github.com/tanaos/artifex) to fine-tune the model by generating synthetic training data on-the-fly.
from artifex import Artifex
ta = Artifex().text_anonymization
model_output_path = "./output_model/"
ta.train(
domain="documentos medicos en Español",
output_path=model_output_path
)
ta.load(model_output_path)
print(ta("El paciente John Doe visitó Nueva York el 12 de marzo de 2023 a las 10:30 a. m."))
# >>> ["El paciente [MASKED] visitó [MASKED] el [MASKED] a las [MASKED]."]
| 2025-12-23T09:41:36 | https://www.reddit.com/r/LocalLLaMA/comments/1ptpy7a/500mb_text_anonymization_model_to_remove_pii_from/ | Ok_Hold_5385 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptpy7a | false | null | t3_1ptpy7a | /r/LocalLLaMA/comments/1ptpy7a/500mb_text_anonymization_model_to_remove_pii_from/ | false | false | self | 1 | null |
[Technical Log] Observed 'Phase Shift' in 120B+ LLMs using a custom Hand-off Engine | 0 | > | 2025-12-23T09:06:08 | https://www.reddit.com/r/LocalLLaMA/comments/1ptpeyb/technical_log_observed_phase_shift_in_120b_llms/ | Consistent_Tie5875 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptpeyb | false | null | t3_1ptpeyb | /r/LocalLLaMA/comments/1ptpeyb/technical_log_observed_phase_shift_in_120b_llms/ | false | false | self | 0 | null |
PaddleOCR help | 3 | Can someone tell me how to run [PaddleOCR-VL](https://huggingface.co/PaddlePaddle/PaddleOCR-VL) locally using transformer ?? am not able to run it, running into a lot issue, like for starts it would load in in the gpu and run and but then its would just stop | 2025-12-23T09:04:28 | https://www.reddit.com/r/LocalLLaMA/comments/1ptpe1s/paddleocr_help/ | Ready-Ad4340 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptpe1s | false | null | t3_1ptpe1s | /r/LocalLLaMA/comments/1ptpe1s/paddleocr_help/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'so92btUv2x-sag9aEkVPkOEeOZ-pGD9IrU6oXVc8bYc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/so92btUv2x-sag9aEkVPkOEeOZ-pGD9IrU6oXVc8bYc.png?width=108&crop=smart&auto=webp&s=2b6867a99b296af139bf92baba4ba9c23c5190f2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/so92btUv2x-sag9aEkVPkOEeOZ-pGD9IrU6oXVc8bYc.png?width=216&crop=smart&auto=webp&s=0bcedb343872d6905a62e1a99228b3255de7f35b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/so92btUv2x-sag9aEkVPkOEeOZ-pGD9IrU6oXVc8bYc.png?width=320&crop=smart&auto=webp&s=890f7ee50683e41b2d3767b38bcc53a4887c9a2c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/so92btUv2x-sag9aEkVPkOEeOZ-pGD9IrU6oXVc8bYc.png?width=640&crop=smart&auto=webp&s=04346628d934445d1403083d6a50040dfc398816', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/so92btUv2x-sag9aEkVPkOEeOZ-pGD9IrU6oXVc8bYc.png?width=960&crop=smart&auto=webp&s=d9477557868643dec33a38fe14a34cfae73041f3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/so92btUv2x-sag9aEkVPkOEeOZ-pGD9IrU6oXVc8bYc.png?width=1080&crop=smart&auto=webp&s=4a1761447a5093fca86cc1c443e322cd6159d1d7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/so92btUv2x-sag9aEkVPkOEeOZ-pGD9IrU6oXVc8bYc.png?auto=webp&s=80976f19ed367894d3b7b21e812dae8765708544', 'width': 1200}, 'variants': {}}]} |
DGX spark and batch processing | 2 | I read this piece about batch token throughput and saw that b200 was a significant improvement- [AI Hardware Benchmarking & Performance Analysis | Artificial Analysis](https://artificialanalysis.ai/benchmarks/hardware)
Curious if the Spark would see the same speedups I ran some tests myself and it's pretty impressive, with a tuned nemotron3 nano 30b, it peaks at about 1300t/s at 200 concurrent requests. I'm looking forward to further testing. How does this compare to a quad 3090 setup?
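If anyone wants to poke at concurrency scaling on their own hardware, a minimal probe against an OpenAI-compatible endpoint looks roughly like this (a simplified sketch, not the exact harness I used; the URL and model name are placeholders, and it assumes the server returns OpenAI-style usage counts):

```python
import asyncio
import time

import aiohttp

URL = "http://localhost:8000/v1/chat/completions"  # placeholder endpoint
PAYLOAD = {
    "model": "nemotron",                            # placeholder model name
    "messages": [{"role": "user", "content": "Write 200 words about GPUs."}],
    "max_tokens": 256,
}

async def one_request(session: aiohttp.ClientSession) -> int:
    async with session.post(URL, json=PAYLOAD) as resp:
        data = await resp.json()
        return data["usage"]["completion_tokens"]

async def run(concurrency: int) -> None:
    async with aiohttp.ClientSession() as session:
        start = time.perf_counter()
        tokens = await asyncio.gather(*(one_request(session) for _ in range(concurrency)))
        dt = time.perf_counter() - start
        print(f"{concurrency} concurrent: {sum(tokens) / dt:.0f} tok/s aggregate")

asyncio.run(run(200))
```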
https://preview.redd.it/rzuliopzpw8g1.png?width=2085&format=png&auto=webp&s=6f2598bb06f2fe845f1300f76c3ffff66a70e96b
| 2025-12-23T08:54:35 | https://www.reddit.com/r/LocalLLaMA/comments/1ptp8lq/dgx_spark_and_batch_processing/ | Icy_Lack4585 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptp8lq | false | null | t3_1ptp8lq | /r/LocalLLaMA/comments/1ptp8lq/dgx_spark_and_batch_processing/ | false | false | 2 | null | |
Need some help deciding on a GPU for our company | 5 | We want to build a local chatbot that can go through our internal documents, is fine-tuned, can handle 1 or 2 concurrent prompts. We are thinking about using llama 3.1 8B but may wish to scale to a 70B model if the 8B model turns out to be insufficient. Tokens per second should be decent, i.e. users shouldn't have to wait long for the output otherwise they won't use it. Any suggestions? Currently I'm thinking about starting with RTX 6000 ADA 48 GB and if we need more, simply buy another RTX 6000. | 2025-12-23T08:51:59 | https://www.reddit.com/r/LocalLLaMA/comments/1ptp73u/need_some_help_deciding_on_a_gpu_for_our_company/ | RobbertGone | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptp73u | false | null | t3_1ptp73u | /r/LocalLLaMA/comments/1ptp73u/need_some_help_deciding_on_a_gpu_for_our_company/ | false | false | self | 5 | null |
exllamav3 adds support for GLM 4.7 (and 4.6V, + Ministral & OLMO 3) | 46 | Lots of updates this month to exllamav3. Support added for [GLM 4.6V](https://github.com/turboderp-org/exllamav3/commit/4d4992a8b82ae13edf86db2bb19e2de1c522c054), [Ministral](https://github.com/turboderp-org/exllamav3/commit/9b75bc5f58a70cb0e73c45f0bcd7d5959e124aa4), and [OLMO 3](https://github.com/turboderp-org/exllamav3/commit/104268521cdd1b24d19bcf92e5289b10219af5bd) (on the dev branch).
As GLM 4.7 is the same architecture as 4.6, it is already supported.
Several models from these families haven't been quantized and uploaded to HF yet, so if you can't find the one you are looking for, now is your chance to contribute to local AI!
Questions? Ask here or at the [exllama discord](https://discord.gg/wmrxvpdd).
| 2025-12-23T08:13:58 | https://www.reddit.com/r/LocalLLaMA/comments/1ptom2s/exllamav3_adds_support_for_glm_47_and_46v/ | Unstable_Llama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptom2s | false | null | t3_1ptom2s | /r/LocalLLaMA/comments/1ptom2s/exllamav3_adds_support_for_glm_47_and_46v/ | false | false | self | 46 | {'enabled': False, 'images': [{'id': '1-KXNDhLgWor6qVvzamlIzi0stV2OnGlI-S2JgP6F9w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1-KXNDhLgWor6qVvzamlIzi0stV2OnGlI-S2JgP6F9w.png?width=108&crop=smart&auto=webp&s=389f5b93569d8e650d24eab43e46468fdc9d2b93', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/1-KXNDhLgWor6qVvzamlIzi0stV2OnGlI-S2JgP6F9w.png?width=216&crop=smart&auto=webp&s=1b030397a626a3b2caa1d6d0b8d344d58f1c90af', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/1-KXNDhLgWor6qVvzamlIzi0stV2OnGlI-S2JgP6F9w.png?width=320&crop=smart&auto=webp&s=065dfe5fbb7b903752d298cb326a00978c6a8ed0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/1-KXNDhLgWor6qVvzamlIzi0stV2OnGlI-S2JgP6F9w.png?width=640&crop=smart&auto=webp&s=a3cf8ebcbc2bb49deec09d287e9d0f13ba7032d3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/1-KXNDhLgWor6qVvzamlIzi0stV2OnGlI-S2JgP6F9w.png?width=960&crop=smart&auto=webp&s=8f737f7a40d10bd8fbbd854d6cafc2dd6ed2d957', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/1-KXNDhLgWor6qVvzamlIzi0stV2OnGlI-S2JgP6F9w.png?width=1080&crop=smart&auto=webp&s=ddb8495ad5b0b468f7d8b3fa8c4071230d5e77d1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/1-KXNDhLgWor6qVvzamlIzi0stV2OnGlI-S2JgP6F9w.png?auto=webp&s=96b91a9fde1509fc3ff7d0ab5500b08b874a550f', 'width': 1200}, 'variants': {}}]} |
Unpopular Opinion: We don't need AGI, we just need reliable Context. | 0 | Despite the excitement surrounding AGI, attempting to develop a long-term agent today often results in immediate failure: **hallucinations and amnesia.**
Merely making models "smarter" (with more parameters) doesn't seem to change the reality that they are inherently **stateless**. They do not naturally comprehend time, updates, or sequence.
I suspect we are looking at the problem incorrectly. We don't need more computing; we just need better **Context Engineering.**
There is a massive architectural gap in how we currently build agents, specifically:
* **The "Dark Chocolate" Problem:** Vector Search fails at the basic level. (e.g., *User: "I want milk chocolate" -> User: "Actually, scratch that, make it dark."* \-> *Result: Vector DB retrieves both because it lacks the concept of 'override'*).
* **The Silo Error:** Treating "Memory" and "RAG" as independent systems prevents the model from connecting past interactions with static knowledge.
* **The Biological Analogy:** We should likely be modeling the brain's Hippocampus to generate specific "vertical adapters" for knowledge rather than using a single flat index.
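To make the first point concrete, here is a tiny self-contained sketch of the failure, with a toy word-overlap score standing in for embedding similarity (all names here are illustrative):

```python
# Toy illustration of the "override" failure: both memories score against the
# query, and nothing tells the retriever that the second one supersedes the first.
memories = [
    {"t": 1, "text": "user: I want milk chocolate"},
    {"t": 2, "text": "user: actually, scratch that, make it dark chocolate"},
]

def score(query: str, text: str) -> float:
    """Stand-in for cosine similarity: plain word overlap."""
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / len(q)

query = "what chocolate does the user want"
ranked = sorted(memories, key=lambda m: score(query, m["text"]), reverse=True)
print(ranked)  # both entries come back; the contradiction is never resolved

# The fix isn't a bigger model, it's state: keep a per-key "latest value" store
# alongside the vector index, so updates overwrite instead of accumulate.
latest = {}
for m in sorted(memories, key=lambda m: m["t"]):
    latest["chocolate_preference"] = m["text"]  # naive illustration of an override
print(latest["chocolate_preference"])           # "...make it dark chocolate"
```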
I'd like to hear your thoughts-is overcoming "Context" the ultimate hurdle for AGI, or are we missing something else entirely? | 2025-12-23T07:43:58 | https://www.reddit.com/r/LocalLLaMA/comments/1pto4yu/unpopular_opinion_we_dont_need_agi_we_just_need/ | kapil-alchemyst-ai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pto4yu | false | null | t3_1pto4yu | /r/LocalLLaMA/comments/1pto4yu/unpopular_opinion_we_dont_need_agi_we_just_need/ | false | false | self | 0 | null |
LocalLLM feasibility | 0 | I am sure this has been discussed before but my head is spinning with the wealth of information online.
I am fairly new to the LocalLLM topic. I've recently become obsessed with Grok's hyper-realism and its ability to interpret text prompts for image generation, as well as its text-to-video generation and how lifelike its videos are, with minimal artifacts.
Yes...I do want my own "Grok" without its censorship for NSFW content.
My use case is photo editing and video generation. At the moment these are the only features I am interested in (more as time goes on). I don't know much about what I am going to explain next so please be patient with me lol
What I have setup so far
I used Stability Matrix (windows11), and installed packages "Stable Diffusion WebUI Forge" and "Comfy UI"
The Models I downloaded are CLIP-L, T5XXL fp8 e4m3fn, and Flux.1 VAE all from the Hugging Face browser in Stability Matrix. The last image posted is my config if curious.
In Stable Diffusion I used the txt2img generator and the results were extremely disappointing. I spent all day trying to get all the pieces together and the results were soul crushing lol. (I am not very technical)
I know I obviously should not compare Grok to what my current setup is, but based on what I read previously, people are able to do this with relative ease on their hardware. Speaking of which.
i9-13900k
EVGA RTX 3080ti
upgrading the RAM tomorrow from 64 to 128GB.
Please see the results in the Google Drive link, as I'm sure they will speak for themselves. They are labeled accordingly for context. The input prompts were "Can you make him not cry" and "Can you make him stand in the middle of the street" as a test. The results for the second input were very alarming, as it completely ignored my instruction and just recreated the same image. Which makes me think I am missing pieces of the puzzle, or this is just the reality for LocalLLMs.
For the second two images I used img2img and highlighted his cheek. The results are much better, of course. Which leads to my final point.
To summarize: is having my own "Grok" feasible? I understand what I am asking for, which is basically a massively funded project but in my own house. But again, from what I see others talk about, it does not seem that far fetched: an extremely high level of detail, and an AI that interprets my inputs and takes liberties to fill in the blanks, so to speak, so you get very realistic content. Additionally, I read that I may not have been descriptive enough initially with my input text. Or the denoising needs adjustment. And just a ton of other factors.
[https://drive.google.com/drive/folders/13doQGe2sk7rC4Oqvb7sOtxL9YFqO-2V3?usp=sharing](https://drive.google.com/drive/folders/13doQGe2sk7rC4Oqvb7sOtxL9YFqO-2V3?usp=sharing) | 2025-12-23T07:41:14 | https://www.reddit.com/r/LocalLLaMA/comments/1pto3gt/localllm_feasibility/ | Gheedren | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pto3gt | false | null | t3_1pto3gt | /r/LocalLLaMA/comments/1pto3gt/localllm_feasibility/ | false | false | nsfw | 0 | null |
what is a budget friendly setup to run GLM 4.7? | 6 | ideally less than $9k | 2025-12-23T07:39:32 | https://www.reddit.com/r/LocalLLaMA/comments/1pto2h2/what_is_a_budget_friendly_setup_to_run_glm_47/ | Odd-Ordinary-5922 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pto2h2 | false | null | t3_1pto2h2 | /r/LocalLLaMA/comments/1pto2h2/what_is_a_budget_friendly_setup_to_run_glm_47/ | false | false | self | 6 | null |
NyRAG: fully open-source, no-code library to build RAG applications | 0 | Excited to announce a new open-source library to build advanced RAG system in minutes: without writing a single line of code.
Introducing NyRAG (pronounced knee-RAG): an open-source tool that makes creating production-ready RAG applications incredibly simple.
✅ Crawl websites OR process docs (PDF, DOCX, MD)
✅ Hybrid search with Vespa
✅ Multi-query RAG with LLM enhancement
✅ Built-in Chat UI
✅ Fully local or Vespa Cloud
Building RAG systems requires stitching together crawlers, embeddings, vector databases, ranking systems, and LLM integration - each with its own complexity. NyRAG handles everything with a simple YAML config!
• 🕷️ Smart Web Crawling - respects robots.txt, follows subdomains, handles multiple user agents
• 📄 Document Processing - PDFs, Word docs, Markdown, and more via MarkItDown
• 🔍 Vespa-Powered Search - hybrid vector + keyword search with best-in-class ranking
• 🧠 Multi-Query RAG - LLM-enhanced query expansion for better retrieval
• 💬 Instant Chat UI - beautiful interface included
Deploy anywhere:
\- Local Docker for development
\- Vespa Cloud for production
pip install nyrag
Github repo: [https://github.com/abhishekkrthakur/NyRAG](https://github.com/abhishekkrthakur/NyRAG)
Looking forward to your feedback 🙏🏽 | 2025-12-23T07:21:01 | https://www.reddit.com/r/LocalLLaMA/comments/1ptnroe/nyrag_fully_opensource_nocode_library_to_build/ | abhi1thakur | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptnroe | false | null | t3_1ptnroe | /r/LocalLLaMA/comments/1ptnroe/nyrag_fully_opensource_nocode_library_to_build/ | false | false | self | 0 | null |
The "Broken Promise" of AGI: Why your smart agent still has the memory of a goldfish. | 1 | [removed] | 2025-12-23T06:51:44 | https://www.reddit.com/r/LocalLLaMA/comments/1ptna7l/the_broken_promise_of_agi_why_your_smart_agent/ | kapil-alchemyst-ai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptna7l | false | null | t3_1ptna7l | /r/LocalLLaMA/comments/1ptna7l/the_broken_promise_of_agi_why_your_smart_agent/ | false | false | self | 1 | null |
Best 10-25B SOTA (instruct only) LLMs? | 8 | Thx. | 2025-12-23T06:49:43 | https://www.reddit.com/r/LocalLLaMA/comments/1ptn90v/best_1025b_sota_instruct_only_llms/ | Own-Potential-2308 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptn90v | false | null | t3_1ptn90v | /r/LocalLLaMA/comments/1ptn90v/best_1025b_sota_instruct_only_llms/ | false | false | self | 8 | null |
What AI apps are people in places like “Photoshop that” fb group using? | 0 | Can any of local models produce such quality pictures and videos? | 2025-12-23T06:39:18 | https://www.reddit.com/r/LocalLLaMA/comments/1ptn2r2/what_ai_apps_are_people_in_places_like_photoshop/ | FatFigFresh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptn2r2 | false | null | t3_1ptn2r2 | /r/LocalLLaMA/comments/1ptn2r2/what_ai_apps_are_people_in_places_like_photoshop/ | false | false | self | 0 | null |
Batch OCR: Dockerized PaddleOCR pipeline to convert thousands of PDFs into clean text (GPU/CPU, Windows + Linux) | 27 | Dear All,
I just open-sourced Batch OCR — a Dockerized, PaddleOCR-based pipeline for turning large collections of PDFs into clean text files. After testing many OCR/model options from Hugging Face, I settled on PaddleOCR for its speed and accuracy.
https://preview.redd.it/94wg0beyfw8g1.png?width=2740&format=png&auto=webp&s=1f9ac2791c12f525cc1bf1b5f16cbf6f2731fb7c
A simple Gradio UI lets you choose a folder and recursively process PDFs into .txt files for indexing, search, or LLM training.
GitHub: [https://github.com/BoltzmannEntropy/batch-ocr](https://github.com/BoltzmannEntropy/batch-ocr)
Highlights:
\- Process hundreds or thousands of PDFs reliably
\- Extract embedded text when available; fall back to OCR when needed
\- Produce consistent, clean text with a lightweight quality filter
\- Mirror the input folder structure and write results under ocr\_results
\- GPU or CPU: Uses PaddlePaddle CUDA when available; CPU fallback
\- Simple UI: Select folder, list PDFs, initialize OCR, run batch
\- Clean output: Writes <name>\_ocr.txt per PDF; errors as <name>\_ERROR.txt
\- Cross‑platform: Windows and Linux/macOS via Docker
\- Privacy: Everything runs locally; no cloud calls
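The core loop is conceptually simple. Here is a stripped-down sketch of it (not the actual repo code; it assumes the classic PaddleOCR 2.x Python API and PyMuPDF for page rendering, and the folder names are illustrative):

```python
from pathlib import Path

import fitz                      # PyMuPDF: renders PDF pages to images
from paddleocr import PaddleOCR  # assumes the PaddleOCR 2.x Python API

ocr = PaddleOCR(lang="en")       # uses the GPU if paddlepaddle-gpu is installed

def ocr_pdf(pdf_path: Path, out_dir: Path) -> None:
    """Render each page, OCR it, and write one <name>_ocr.txt per PDF."""
    out_dir.mkdir(parents=True, exist_ok=True)
    lines = []
    with fitz.open(pdf_path) as doc:
        for page in doc:
            pix = page.get_pixmap(dpi=200)
            img_path = out_dir / "_page.png"   # scratch file, overwritten per page
            pix.save(str(img_path))
            result = ocr.ocr(str(img_path))
            for page_result in result or []:
                for _box, (text, _conf) in page_result or []:
                    lines.append(text)
    (out_dir / f"{pdf_path.stem}_ocr.txt").write_text("\n".join(lines), encoding="utf-8")

for pdf in Path("input_pdfs").rglob("*.pdf"):
    ocr_pdf(pdf, Path("ocr_results") / pdf.parent.name)
```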
Feedback and contributions welcome. If you try it on a large dataset or different languages, I’d love to hear how it goes.
Best, | 2025-12-23T06:39:03 | https://www.reddit.com/r/LocalLLaMA/comments/1ptn2lq/batch_ocr_dockerized_paddleocr_pipeline_to/ | QuanstScientist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptn2lq | false | null | t3_1ptn2lq | /r/LocalLLaMA/comments/1ptn2lq/batch_ocr_dockerized_paddleocr_pipeline_to/ | false | false | 27 | {'enabled': False, 'images': [{'id': 'jtfq7WMsQ6MVADp_C19uPunr_Ib9UQe2B12piwoyxvY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/jtfq7WMsQ6MVADp_C19uPunr_Ib9UQe2B12piwoyxvY.png?width=108&crop=smart&auto=webp&s=334ec98ec4f9ac9fa9a92ea62fb531c022edaea5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/jtfq7WMsQ6MVADp_C19uPunr_Ib9UQe2B12piwoyxvY.png?width=216&crop=smart&auto=webp&s=483dfa74214c8765c625de19422bdcfa6cfb6e91', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/jtfq7WMsQ6MVADp_C19uPunr_Ib9UQe2B12piwoyxvY.png?width=320&crop=smart&auto=webp&s=313a4a14b6b299a8662d3d42df9e650b8a1bab61', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/jtfq7WMsQ6MVADp_C19uPunr_Ib9UQe2B12piwoyxvY.png?width=640&crop=smart&auto=webp&s=e39d273b2df321cf09ad739dc6906b9cc539ebfc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/jtfq7WMsQ6MVADp_C19uPunr_Ib9UQe2B12piwoyxvY.png?width=960&crop=smart&auto=webp&s=a133f89f9ced7346a95840ecbe68c9ecb2ff6f68', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/jtfq7WMsQ6MVADp_C19uPunr_Ib9UQe2B12piwoyxvY.png?width=1080&crop=smart&auto=webp&s=db145457674c0d4f7c1d3d00e0d296a0c3484eb4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/jtfq7WMsQ6MVADp_C19uPunr_Ib9UQe2B12piwoyxvY.png?auto=webp&s=2ee685bcaee319e44dfa9a09d8b87a42dc9911bf', 'width': 1200}, 'variants': {}}]} | |
[Career Advice/Success] From New Grad to Agentic AI Architect in 1.4 years at Accenture. Now looking for the next step. | 0 | Hey everyone,
I’m reaching a milestone of 1.4 years since I started my professional journey at Accenture. I came in during the peak of the Generative AI surge and was fortunate enough to be positioned as an **Agentic AI Architect/Dev**.
It’s been a steep learning curve, but I’ve been able to move past the "wrapper" phase and into deep **System Design for Multi-Agent Systems (MAS).**
**What I’ve been working on:**
* **Scaling Multi-Agent Systems:** Designing systems where agents don’t just "chat," but collaborate reliably at scale.
* **Cloud Infra:** Heavy lifting on **AWS and Azure** to ensure low latency and high performance.
* **Production Readiness:** Solving the "unreliability" problem in AI by building robust orchestration layers.
I’m incredibly proud of what I’ve built, but I feel like I’m ready for a new environment—ideally somewhere more product-focused or a high-growth startup where I can own the AI stack from end to end.
**I’d love some advice or connections:**
1. For those in the Agentic AI space, what’s the market like right now for specialized MAS architects?
2. Any recommendations for companies currently pushing the boundaries of autonomous agents (beyond just RAG)?
*"I've documented my architecture patterns and projects* in my portfolio with demo videos, architecture diagrams*—happy to share the link in the comments or DMs if anyone is interested!"*.
Looking forward to hearing your thoughts! | 2025-12-23T06:25:02 | https://www.reddit.com/r/LocalLLaMA/comments/1ptmu1g/career_advicesuccess_from_new_grad_to_agentic_ai/ | roshan_harry | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptmu1g | false | null | t3_1ptmu1g | /r/LocalLLaMA/comments/1ptmu1g/career_advicesuccess_from_new_grad_to_agentic_ai/ | false | false | self | 0 | null |
HLE - Countdown Retro Style | 4 | 2025-12-23T05:59:30 | https://www.reddit.com/r/LocalLLaMA/comments/1ptmdvc/hle_countdown_retro_style/ | redlikeazebra | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptmdvc | false | null | t3_1ptmdvc | /r/LocalLLaMA/comments/1ptmdvc/hle_countdown_retro_style/ | false | false | 4 | null | ||
GLM 4.7 top the chart at Rank #6 in WebDev | 136 | [https://huggingface.co/zai-org/GLM-4.7](https://huggingface.co/zai-org/GLM-4.7) | 2025-12-23T05:43:37 | https://www.reddit.com/gallery/1ptm3n4 | GeLaMi-Speaker | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ptm3n4 | false | null | t3_1ptm3n4 | /r/LocalLLaMA/comments/1ptm3n4/glm_47_top_the_chart_at_rank_6_in_webdev/ | false | false | 136 | null | |
Is MiniMax M2 a worthy general-purpose model for its size? | 13 | Seems very competent for coding. There are some references around to it being used as a general-purpose model, but is it competitive in its size tier? (Mainly vs Qwen3-235B I suppose) | 2025-12-23T05:20:51 | https://www.reddit.com/r/LocalLLaMA/comments/1ptloq5/is_minimax_m2_a_worthy_generalpurpose_model_for/ | ForsookComparison | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptloq5 | false | null | t3_1ptloq5 | /r/LocalLLaMA/comments/1ptloq5/is_minimax_m2_a_worthy_generalpurpose_model_for/ | false | false | self | 13 | null |
Clusters of Spare Junk | 1 | I've seen a lot of stuff about running Exo on Mac clusters to run huge models but has anyone tried using it to network random spare devices? I've got a number of old gaming laptops that have GPUs that are otherwise just sitting around. It's would be nice to pool together my desktop card and some older computers to be able to run some bigger models without investing in new hardware. I'm wondering if anyone has done this and actually achieved usable results. Alternatively, have any of you found uses for these older machines? | 2025-12-23T04:43:34 | https://www.reddit.com/r/LocalLLaMA/comments/1ptkz7d/clusters_of_spare_junk/ | spudzo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ptkz7d | false | null | t3_1ptkz7d | /r/LocalLLaMA/comments/1ptkz7d/clusters_of_spare_junk/ | false | false | self | 1 | null |