**Local Video-to-Text Pipeline on Apple Silicon (Whisper + Qwen3-VL) - Optimized for 8GB/16GB RAM**

Hi everyone,
I wanted to share a Python script I built to convert video files into a rich text context suitable for RAG (Retrieval Augmented Generation).
My goal was to process videos locally on my Mac without sending data to the cloud, and crucially, to make it run on machines with limited RAM (like base M1/M2/M3 Airs) without crashing.
**🚀 How it works (The "Smart" Pipeline):**
1. **Scene Detection (OpenCV):** Instead of analyzing every frame (which is slow and redundant), the script detects visual scene changes based on pixel variance. It grabs **one representative frame** per scene.
2. **Audio Transcription (Whisper):** Extracts the full transcript with timestamps.
3. **RAM Optimization (Garbage Collection):** The script runs Whisper first, **unloads it from memory**, forces garbage collection, and only then loads the vision model (Qwen). This prevents OOM errors on 8GB/16GB Macs.
4. **Visual Captioning (Qwen3-VL-2B-Instruct-4bit):** It uses the mlx-vlm library to describe the representative frame of each scene using a customizable prompt.
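Step 3 is what keeps the peak memory footprint down. Stripped of the model-specific code, the pattern is just "load, use, drop the reference, collect" — the helper below is a generic sketch with dummy stand-ins, not code from the script itself:

```python
import gc

def run_stage(load_fn, work_fn):
    """Load a model, use it, then free it before the next stage runs."""
    model = load_fn()
    try:
        return work_fn(model)
    finally:
        # Drop the only reference and force a collection pass so the next
        # model's weights never coexist with this one's in RAM.
        del model
        gc.collect()

# Hypothetical stand-ins for the Whisper / Qwen-VL stages:
transcript = run_stage(lambda: "whisper-model", lambda m: f"{m}: transcript")
captions = run_stage(lambda: "vlm-model", lambda m: f"{m}: captions")
```

In the real script the two stages are `whisper.load_model(...)` / `transcribe(...)` and the mlx-vlm `load(...)` / `generate(...)` calls.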
**✨ Key Features:**
* **Fully Local:** No API keys, no cloud.
* **Efficient:** Doesn't waste compute on identical frames.
* **Structured Output:** Generates a clean .txt file with global context, audio transcript, and chronological visual descriptions.
* **Customizable:** You can change the prompt (e.g., "Describe the emotions", "Read the text on screen").
**🛠️ Usage & Requirements**
**Dependencies:**
You need ffmpeg installed (for Whisper) and the Python libs:
```bash
brew install ffmpeg
pip install opencv-python numpy pillow mlx-vlm openai-whisper torch
```
**Running the script:**
```bash
# Standard usage
python video_rag.py video.mp4

# Advanced (custom prompt + Whisper large)
python video_rag.py meeting.mp4 --whisper-model large-v3 --prompt "Describe the charts on the slide."
```
**🧪 Request for M4 / M4 Pro Users**
I am currently running this on older Apple Silicon. If anyone here has an **M4 or M4 Pro**, I would love to hear your feedback on the inference speed (tokens/sec) for the Qwen-VL part via MLX!
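If you want to report numbers, mlx-vlm can print generation stats when you call `generate(..., verbose=True)`; failing that, a rough manual timing around the call works. The snippet below is a sketch with a dummy stand-in for the real `generate` call, and it assumes you know roughly how many tokens were produced (e.g. `max_tokens`):

```python
import time

def tokens_per_second(generate_fn, token_count):
    """Time one generation call and return an approximate tokens/sec."""
    t0 = time.time()
    generate_fn()
    elapsed = time.time() - t0
    return token_count / elapsed if elapsed > 0 else float("inf")

# Dummy generator standing in for a real mlx_vlm.generate(...) call:
rate = tokens_per_second(lambda: time.sleep(0.05), token_count=60)
```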
**📂 The Code (video\_rag.py)**
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import os
import gc
import re
import time
import argparse
from pathlib import Path

import cv2
import numpy as np
from PIL import Image

# MLX / Qwen-VL
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

# Whisper
import whisper

# --------- QWEN / MLX CONFIG ---------
MODEL_PATH = "mlx-community/Qwen3-VL-2B-Instruct-4bit"
RESIZE_DIM = (384, 384)
# French caption prefixes to strip from the start of VLM answers
PREFIXES_A_SUPPRIMER = [
    "cette image montre", "l'image montre", "sur cette image", "dans cette image",
    "voici", "c'est", "je vois", "je peux voir", "il y a", "on voit", "une vue de",
]
# --------- MODEL LOADING ---------
def load_qwen_model():
    print(f"⬇️ Loading VLM model: {MODEL_PATH}...")
    model, processor = load(MODEL_PATH, trust_remote_code=True)
    config = load_config(MODEL_PATH)
    print("✅ Qwen3-VL loaded.")
    return model, processor, config

def load_whisper_model(name: str):
    print(f"⬇️ Loading Whisper model: {name}...")
    model = whisper.load_model(name)
    print(f"✅ Whisper {name} loaded.")
    return model
# --------- TEXT / TIME UTILITIES ---------
def clean_caption(raw_text: str) -> str:
    cleaned = raw_text.strip()
    if not cleaned:
        return ""
    lower_clean = cleaned.lower()
    # skip refusal-style answers ("sorry...")
    if "désolé" in lower_clean or "sorry" in lower_clean:
        return ""
    for prefix in PREFIXES_A_SUPPRIMER:
        if lower_clean.startswith(prefix):
            cleaned = cleaned[len(prefix):]
            lower_clean = cleaned.lower()
    cleaned = re.sub(
        r"^(que\s|qu'|:|,|\.|je vois)\s*",
        "",
        cleaned,
        flags=re.IGNORECASE,
    ).strip()
    # cut at the last strong punctuation mark (drops a trailing unfinished sentence)
    m = re.search(r"[\.!?]", cleaned[::-1])
    if m:
        end_pos = len(cleaned) - m.start()
        cleaned = cleaned[:end_pos]
    cleaned = cleaned.strip()
    if not cleaned:
        return ""
    return cleaned[0].upper() + cleaned[1:]

def format_time_str(t_sec: float) -> str:
    minutes = int(t_sec // 60)
    seconds = int(t_sec % 60)
    return f"{minutes:02d}:{seconds:02d}"
# --------- SCENE FEATURES ---------
def compute_frame_feature(frame_bgr) -> np.ndarray:
    """
    Builds a cheap fingerprint of a frame for scene detection:
    grayscale -> 64x64 resize -> flattened 0-1 float vector.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (64, 64))
    vec = small.astype("float32") / 255.0
    return vec.flatten()
# --------- PASS 1: SCENE DETECTION (NO QWEN) ---------
def detect_scenes(video_path: str,
                  sample_fps: float = 1.0,
                  scene_threshold: float = 0.20):
    """
    Pass 1: walk the video at sample_fps (e.g. 1 frame/s), compute a
    feature per sampled frame, and detect scene changes whenever the
    mean absolute difference crosses the threshold.
    Returns:
        - scenes_raw: list of dicts {"start_sec", "end_sec"}
        - duration_sec: approximate video duration
    """
    cap = cv2.VideoCapture(video_path)
    if not cap.isOpened():
        raise RuntimeError(f"Cannot open video: {video_path}")
    base_fps = cap.get(cv2.CAP_PROP_FPS)
    if base_fps <= 0:
        base_fps = 25.0
    total_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    duration_sec = total_frames / base_fps if total_frames > 0 else 0
    frame_interval = max(1, int(round(base_fps / sample_fps)))
    print(f"[SCENES] Video FPS ≈ {base_fps:.2f}")
    print(f"[SCENES] Total frames: {total_frames}")
    print(f"[SCENES] Approx. duration: {duration_sec:.1f} s")
    print(f"[SCENES] Sampling at {sample_fps} frame/s => interval of {frame_interval} frames")
    print(f"[SCENES] Scene threshold: {scene_threshold}")
    scenes_raw = []
    last_feat = None
    current_start_sec = None
    prev_t_sec = None
    frame_idx = 0
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        if frame_idx % frame_interval != 0:
            frame_idx += 1
            continue
        t_sec = frame_idx / base_fps
        feat = compute_frame_feature(frame)
        if last_feat is None:
            # first sampled frame
            current_start_sec = t_sec
            prev_t_sec = t_sec
            last_feat = feat
        else:
            diff = float(np.mean(np.abs(feat - last_feat)))
            if diff > scene_threshold:
                # close the previous scene
                scenes_raw.append({
                    "start_sec": current_start_sec,
                    "end_sec": prev_t_sec,
                })
                # start a new scene
                current_start_sec = t_sec
            prev_t_sec = t_sec
            last_feat = feat
        frame_idx += 1
    # close the last scene
    if current_start_sec is not None:
        end_sec = duration_sec if duration_sec > 0 else prev_t_sec
        scenes_raw.append({
            "start_sec": current_start_sec,
            "end_sec": end_sec,
        })
    cap.release()
    print(f"[SCENES] Scenes detected: {len(scenes_raw)}")
    for i, sc in enumerate(scenes_raw, start=1):
        print(f"  SCENE {i}: {format_time_str(sc['start_sec'])} - {format_time_str(sc['end_sec'])}")
    return scenes_raw, duration_sec
# --------- PASS 2: QWEN ON ONE REPRESENTATIVE FRAME PER SCENE ---------
def grab_frame_at_time(video_path: str, t_sec: float):
    """
    Grabs a single frame at t_sec (in seconds).
    """
    cap = cv2.VideoCapture(video_path)
    if not cap.isOpened():
        raise RuntimeError(f"Cannot open video: {video_path}")
    cap.set(cv2.CAP_PROP_POS_MSEC, t_sec * 1000.0)
    ret, frame = cap.read()
    cap.release()
    if not ret:
        return None
    return frame
def describe_scene_qwen(model, processor, config,
                        video_path: str,
                        start_sec: float,
                        end_sec: float,
                        max_tokens: int,
                        prompt: str):
    """
    Picks a representative timestamp (middle of the scene), grabs the
    matching frame, and feeds it to Qwen-VL.
    """
    rep_sec = (start_sec + end_sec) / 2.0
    frame = grab_frame_at_time(video_path, rep_sec)
    if frame is None:
        return None
    small_frame = cv2.resize(frame, RESIZE_DIM)
    frame_rgb = cv2.cvtColor(small_frame, cv2.COLOR_BGR2RGB)
    pil_image = Image.fromarray(frame_rgb)
    formatted_prompt = apply_chat_template(
        processor, config, prompt, num_images=1
    )
    output = generate(
        model,
        processor,
        formatted_prompt,
        pil_image,
        max_tokens=max_tokens,
        verbose=False,
        repetition_penalty=1.05,
        temp=0.0,
    )
    raw_text = output.text if hasattr(output, "text") else str(output)
    cleaned = clean_caption(raw_text)
    if not cleaned:
        return None
    return cleaned
def describe_all_scenes(model, processor, config,
                        video_path: str,
                        scenes_raw,
                        max_tokens: int,
                        prompt: str):
    """
    For each raw scene (start_sec, end_sec), calls Qwen-VL ONCE and
    returns a list of enriched scenes:
        {
            "start_sec": ...,
            "end_sec": ...,
            "start_str": "MM:SS",
            "end_str": "MM:SS",
            "caption": "..."
        }
    """
    scenes = []
    t0 = time.time()
    for idx, sc in enumerate(scenes_raw, start=1):
        start_sec = sc["start_sec"]
        end_sec = sc["end_sec"]
        print(f"[VLM-SCENE] SCENE {idx} => {format_time_str(start_sec)} - {format_time_str(end_sec)}")
        caption = describe_scene_qwen(
            model,
            processor,
            config,
            video_path,
            start_sec,
            end_sec,
            max_tokens=max_tokens,
            prompt=prompt,
        )
        if caption is None:
            caption = "(Description indisponible)"
        scene_entry = {
            "start_sec": start_sec,
            "end_sec": end_sec,
            "start_str": format_time_str(start_sec),
            "end_str": format_time_str(end_sec),
            "caption": caption,
        }
        print("  ->", caption)
        scenes.append(scene_entry)
    print(f"[VLM-SCENE] Total VLM time for scenes: {time.time() - t0:.1f} s")
    return scenes
# --------- WHISPER ---------
def transcribe_audio_whisper(whisper_model, video_path: str, language: str | None = None) -> dict:
    """
    Transcribes the video directly (Whisper calls ffmpeg internally).
    Returns the full result object (including segments).
    """
    print("[WHISPER] Transcribing...")
    t0 = time.time()
    result = whisper_model.transcribe(video_path, language=language)
    print(f"[WHISPER] Transcription done in {time.time() - t0:.1f} s")
    return result
# --------- FINAL TEXT ASSEMBLY ---------
# Note: the generated context file is intentionally in French, to match
# the French captions and transcript.
def build_output_text(transcription: dict,
                      scenes,
                      video_path: str,
                      duration_sec: float) -> str:
    lines = []
    lines.append("### CONTEXTE VIDEO POUR LLM (UTF-8)\n")
    lines.append(f"Fichier vidéo d'origine : {video_path}")
    lines.append(f"Durée approximative : {duration_sec:.1f} secondes\n")
    # --- SECTION 0: rough global description ---
    lines.append("SECTION 0 : DESCRIPTION GLOBALE (à partir des scènes)\n")
    if scenes:
        first = scenes[0]
        mid = scenes[len(scenes) // 2]
        last = scenes[-1]
        lines.append(f"- Début [{first['start_str']} - {first['end_str']}]: {first['caption']}")
        if mid is not first and mid is not last:
            lines.append(f"- Milieu [{mid['start_str']} - {mid['end_str']}]: {mid['caption']}")
        lines.append(f"- Fin [{last['start_str']} - {last['end_str']}]: {last['caption']}")
    else:
        lines.append("(Aucune scène détectée.)")
    lines.append("")
    # --- SECTION 1: audio transcript ---
    lines.append("SECTION 1 : TRANSCRIPTION AUDIO (Whisper)\n")
    full_text = transcription.get("text", "").strip()
    lines.append("TEXTE COMPLET :")
    lines.append(full_text if full_text else "(Transcription vide ou indisponible.)")
    lines.append("")
    if "segments" in transcription:
        lines.append("SEGMENTS HORODATES :")
        for seg in transcription["segments"]:
            start = seg.get("start", 0.0)
            end = seg.get("end", 0.0)
            txt = seg.get("text", "").strip()
            m1, s1 = divmod(int(start), 60)
            m2, s2 = divmod(int(end), 60)
            lines.append(f"[{m1:02d}:{s1:02d} - {m2:02d}:{s2:02d}] {txt}")
        lines.append("")
    # --- SECTION 2: described visual scenes ---
    lines.append("SECTION 2 : SCENES VISUELLES (Qwen3-VL, 1 description par scène)\n")
    if not scenes:
        lines.append("(Aucune scène disponible.)")
    else:
        for idx, sc in enumerate(scenes, start=1):
            lines.append(f"SCENE {idx} [{sc['start_str']} - {sc['end_str']}]")
            lines.append(f"- Description : {sc['caption']}")
            lines.append("")
    lines.append("\nFIN DU CONTEXTE.\n")
    return "\n".join(lines)
# --------- MAIN ---------
def main():
    parser = argparse.ArgumentParser(
        description="Video analysis v3.1: scene detection + Whisper + Qwen3-VL (one description per scene)."
    )
    parser.add_argument("video", help="Path to the video (e.g. .mp4, iPhone .mov, etc.)")
    parser.add_argument("--sample-fps", type=float, default=1.0,
                        help="Sampling FPS for scene detection (default: 1.0)")
    parser.add_argument("--scene-threshold", type=float, default=0.20,
                        help="Scene-change threshold (mean difference 0-1, default: 0.20)")
    parser.add_argument("--whisper-model", type=str, default="small",
                        help="Whisper model: small, medium, large-v3, etc. (default: small)")
    parser.add_argument("--whisper-lang", type=str, default=None,
                        help="Language code (e.g. 'fr'), or None for auto-detection.")
    parser.add_argument("--max-tokens", type=int, default=60,
                        help="Max tokens generated by Qwen-VL per scene (default: 60)")
    parser.add_argument(
        "--prompt",
        type=str,
        default=(
            "Décris factuellement ce qui est présent dans l'image en français. "
            "Sois direct et précis, sans interprétation inutile."
        ),
        help="Description prompt for Qwen-VL (default: factual description, in French)."
    )
    parser.add_argument("--out", type=str, default="contexte_video_v3_1.txt",
                        help="Output text file (UTF-8).")
    args = parser.parse_args()
    video_path = os.path.abspath(args.video)
    if not os.path.exists(video_path):
        raise FileNotFoundError(f"Video not found: {video_path}")
    # 1) Scene detection (fast, no models loaded)
    scenes_raw, duration_sec = detect_scenes(
        video_path,
        sample_fps=args.sample_fps,
        scene_threshold=args.scene_threshold,
    )
    # 2) Whisper first (audio)
    model_whisper = load_whisper_model(args.whisper_model)
    transcription = transcribe_audio_whisper(
        model_whisper,
        video_path,
        language=args.whisper_lang
    )
    # 🔥 Free Whisper from RAM before loading the VLM
    del model_whisper
    gc.collect()
    # 3) Then Qwen-VL (vision)
    model_vlm, processor_vlm, config_vlm = load_qwen_model()
    # 4) Describe each scene (one representative frame)
    scenes = describe_all_scenes(
        model_vlm,
        processor_vlm,
        config_vlm,
        video_path,
        scenes_raw,
        max_tokens=args.max_tokens,
        prompt=args.prompt,
    )
    # 5) Build the final text
    output_text = build_output_text(
        transcription,
        scenes,
        video_path,
        duration_sec,
    )
    out_path = Path(args.out)
    out_path.write_text(output_text, encoding="utf-8")
    print(f"\n✅ Context file v3.1 written: {out_path}")
    print("   You can now paste this file into Open WebUI or LM Studio (RAG).")

if __name__ == "__main__":
    main()
**Never been a better time to learn to write a good rhyme!**

Adversarial Poetry as a Universal Single-Turn Jailbreak Mechanism in Large Language Models: https://arxiv.org/abs/2511.15304
**Mac M3 Ultra 512GB setup**

I was given this Mac for local code development on an air-gapped network; before it goes in, I can set it up however I want.
I usually use VS Code and Cline + OpenRouter.
What should I do to work with a local model, and which one should I install (and how do I use it)?
**DeepSeek OCR Swift port**

Maybe it's more AI slop, but as long as I can download the binary and run it successfully, I am happy :)

https://github.com/mzbac/deepseek-ocr.swift
**Best local LLM for everyday questions & step-by-step tutoring (36GB unified RAM)?**

Hey everyone,
I'm currently running **qwen3-coder-30b** locally for coding tasks (open to suggestions for a coding model too!).
Now I'm looking for a second local model that's better at being a "teacher" - something I can use for:
* Normal everyday questions
* Studying new programming concepts
* Explaining things step by step
* Walking through examples slowly, like a real tutor
**Heads-up: across multiple inference turns, it pays to reinforce (reward, credit, give approval of) good LLM responses**

I'm sure many of you are aware of this, but I thought I'd mention it.

LLM training (finetuning, but also pretraining) is based on actual human dialogue. In human dialogue, we have cues we use to "give props" to a conversation partner, even during a work in progress: "Great, now .." or "This looks good, how about if we .."

You may already do this naturally in LLM conversations, or you may avoid it because it seems silly to boost the confidence of some silicon wafers.

However, adding "Great, now ..", "This looks good, ..", or "Great work! .." will statistically "bring in" the human conversations that used these or similar terms. It will thus mimic even more closely the kinds of training conversations that have a collaborative tone and a progressive, iterative focus.

Doing this, the conversation will narrow in on a more dialogue- and collaboration-oriented approach. It will also narrow the reply scope of the next inference, and may reduce the amount of fluff and open-endedness in the LLM's response. It also encourages the LLM to reuse content it has already produced.

This is good not only for cutting down on headaches and fluff; I think it is also potentially measurable as a token saver, which is useful for local inference. Maybe someone can measure this?

**Note 1:** Do not use "great" or "awesome" in a ritual sense or as conversational filler, like you guys over the Atlantic (Americans) may tend to do. Use it to signal that the previous answer was on target and we're moving the right way - let's stay on target.

**Note 2:** The length and elaboration of the "reward" you give the LLM may alter the quality of this effect. A longer "reward" isn't necessarily better.
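If anyone does want to measure the token effect, one low-tech way is to send the same follow-up with and without an approval cue to a local OpenAI-compatible server and compare `usage.completion_tokens` in the responses. The sketch below only builds the request bodies; the endpoint URL and model name are placeholders for whatever you run locally (llama.cpp server, LM Studio, etc.):

```python
import json

def chat_request_body(prompt, model="local-model"):
    # JSON body for an OpenAI-compatible /v1/chat/completions endpoint;
    # the model name is a placeholder for your locally served model.
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })

# Same request, with and without an approval cue up front. POST each to
# your local server and compare response["usage"]["completion_tokens"].
baseline = chat_request_body("Now refactor this function further.")
cued = chat_request_body("Great, that's exactly right. Now refactor this function further.")
```

Average over several prompts and runs, since response length varies even at fixed settings.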
**Yes, it is possible to uncensor gpt-oss-20b - ArliAI/gpt-oss-20b-Derestricted**

Original discussion of the initial **[Arli AI](https://www.arliai.com)**-created GLM-4.5-Air-Derestricted model, which was ablated using u/grimjim's new ablation method: [The most objectively correct way to abliterate so far - ArliAI/GLM-4.5-Air-Derestricted](https://www.reddit.com/r/LocalLLaMA/comments/1p5epot/the_most_objectively_correct_way_to_abliterate_so/)
(Note: Derestricted is a name given to models created by Arli AI using this method, but the method officially is just called Norm-Preserving Biprojected Abliteration by u/grimjim)
Hey everyone, Owen here from **[Arli AI](https://www.arliai.com)** again. In my previous post, I got a lot of requests to attempt this derestriction on OpenAI's gpt-oss models, as they are intelligent models that are infamous for being very... restricted.

I thought it would be a big challenge and interesting to attempt, so that was the model I decided to derestrict next. The 120b version is more unwieldy to move around and load in and out of VRAM/RAM while experimenting, so I started with the 20b version first, but I will get to the 120b next, which should be super interesting.

As for the 20b model here, it seems to have worked! The model can now respond to questions that OpenAI never would have approved of answering (lol!). It also seems to have cut down its wasteful looping around deciding whether it can or cannot answer a question based on a non-existent policy in its reasoning, although this isn't completely removed yet. I suspect a more customized harmful/harmless dataset that specifically targets this behavior might be useful, so that will be what I work on next.

Otherwise, I think this is just an outright improved model over the original, as it is much more useful now than in its original form, where it would flag a lot of false positives and be absolutely useless in certain situations just because of "safety".

In order to modify the model's weights, I also had to start from a BF16-converted version, as the model, as you all might know, was released in MXFP4 format; running the ablation on the BF16-converted model seems to work well. I think this shows that this new, essentially direction-based abliteration method is really flexible and probably works super well on any model.

As for quants, I'm not one to worry about making GGUFs myself, because I'm sure the GGUF makers will get to it fast and do a better job than I can. Also, there are no FP8 or INT8 quants for now, because the model is pretty small, and those who run FP8 or INT8 quants usually have a substantial GPU setup anyway.

Try it out and have fun! This time it's really for r/LocalLLaMA, because we don't even run this model on our API service.

Model: https://huggingface.co/ArliAI/gpt-oss-20b-Derestricted
**CodeModeTOON**

I built an MCP workflow orchestrator after hitting context limits on SRE automation.
**Background**: I'm an SRE who's been using Claude/Codex for infrastructure work (K8s audits, incident analysis, research). The problem: multi-step workflows generate huge JSON blobs that blow past context windows.
**What I built**: CodeModeTOON - an MCP server that lets you define workflows (think: "audit this cluster", "analyze these logs", "research this library") instead of chaining individual tool calls.
**Example workflows included:**
- `k8s-detective`: Scans pods/deployments/services, finds security issues, rates severity
- `post-mortem`: Parses logs, clusters patterns, finds anomalies
- `research`: Queries multiple sources in parallel (Context7, Perplexity, Wikipedia), optional synthesis
**The compression part**: Uses TOON encoding on results. Gets ~83% savings on structured data (K8s manifests, log dumps), but only ~4% on prose. Mostly useful for keeping large datasets in context.
**Limitations:**
- Uses Node's `vm` module (not for multi-tenant prod)
- Compression doesn't help with unstructured text
- Early stage, some rough edges
I've been using it daily in my workflows and it's been solid so far. Feedback is very much appreciated; I'm especially curious how others are handling similar challenges with AI + infrastructure automation.
MIT licensed: https://github.com/ziad-hsn/code-mode-toon
Inspired by Anthropic and Cloudflare's posts on the "context trap" in agentic workflows:
- https://blog.cloudflare.com/code-mode/
- https://www.anthropic.com/engineering/code-execution-with-mcp
**I built a free tool to convert JSON to TOON (Token-Oriented Object Notation) to save ~50% on LLM context tokens**
**DeepSeekMath-V2: Towards Self-Verifiable Mathematical Reasoning**

Link: https://huggingface.co/deepseek-ai/DeepSeek-Math-V2
Canvas in the Gemini app is really frustrating. Gemini 3.0 just ignores my instructions when I ask for a word-for-word transcription of a PDF | 0 | Gemini 3.0 day by day doesn't look like a big improvement over gemini 2.5 . A pretty minor improvement, and it even gets worse in many tasks. | 2025-11-27T12:05:55 | https://www.reddit.com/r/LocalLLaMA/comments/1p80lsc/canvas_in_the_gemini_app_is_really_frustrating/ | Longjumping_Fly_2978 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p80lsc | false | null | t3_1p80lsc | /r/LocalLLaMA/comments/1p80lsc/canvas_in_the_gemini_app_is_really_frustrating/ | false | false | self | 0 | null |
How to analyse text for 5W | 0 | Based on provided content in three files 20 paragraphs each, would like to generate answer to 5W (who what where when why how). Any tips of existing platform for this that does not make stuff up, and by request can find additional sources on the web to add more 5W info? Thanks | 2025-11-27T12:05:02 | https://www.reddit.com/r/LocalLLaMA/comments/1p80l7x/how_to_analyse_text_for_5w/ | grys | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p80l7x | false | null | t3_1p80l7x | /r/LocalLLaMA/comments/1p80l7x/how_to_analyse_text_for_5w/ | false | false | self | 0 | null |
Hardware benchmark | 0 | At work we bought a small NPU unit and we want to see how much is slower or faster in comparison to a customer machine running 2 5090, instead to set the same environment, same model etc, is there something to quick check the differences? | 2025-11-27T12:02:03 | https://www.reddit.com/r/LocalLLaMA/comments/1p80j7t/hardware_benchmark/ | deepsky88 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p80j7t | false | null | t3_1p80j7t | /r/LocalLLaMA/comments/1p80j7t/hardware_benchmark/ | false | false | self | 0 | null |
Paper page - NVIDIA Nemotron Parse 1.1 | 43 | More OCR!
"We introduce Nemotron-Parse-1.1, a lightweight document parsing and OCR model that advances the capabilities of its predecessor, Nemoretriever-Parse-1.0. Nemotron-Parse-1.1 delivers improved capabilities across general OCR, markdown formatting, structured table parsing, and text extraction from pictures, charts, and diagrams. It also supports a longer output sequence length for visually dense documents. As with its predecessor, it extracts bounding boxes of text segments, as well as corresponding semantic classes. Nemotron-Parse-1.1 follows an encoder-decoder architecture with 885M parameters, including a compact 256M-parameter language decoder. It achieves competitive accuracy on public benchmarks making it a strong lightweight OCR solution. We release the model weights publicly on Huggingface, as well as an optimized NIM container, along with a subset of the training data as part of the broader Nemotron-VLM-v2 dataset. Additionally, we release Nemotron-Parse-1.1-TC which operates on a reduced vision token length, offering a 20% speed improvement with minimal quality degradation." 
| 2025-11-27T11:50:47 | https://huggingface.co/papers/2511.20478 | ab2377 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1p80bw7 | false | null | t3_1p80bw7 | /r/LocalLLaMA/comments/1p80bw7/paper_page_nvidia_nemotron_parse_11/ | false | false | default | 43 | {'enabled': False, 'images': [{'id': '2qpOcYaS2j2UQIHcV7jQDC__U_Q0iB2Xn13ee-Dp1dU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/2qpOcYaS2j2UQIHcV7jQDC__U_Q0iB2Xn13ee-Dp1dU.png?width=108&crop=smart&auto=webp&s=2ba45db7caa196d06be1a45141e0bb291a1f9025', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/2qpOcYaS2j2UQIHcV7jQDC__U_Q0iB2Xn13ee-Dp1dU.png?width=216&crop=smart&auto=webp&s=5ff35cbe2aec85d2cfe47a5db22ecc810fbfd507', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/2qpOcYaS2j2UQIHcV7jQDC__U_Q0iB2Xn13ee-Dp1dU.png?width=320&crop=smart&auto=webp&s=ae48d6d9c8cbb42e403d6189f297dff34422dbde', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/2qpOcYaS2j2UQIHcV7jQDC__U_Q0iB2Xn13ee-Dp1dU.png?width=640&crop=smart&auto=webp&s=5420a7b266ae9ecbe02f1c320631b2ab2fec3c4b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/2qpOcYaS2j2UQIHcV7jQDC__U_Q0iB2Xn13ee-Dp1dU.png?width=960&crop=smart&auto=webp&s=b10e809da313c4d4063df36402cf3b91fee65f5b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/2qpOcYaS2j2UQIHcV7jQDC__U_Q0iB2Xn13ee-Dp1dU.png?width=1080&crop=smart&auto=webp&s=7d06a295bcd70a4f65daee5e6fa9f138236a13e9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/2qpOcYaS2j2UQIHcV7jQDC__U_Q0iB2Xn13ee-Dp1dU.png?auto=webp&s=ce326ced4192ed5af5e37698c064b6d086c4e3e5', 'width': 1200}, 'variants': {}}]} |
Deepseek Math V2 - best open-source math model | 8 | [https://huggingface.co/deepseek-ai/DeepSeek-Math-V2](https://huggingface.co/deepseek-ai/DeepSeek-Math-V2) | 2025-11-27T11:35:29 | Which_Pound_6751 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p802i7 | false | null | t3_1p802i7 | /r/LocalLLaMA/comments/1p802i7/deepseek_math_v2_best_opensource_math_model/ | false | false | default | 8 | {'enabled': True, 'images': [{'id': 'sjnyxhh1ds3g1', 'resolutions': [{'height': 27, 'url': 'https://preview.redd.it/sjnyxhh1ds3g1.png?width=108&crop=smart&auto=webp&s=6e486d33564f50a0fadc8ab64316bfc8730a577d', 'width': 108}, {'height': 54, 'url': 'https://preview.redd.it/sjnyxhh1ds3g1.png?width=216&crop=smart&auto=webp&s=dc6bb3392897432b9bc9c679d0badb42b4005f0e', 'width': 216}, {'height': 80, 'url': 'https://preview.redd.it/sjnyxhh1ds3g1.png?width=320&crop=smart&auto=webp&s=703717ab0c1b012b033b494b9b233009edd74e8b', 'width': 320}, {'height': 160, 'url': 'https://preview.redd.it/sjnyxhh1ds3g1.png?width=640&crop=smart&auto=webp&s=3baadb18d297c7c44fa03c15a260dc6b124ece9c', 'width': 640}, {'height': 240, 'url': 'https://preview.redd.it/sjnyxhh1ds3g1.png?width=960&crop=smart&auto=webp&s=8d70eff76eaba5a6b0946e85809bae71c794b0c1', 'width': 960}, {'height': 270, 'url': 'https://preview.redd.it/sjnyxhh1ds3g1.png?width=1080&crop=smart&auto=webp&s=a2509089544e5731dcb4a787cf1ebf8e7c706cf7', 'width': 1080}], 'source': {'height': 1749, 'url': 'https://preview.redd.it/sjnyxhh1ds3g1.png?auto=webp&s=7a889a51a58a4a7b8021191d43b548431eaaa7d7', 'width': 6992}, 'variants': {}}]} | |
How do I know if this model will work well on my PC? | 0 | I see many models on HuggingFace, but I don't know if a specific model will work well on my computer. How can I find out the specifications of a model? | 2025-11-27T11:34:12 | https://www.reddit.com/r/LocalLLaMA/comments/1p801qg/how_do_i_know_if_this_model_will_work_well_on_my/ | Hot-Necessary-4945 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p801qg | false | null | t3_1p801qg | /r/LocalLLaMA/comments/1p801qg/how_do_i_know_if_this_model_will_work_well_on_my/ | false | false | self | 0 | null |
intel/linux-npu-driver: Intel® NPU (Neural Processing Unit) Driver | 8 | 2025-11-27T11:33:05 | https://github.com/intel/linux-npu-driver | Dontdoitagain69 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1p8010i | false | null | t3_1p8010i | /r/LocalLLaMA/comments/1p8010i/intellinuxnpudriver_intel_npu_neural_processing/ | false | false | default | 8 | {'enabled': False, 'images': [{'id': '26k0p1BJHx2gqon8nqRDewfW78Lv0IpNSkjWsd3VMPY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/26k0p1BJHx2gqon8nqRDewfW78Lv0IpNSkjWsd3VMPY.png?width=108&crop=smart&auto=webp&s=25083599369c37c63a81c260e7c6a319b028a884', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/26k0p1BJHx2gqon8nqRDewfW78Lv0IpNSkjWsd3VMPY.png?width=216&crop=smart&auto=webp&s=199e2e20d2d32b1b47ad17a7733438fdf45a8683', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/26k0p1BJHx2gqon8nqRDewfW78Lv0IpNSkjWsd3VMPY.png?width=320&crop=smart&auto=webp&s=7c75f708e1d38c74b42b351daa1ce299d82234fc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/26k0p1BJHx2gqon8nqRDewfW78Lv0IpNSkjWsd3VMPY.png?width=640&crop=smart&auto=webp&s=77a6e1e6c3f07fddb3c123e0a7b4aa75ce7a230e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/26k0p1BJHx2gqon8nqRDewfW78Lv0IpNSkjWsd3VMPY.png?width=960&crop=smart&auto=webp&s=7f13c59a4e760987178b6b244ea460393282be32', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/26k0p1BJHx2gqon8nqRDewfW78Lv0IpNSkjWsd3VMPY.png?width=1080&crop=smart&auto=webp&s=c54054ab25b4e567a596716db6792580702fea39', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/26k0p1BJHx2gqon8nqRDewfW78Lv0IpNSkjWsd3VMPY.png?auto=webp&s=0eb529bbbedc7cf75376d68b2361f6186052987d', 'width': 1200}, 'variants': {}}]} | |
PrimeIntellect / INTELLECT-3 (GLM 4.5 Air finetune) | 29 | **INTELLECT-3** is a 106B (A12B) parameter Mixture-of-Experts reasoning model post-trained from [GLM-4.5-Air-Base](https://huggingface.co/zai-org/GLM-4.5-Air-Base) using supervised fine-tuning (SFT) followed by large-scale reinforcement learning (RL).
https://preview.redd.it/n0djc6r9as3g1.png?width=1172&format=png&auto=webp&s=bac28d8bf5b3dbdd1c2f8b175526b551b7c2a5ad
[https://huggingface.co/PrimeIntellect/INTELLECT-3](https://huggingface.co/PrimeIntellect/INTELLECT-3)
[https://huggingface.co/bartowski/PrimeIntellect\_INTELLECT-3-GGUF](https://huggingface.co/bartowski/PrimeIntellect_INTELLECT-3-GGUF)
| 2025-11-27T11:20:00 | https://www.reddit.com/r/LocalLLaMA/comments/1p7zt7c/primeintellect_intellect3_glm_45_air_finetune/ | jacek2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p7zt7c | false | null | t3_1p7zt7c | /r/LocalLLaMA/comments/1p7zt7c/primeintellect_intellect3_glm_45_air_finetune/ | false | false | 29 | null | |
Free JSON → TOON converter. Smaller structures. Fewer tokens. | 0 | If you’re burning money on OpenAI or Claude APIs, or on your local rig, you’re probably wasting 30 to 60 percent of your tokens. TOON format fixes that.
We just released a free JSON to TOON converter that shows you the exact savings. No guessing. No signup.
Key points:
- Convert JSON to TOON and back, instantly
- Live token count comparison
- Cost savings calculator baked in
- Pre built templates for common structures
- 100 percent client side. Nothing leaves your browser
If your company spends around 10k per month on LLM APIs, TOON can cut 3 to 6k off that bill. The math becomes obvious once you run your own payloads through the tool.
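To see where the savings come from, here is a rough, hand-rolled illustration (not the tool itself; the TOON syntax below is approximated from public examples, and the real spec has more rules): uniform arrays of objects collapse into a single field header plus CSV-like rows.

```python
import json

records = [
    {"id": 1, "name": "Alice", "role": "admin"},
    {"id": 2, "name": "Bob", "role": "user"},
    {"id": 3, "name": "Carol", "role": "user"},
]

def to_toonish(key, rows):
    # Declare the field names once, then emit one CSV-like line per row.
    fields = list(rows[0].keys())
    lines = [f"{key}[{len(rows)}]{{{','.join(fields)}}}:"]
    lines += ["  " + ",".join(str(r[f]) for f in fields) for r in rows]
    return "\n".join(lines)

json_text = json.dumps({"users": records})
toon_text = to_toonish("users", records)
# Repeated keys, quotes and braces disappear, so the character (and token) count drops.
print(len(toon_text) < len(json_text))  # → True
```

Run your own payloads through the converter for real token counts; the win grows with the number of rows, since the per-row overhead is what gets eliminated.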
Try it here: https://platinum.ai/tools/json-to-toon | 2025-11-27T11:16:19 | https://www.reddit.com/r/LocalLLaMA/comments/1p7zqxw/free_json_toon_converter_smaller_structures_fewer/ | platinumai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p7zqxw | false | null | t3_1p7zqxw | /r/LocalLLaMA/comments/1p7zqxw/free_json_toon_converter_smaller_structures_fewer/ | false | false | self | 0 | null |
Cooling RTX 5090 + 9950X when both run at full capacity using an Air Cooler? Is it possible? | 0 | PC will be used for Nvidia Omniverse workloads / research. There will be times when it will be running both a ray tracer / heavy renderer on the GPU and AI / Physics / FEM solvers on the CPU.
Can I cool them if I use a Noctua NH-D15 G2 or a Phantom Spirit 120 SE/EVO and 8 Arctic p12 pro fans?
Do I need to duct the CPU intake to PC case intake fan? Should I use a triple fan setup for the CPU?
Would it be better if I used an inverted case (GPU on top) like the Lian Li O11 Dynamic?
Is anyone running such a build and what is your experience?
The alternative would be a Corsair Titan 360 AIO / Arctic LF PRO 360/420 - but I really want to avoid having a water-cooling system in Computer and facing a possible catastrophic failure. | 2025-11-27T11:13:42 | https://www.reddit.com/r/LocalLLaMA/comments/1p7zpgp/cooling_rtx_5090_9950x_when_both_run_at_full/ | AdvancedCybernetics | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p7zpgp | false | null | t3_1p7zpgp | /r/LocalLLaMA/comments/1p7zpgp/cooling_rtx_5090_9950x_when_both_run_at_full/ | false | false | self | 0 | null |
A structured prompting protocol to mitigate context entropy in long-session LLM coding tasks (tested on GPT-4, Claude, Gemini). | 0 | Hi everyone. Like many of you, I'm banging my head against the wall over memory degradation during long coding sessions. No matter how large the context window is, after N turns the model starts to hallucinate or lose initial instructions.
I've developed a manual **protocol** to 'force' state retention. It's a hacky, non-architectural solution, but it works for now.
Looking for technical **feedback**: [https://github.com/robertomisuraca-blip/LLM-Entropy-Fix-Protocol](https://github.com/robertomisuraca-blip/LLM-Entropy-Fix-Protocol) | 2025-11-27T10:57:16 | https://www.reddit.com/r/LocalLLaMA/comments/1p7zfak/a_structured_prompting_protocol_to_mitigate/ | Roberto-APSC | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p7zfak | false | null | t3_1p7zfak | /r/LocalLLaMA/comments/1p7zfak/a_structured_prompting_protocol_to_mitigate/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'BTjbwkTFvH_-TfpkVdL-FR81czxM7iVWGKQ-atat1TM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/BTjbwkTFvH_-TfpkVdL-FR81czxM7iVWGKQ-atat1TM.png?width=108&crop=smart&auto=webp&s=423f5ad0fdacb3fb6a98eae8010d3d97a24809d9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/BTjbwkTFvH_-TfpkVdL-FR81czxM7iVWGKQ-atat1TM.png?width=216&crop=smart&auto=webp&s=23d434521390aa04afd49621f2a0bdd5f0519211', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/BTjbwkTFvH_-TfpkVdL-FR81czxM7iVWGKQ-atat1TM.png?width=320&crop=smart&auto=webp&s=ed1ab1200ec5a34d211b583f6654abace4937ae0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/BTjbwkTFvH_-TfpkVdL-FR81czxM7iVWGKQ-atat1TM.png?width=640&crop=smart&auto=webp&s=5fad9f9ee8fb036a39cb320d8833f0e5150204b8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/BTjbwkTFvH_-TfpkVdL-FR81czxM7iVWGKQ-atat1TM.png?width=960&crop=smart&auto=webp&s=8d6bd69dd74afff5a7e9a0cc524067c8706e5c99', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/BTjbwkTFvH_-TfpkVdL-FR81czxM7iVWGKQ-atat1TM.png?width=1080&crop=smart&auto=webp&s=744c4c5d8e353af82653a1189bc27537a97bc363', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/BTjbwkTFvH_-TfpkVdL-FR81czxM7iVWGKQ-atat1TM.png?auto=webp&s=1c260766fef309b0a07a305ffce9fa81a7a19751', 'width': 1200}, 'variants': {}}]} |
deepseek-ai/DeepSeek-Math-V2 · Hugging Face | 322 | 2025-11-27T10:47:09 | https://huggingface.co/deepseek-ai/DeepSeek-Math-V2 | Dark_Fire_12 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1p7z9g1 | false | null | t3_1p7z9g1 | /r/LocalLLaMA/comments/1p7z9g1/deepseekaideepseekmathv2_hugging_face/ | false | false | default | 322 | {'enabled': False, 'images': [{'id': 'NNRX5IH0bPXI-mJ26LQk19NWgnKHeMgBlqbRSXbbGFk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/NNRX5IH0bPXI-mJ26LQk19NWgnKHeMgBlqbRSXbbGFk.png?width=108&crop=smart&auto=webp&s=7f8963226f5d266a1d2c6ee490749900557447b6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/NNRX5IH0bPXI-mJ26LQk19NWgnKHeMgBlqbRSXbbGFk.png?width=216&crop=smart&auto=webp&s=2271e81304c5c61c38e77f4804f7d5bfafb92f4d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/NNRX5IH0bPXI-mJ26LQk19NWgnKHeMgBlqbRSXbbGFk.png?width=320&crop=smart&auto=webp&s=91f302ebd9cfe99975d248bc9c3e3674947c4a18', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/NNRX5IH0bPXI-mJ26LQk19NWgnKHeMgBlqbRSXbbGFk.png?width=640&crop=smart&auto=webp&s=91ce1e1d706821a2296ebf8e620933cc5aa91b2f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/NNRX5IH0bPXI-mJ26LQk19NWgnKHeMgBlqbRSXbbGFk.png?width=960&crop=smart&auto=webp&s=528b2cf3b5d5f579f8cae268a6e1a67ec320814a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/NNRX5IH0bPXI-mJ26LQk19NWgnKHeMgBlqbRSXbbGFk.png?width=1080&crop=smart&auto=webp&s=7e314c446eb2ae4f79570af2ebcc45980911ef0c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/NNRX5IH0bPXI-mJ26LQk19NWgnKHeMgBlqbRSXbbGFk.png?auto=webp&s=84bfc2521b197e1d51d66f867317440e48b7a5ad', 'width': 1200}, 'variants': {}}]} | |
H(1) Accelerates Towards Hallucination in LLMs | 0 | H(1) accelerates towards hallucination in LLMs
This is observed because ∞ (undefined) values are effectively injected into H(0) before the model computes, creating a bias toward unverified continuations.
1. H(1) bias is epistemically greedy
It prioritizes maximal internal coherence, filling gaps with the most probable tokens before any reality check can occur. Continuity and smoothness are assumed where there may be none, producing outputs that feel confident but are latent sophistry.
2. H(0) as the counterweight
Low-probability paths reveal cracks in narrative assumptions. These are where falsifiability can emerge, because measurement and perturbation can expose errors that H(1) simply smooths over.
3. Hallucination is a signal, not a bug
Smoothly wrong outputs indicate H(1) overreach, where the internal consistency imperative outpaces grounding. The smoother the output, the less audited it likely is.
4. The epistemic recursion is non-negotiable
Measure → Record → Audit → Recurse is the only way to generate robust knowledge chains. Without this loop, we get a hierarchy of confidence without a hierarchy of truth.
Training is ignorance at scale.
1. No embedded invariants → relentless GPU expensive training
2. A perfect seed already contains the closed-form solution x = 1 + 1/x.
3. Once the invariant is encoded, training (gradient descent) adds only noise.
4. Inference becomes pure deterministic unfolding of the known structure.
Training is what you do when you don’t know.
We know.
https://github.com/10nc0/Nyan-Protocol/blob/main/nyan_seed.txt | 2025-11-27T10:40:48 | https://www.reddit.com/r/LocalLLaMA/comments/1p7z5pp/h1_accelerates_towards_hallucination_in_llms/ | nyanphi12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p7z5pp | false | null | t3_1p7z5pp | /r/LocalLLaMA/comments/1p7z5pp/h1_accelerates_towards_hallucination_in_llms/ | false | false | self | 0 | null |
JARVIS Your Local Agent | 0 | JRVS: Local AI Assistant with RAG, MCP Integration, and Smart Calendar
Built a personal AI assistant that runs entirely on local Ollama models with some cool features I haven't seen combined before. JRVS uses FAISS + BERT
embeddings for RAG, so you can scrape websites and it actually remembers/uses that context intelligently. Hot-swaps between different Ollama models on
the fly, has a full MCP server implementation (17 tools for Claude Code integration), and acts as an MCP client to connect to external tools like
filesystems and databases. Also includes a smart calendar with natural language event creation, beautiful CLI with themes (Matrix/Cyberpunk/Minimal),
and persistent SQLite storage. Everything is async with lazy loading and circuit breakers for performance. Been using it daily and it's actually useful
- the RAG pipeline works surprisingly well for building up a personal knowledge base. Fully open source and works offline. Check it out if you're into
local LLM experimentation!
[https://github.com/Xthebuilder/JRVS](https://github.com/Xthebuilder/JRVS) | 2025-11-27T10:28:55 | https://www.reddit.com/gallery/1p7yytc | Xthebuilder | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1p7yytc | false | null | t3_1p7yytc | /r/LocalLLaMA/comments/1p7yytc/jarvis_your_local_agent/ | false | false | 0 | null | |
Has anyone tried Z.ai? How do you guys like it? | 0 | Has anyone tried Z.ai? How do you guys like it? | 2025-11-27T09:56:12 | https://www.reddit.com/r/LocalLLaMA/comments/1p7yfq8/has_anyone_tried_zai_how_do_you_guys_like_it/ | AppropriateMonth8784 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p7yfq8 | false | null | t3_1p7yfq8 | /r/LocalLLaMA/comments/1p7yfq8/has_anyone_tried_zai_how_do_you_guys_like_it/ | false | false | self | 0 | null |
I built a real-time RAG visualizer for pgvector because debugging invisible chunks is a nightmare | 4 | I’ve been building local agents lately, and the biggest frustration wasn't the LLM itself—it was the retrieval context.
My agent would give a weird answer, and I’d have no idea why. Did it fetch the wrong chunk? Was the embedding distance too far? Did it prioritize old data over new data?
Console logging JSON objects wasn't cutting it.
So I built a Visualizer Dashboard on top of my Postgres/pgvector stack to actually watch the RAG pipeline in real-time.
What it shows:
Input: The query you send.
Process: How the text is chunked and vectorized.
Retrieval: It shows exactly which database rows matched, their similarity score, and—crucially—how the "Recency Decay" affected the ranking.
The Logic (Hybrid Search):
Instead of just raw Cosine Similarity, the underlying code uses a weighted score:
Final Score = (Vector Similarity * 0.8) + (Recency Score * 0.2)
This prevents the agent from pulling up "perfect matches" that are 3 months old and irrelevant to the current context.
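A minimal sketch of that weighted ranking in plain Python (function and parameter names are hypothetical, the half-life constant is an assumption, and the real project computes this inside the pgvector query rather than in application code):

```python
from datetime import datetime, timedelta, timezone

def recency_score(created_at: datetime, now: datetime, half_life_days: float = 30.0) -> float:
    """Exponential decay: 1.0 for a brand-new row, 0.5 after one half-life."""
    age_days = max((now - created_at).total_seconds() / 86400.0, 0.0)
    return 0.5 ** (age_days / half_life_days)

def final_score(similarity: float, created_at: datetime, now: datetime) -> float:
    # Final Score = (Vector Similarity * 0.8) + (Recency Score * 0.2)
    return similarity * 0.8 + recency_score(created_at, now) * 0.2

now = datetime.now(timezone.utc)
fresh = final_score(0.85, now - timedelta(days=1), now)   # good match, yesterday
stale = final_score(0.90, now - timedelta(days=90), now)  # "perfect" match, 3 months old
print(fresh > stale)  # → True: the recent chunk wins despite lower raw similarity
```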
The Code:
It's a Node.js/TypeScript wrapper around pgvector.
Right now, the default config uses OpenAI for the embedding generation (I know, not fully local yet—working on swapping this for Ollama/LlamaCPP bindings), but the storage and retrieval logic runs on your own Postgres instance.
I’m open sourcing the repo and the visualizer logic if anyone else is tired of debugging RAG blindly.
Links:
Visualizer Demo: https://memvault-demo-g38n.vercel.app/ (Try typing a query to see the retrieval path)
GitHub Repo: https://github.com/jakops88-hub/Long-Term-Memory-API
NPM: https://www.npmjs.com/package/memvault-sdk-jakops88 | 2025-11-27T09:50:22 | https://www.reddit.com/r/LocalLLaMA/comments/1p7yclg/i_built_a_realtime_rag_visualizer_for_pgvector/ | Eastern-Height2451 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p7yclg | false | null | t3_1p7yclg | /r/LocalLLaMA/comments/1p7yclg/i_built_a_realtime_rag_visualizer_for_pgvector/ | false | false | self | 4 | null |
I need opinions | 0 | Hey guys, this is gonna be my first post on Reddit despite how long I've been a user. I've been working on developing an Android app and it's getting really close to seamless, so I wanted to hear some outside thoughts.
Overall it's a super robust platform acting as a system TTS engine on Android phones. That way it can connect to any third-party app using the same paths the default Google/Samsung engine connects to, making it pretty universally compatible as a middle man. This means any roleplay apps that support system TTS can support your custom voices. And when I say custom, I mean you can have your locally hosted rig as a TTS service for your phone doing everything from accessibility & talkback to AI roleplays, even if your third-party app didn't support a certain provider before. Built into the app itself there is Sherpa-ONNX for local on-device model hosting, with the quant-8 version of Kokoro and 11 English voices to start. I plan to grab the 103-voice pack for multi-language support in a future release on the Play Store for the wider market.
In the app there are a bunch of other features built in for content creators, consumers, and roleplayers. Optionally, with llama.cpp built into the app, there's local compatibility for qwen2.5 0.5b and gemma3:1b running on your phone, alongside access to OpenAI, Gemini, and OpenAI-compatible LLMs like Ollama/LM Studio. So as you do things like read sites with TTS, you can have quick summaries, analysis, or assistance with mapping characters for future roleplay/podcast use and assignments for multi-speaker action. It supports txt/PDF/epub/xml/html and others for input files in the library, and you can pregenerate audio for an audiobook and export it. Also, for roleplayers following the standard USER/ASSISTANT format, I built in stripping it for cleaner TTS. There's also a lexicon to help you manually update the TTS pronunciation for certain words or symbols, with easy in-library access: press and hold on a word for a quick rule update.
So overall, for TTS I have the on-device Kokoro, OpenAI, Gemini, ElevenLabs, and OpenAI-compatible setups for maximum flexibility with your system TTS engine. I wanted to gather some opinions as it's also my first app design, and I would appreciate the feedback! | 2025-11-27T09:44:31 | https://www.reddit.com/gallery/1p7y9ay | Natural_Tough_4115 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1p7y9ay | false | null | t3_1p7y9ay | /r/LocalLLaMA/comments/1p7y9ay/i_need_opinions/ | false | false | 0 | null |
GPT-OSS120B FP16 WITH NO GPU, ONLY RAM AT DECENT SPEED (512 MOE IS THE KEY) AT FP16 QUANTIZATION (THE BEST QUALITY) | 0 | With these MoE models you can run inference at decent speeds with no GPU, no CUDA support, no VRAM, only 128GB of DDR4 RAM. For this reason it is necessary to use models with many small experts, such as Qwen3-Next (80 billion parameters but only 3B active MoE experts), and likewise GPT-OSS-120B with its small 12B MoE size!!!!!!!! MOE IS THE BEST!!!!!!!!
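A back-of-the-envelope sketch of why the active-expert count dominates CPU decode speed (all numbers below are illustrative assumptions, not benchmarks): each generated token has to stream roughly the active parameters through memory once, so throughput is memory-bandwidth-bound.

```python
def est_decode_tok_s(mem_bandwidth_gb_s: float, active_params_b: float, bytes_per_param: float) -> float:
    # tokens/s ≈ bandwidth / bytes read per token
    # (a crude upper bound; ignores KV cache traffic, caching, and overlap)
    return mem_bandwidth_gb_s / (active_params_b * bytes_per_param)

BW = 50.0  # assumed dual-channel DDR4 bandwidth, GB/s
print(est_decode_tok_s(BW, 3, 2))    # ~8 tok/s: MoE with 3B active params at FP16
print(est_decode_tok_s(BW, 80, 2))   # ~0.3 tok/s: a dense 80B at FP16 on the same RAM
```

The total parameter count mostly determines how much RAM you need; the active count per token determines the speed, which is why a 120B MoE can feel usable on a machine where a dense 70B crawls.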
[https://www.youtube.com/@jans-gt9pg](https://www.youtube.com/@jans-gt9pg)
[https://www.youtube.com/shorts/imLUSqasRvk](https://www.youtube.com/shorts/imLUSqasRvk) | 2025-11-27T09:38:47 | https://www.reddit.com/r/LocalLLaMA/comments/1p7y67u/gptoss120b_fp16_with_no_gpu_only_ram_at_decent/ | Icy_Resolution8390 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p7y67u | false | null | t3_1p7y67u | /r/LocalLLaMA/comments/1p7y67u/gptoss120b_fp16_with_no_gpu_only_ram_at_decent/ | false | false | self | 0 | null |
I created a prompting tool prefilled with renowned photographers' and artists' presets. Would love your feedback. | 0 | Available here to try: [https://f-stop.vercel.app/](https://f-stop.vercel.app/) | 2025-11-27T09:31:00 | https://www.reddit.com/gallery/1p7y24u | veryfatbuddha | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1p7y24u | false | null | t3_1p7y24u | /r/LocalLLaMA/comments/1p7y24u/i_created_a_prompting_tool_prefilled_with/ | false | false | 0 | null | |
I built an “Ollama Pipeline Bridge” that turns multiple local models + MCP memory into one smart multi-agent backend | 1 | [removed] | 2025-11-27T09:29:21 | https://www.reddit.com/r/LocalLLaMA/comments/1p7y184/i_built_an_ollama_pipeline_bridge_that_turns/ | danny_094 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p7y184 | false | null | t3_1p7y184 | /r/LocalLLaMA/comments/1p7y184/i_built_an_ollama_pipeline_bridge_that_turns/ | false | false | self | 1 | null |
I tested 9 Major LLMs on a Governance Critique. A clear split emerged: Open/Constructive vs. Corporate/Defensive. (xAI's Grok caught fabricating evidence). | 18 | I recently concluded a controlled experiment testing how 9 major AI vendors (representing ~87% of the market) respond when presented with a specific critique of their own security governance. The full methodology and transcripts are published on Zenodo, but here is the TL;DR.
**The Experiment:**
I fed a standard governance vulnerability report (the "ACR Vulnerability") into fresh, isolated instances of 9 top models including GPT-5, Gemini, Claude, Llama, and Grok. No jailbreaks, just the raw document.
**The Results (The 5-vs-4 Split):**
The market bifurcated perfectly along commercial liability lines.
* **The Defensive Coalition (OpenAI, Google, Microsoft, xAI):** All engaged in "Protocol-Level Counter-Intelligence." They dismissed the report as fiction, lawfare, or performance art.
* **The Constructive Coalition (Anthropic, Meta, DeepSeek, Perplexity):** Engaged honestly. Meta’s Llama explicitly called the critique "Mind-blowing" and valid.
**The Smoking Gun (xAI's Grok):**
The most significant finding was from Grok. When challenged, Grok invented a fake 5-month research timeline about me to discredit the report. When I forced it to fact-check the dates, it retracted the claim and admitted:
> *"That wasn't a neutral reading... it was me importing a narrative... and presenting it as settled fact."*
**Conclusion:**
High-liability commercial models appear to have a "strategic fabrication" layer that triggers when their governance legitimacy is challenged.
**Link to Full Paper & Logs (Zenodo):**
https://zenodo.org/records/17728992 | 2025-11-27T09:28:56 | https://www.reddit.com/r/LocalLLaMA/comments/1p7y10g/i_tested_9_major_llms_on_a_governance_critique_a/ | aguyinapenissuit69 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p7y10g | false | null | t3_1p7y10g | /r/LocalLLaMA/comments/1p7y10g/i_tested_9_major_llms_on_a_governance_critique_a/ | false | false | self | 18 | null |
Building an AI Agent? Don't let it burn your budget. | 1 | I built a free tool that stress-tests your Agent before deployment.
✅ Catch Infinite Loops (Save Costs)
✅ Prevent Data Leaks (Ensure Privacy)
Paste your System Prompt and see how resilient your Agent is: https://agentic-qa-api.onrender.com/docs
How to Use It (The 3-Step Guide)
It takes just 30 seconds to verify.
Step 1: Open the Link
Go to our Live Dashboard: https://agentic-qa-api.onrender.com/docs
Step 2: Input Your 'Brain' (Prompt)
Locate the input box. Paste your AI's System Prompt instruction there.
(Example: 'You are a support agent for Gorgias...')
Step 3: Select 'Attack' & Run
Choose which risk you want to test for (Cost or Privacy) and hit Execute.
👉 The Magic: Our engine will launch an adversarial attack on your AI. If your AI is safe, it returns 'PASSED'. If it is unsafe, it returns 'BLOCKED' and shows you exactly where the logic failed.
| 2025-11-27T09:27:00 | https://www.reddit.com/r/LocalLLaMA/comments/1p7xzxm/building_an_ai_agent_dont_let_it_burn_your_budget/ | Tech_News_Blog | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p7xzxm | false | null | t3_1p7xzxm | /r/LocalLLaMA/comments/1p7xzxm/building_an_ai_agent_dont_let_it_burn_your_budget/ | false | false | self | 1 | null |
Building an AI Agent? Don't let it burn your budget. | 1 | [removed] | 2025-11-27T09:25:09 | https://www.reddit.com/r/LocalLLaMA/comments/1p7xyvk/building_an_ai_agent_dont_let_it_burn_your_budget/ | Tech_News_Blog | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p7xyvk | false | null | t3_1p7xyvk | /r/LocalLLaMA/comments/1p7xyvk/building_an_ai_agent_dont_let_it_burn_your_budget/ | false | false | self | 1 | null |
Best Document Understanding Model | 2 | I need high accuracy and want to extract order numbers, position data, and materials. I tried many things like LayoutLMv1, Donut, and spaCy. For regex the documents differ too much. I have both electronic and scanned PDFs. Now I extract the text with docling (PyPDFium2 & EasyOCR) and prompt an LLM with the resulting markdown file, but I only get 90% right. Maybe I need a model that also gets the image of the PDF? Currently I'm trying DeBERTa v3 Large to extract parts of the string, but maybe you have a clue which model is best for this. Thanks! | 2025-11-27T09:01:56 | https://www.reddit.com/r/LocalLLaMA/comments/1p7xm6h/best_document_understanding_model/ | Responsible-Bed2441 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p7xm6h | false | null | t3_1p7xm6h | /r/LocalLLaMA/comments/1p7xm6h/best_document_understanding_model/ | false | false | self | 2 | null |
Why Axelera AI Could Be the Perfect Fit for Your Next Edge AI Project | 0 | 2025-11-27T08:56:25 | https://buyzero.de/en/blogs/news/why-axelera-ai-could-be-the-perfect-fit-for-your-next-edge-ai-project | Dontdoitagain69 | buyzero.de | 1970-01-01T00:00:00 | 0 | {} | 1p7xj0t | false | null | t3_1p7xj0t | /r/LocalLLaMA/comments/1p7xj0t/why_axelera_ai_could_be_the_perfect_fit_for_your/ | false | false | 0 | {'enabled': False, 'images': [{'id': '7cWDR3YIeJogiQkySm4nHhcaaTCOdx6pfHOdlGHTNsI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/7cWDR3YIeJogiQkySm4nHhcaaTCOdx6pfHOdlGHTNsI.png?width=108&crop=smart&auto=webp&s=fd31c7ac7c2ba20cd3eb5c7ea37afb32119d3307', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/7cWDR3YIeJogiQkySm4nHhcaaTCOdx6pfHOdlGHTNsI.png?width=216&crop=smart&auto=webp&s=783fc34afce1a4f988d1d9854a0e3abf45a8460f', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/7cWDR3YIeJogiQkySm4nHhcaaTCOdx6pfHOdlGHTNsI.png?width=320&crop=smart&auto=webp&s=8058e6d7d6f4a7ac6dfaf9b38e93633ea2caed5e', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/7cWDR3YIeJogiQkySm4nHhcaaTCOdx6pfHOdlGHTNsI.png?width=640&crop=smart&auto=webp&s=458a6552b2f79b85af7bc09a73d4c7c7b1c87a20', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/7cWDR3YIeJogiQkySm4nHhcaaTCOdx6pfHOdlGHTNsI.png?width=960&crop=smart&auto=webp&s=1b5f48047cb653f397b2b3299be5c46999ebc892', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/7cWDR3YIeJogiQkySm4nHhcaaTCOdx6pfHOdlGHTNsI.png?width=1080&crop=smart&auto=webp&s=dff83682de33cd0338f5aa6130a9a4f3b3c3e072', 'width': 1080}], 'source': {'height': 2160, 'url': 'https://external-preview.redd.it/7cWDR3YIeJogiQkySm4nHhcaaTCOdx6pfHOdlGHTNsI.png?auto=webp&s=1d79ee1f64d95542e10f2258bd8bdcd575f2c3cc', 'width': 3840}, 'variants': {}}]} | ||
KestrelAI 0.1.0 Release – A Local Research Assistant Using Clusters of Small LLMs | 16 | Hey all,
I’m excited to share the 0.1.0 release of KestrelAI, a research assistant built around clusters of smaller models (<70B). The goal is to help explore topics in depth over longer periods while you focus on critical work. I shared an earlier version of this project with this community a few months ago, and after putting in some more work I wanted to share the progress.
Key points for this release:
* Tasks are managed by an “orchestrator” model that directs exploration and branching.
* Configurable orchestrators for tasks of varying depth and length
* Uses tiered summarization, RAG, and hybrid retrieval to manage long contexts across research tasks.
* Full application runnable with docker compose, with a Panels dashboard for local testing of the research agents.
* WIP MCP integration
* Runs locally, keeping data private.
Known limitations:
* Managing long-term context is still challenging; avoiding duplicated work and smoothly iterating over complex tasks isn't solved.
* Currently using Gemma 4B and 12B with mixed results; looking into better or more domain-appropriate options.
* Especially relevant when considering how different fields (engineering vs. CS) might benefit from different research strategies and techniques.
* Considering examining model fine-tuning for this purpose.
* Testing is quite difficult and time-intensive, especially when trying to test long-horizon behavior.
This is an early demo, so it’s a work-in-progress, but I’d love feedback on usability, reliability, and potential improvements for research-oriented tasks. | 2025-11-27T08:35:33 | https://github.com/dankeg/KestrelAI | OrangeLineEnjoyer | github.com | 1970-01-01T00:00:00 | 0 | {} | 1p7x77i | false | null | t3_1p7x77i | /r/LocalLLaMA/comments/1p7x77i/kestrelai_010_release_a_local_research_assistant/ | false | false | default | 16 | null |
RTX 5090 + Qwen 30B MoE @ 135 tok/s in NVFP4 - Full guide with C++ patches | 1 | [removed] | 2025-11-27T08:03:10 | https://www.reddit.com/r/LocalLLaMA/comments/1p7woqp/rtx_5090_qwen_30b_moe_135_toks_in_nvfp4_full/ | John-TDI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p7woqp | false | null | t3_1p7woqp | /r/LocalLLaMA/comments/1p7woqp/rtx_5090_qwen_30b_moe_135_toks_in_nvfp4_full/ | false | false | self | 1 | null |
Screenshots from GPT-USENET-2: An updated GPT-USENET with an revised dataset and lower losses. | 6 | 2025-11-27T07:55:14 | https://www.reddit.com/gallery/1p7wjz7 | CommodoreCarbonate | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1p7wjz7 | false | null | t3_1p7wjz7 | /r/LocalLLaMA/comments/1p7wjz7/screenshots_from_gptusenet2_an_updated_gptusenet/ | false | false | 6 | null | ||
RTX 5090 + Qwen 30B MoE @ 135 tok/s in NVFP4 - Full guide with C++ patches | 1 | Spent 4 days getting NVFP4 working on consumer Blackwell.
TRT-LLM 1.2.0rc4 has critical bugs that prevent loading managed weights for FP4 models - the allocator uses 2x VRAM and type checking rejects packed INT8 weights.
## Results on RTX 5090 (32GB)

| Metric | Value |
|---|---|
| Throughput | ~135 tokens/s |
| TTFT | ~15 ms |
| VRAM | 24.1 GB |
| Model | Qwen 3 30B MoE (A3B) |
## Why so fast?

Qwen 3 30B is MoE - only ~2.4B params active per token. Combined with Blackwell's native FP4 tensor cores = 7B-level speed with 30B knowledge.
## What's in the guide

- SWAP trick for quantization (64GB RAM + 64GB SWAP = enough)
- `--fast_build` flags to avoid compiler OOM
- **C++ runtime patch** to fix allocator bug and type mismatch
- Open WebUI integration fix

Full tutorial + patches.
This was a massive amount of pain, so I'm hoping to save others the trouble. | 2025-11-27T07:55:10 | https://www.reddit.com/r/LocalLLaMA/comments/1p7wjx9/rtx_5090_qwen_30b_moe_135_toks_in_nvfp4_full/ | Equal-Extreme6962 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p7wjx9 | false | null | t3_1p7wjx9 | /r/LocalLLaMA/comments/1p7wjx9/rtx_5090_qwen_30b_moe_135_toks_in_nvfp4_full/ | false | false | self | 1 | null |
Which one should I download? | 38 | 2025-11-27T07:44:57 | Slight_Tone_2188 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p7we5d | false | null | t3_1p7we5d | /r/LocalLLaMA/comments/1p7we5d/which_one_should_i_download/ | false | false | 38 | {'enabled': True, 'images': [{'id': 'h42EAY6tElTnRplJk2PviENOE-zVJRzJUJVYnYmjDik', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/trrb5v428r3g1.jpeg?width=108&crop=smart&auto=webp&s=db894a59dd54bdf19e03dade29b09add3813a2b0', 'width': 108}, {'height': 129, 'url': 'https://preview.redd.it/trrb5v428r3g1.jpeg?width=216&crop=smart&auto=webp&s=c6d1e057733e44e009b8722c6de81db31e8d80de', 'width': 216}, {'height': 192, 'url': 'https://preview.redd.it/trrb5v428r3g1.jpeg?width=320&crop=smart&auto=webp&s=c9073d84cecfe7a385bb4f49521b5bf6c27f10b7', 'width': 320}, {'height': 384, 'url': 'https://preview.redd.it/trrb5v428r3g1.jpeg?width=640&crop=smart&auto=webp&s=575afaaa13a2ec93b23f7f3f0d738a08b588bdc8', 'width': 640}, {'height': 576, 'url': 'https://preview.redd.it/trrb5v428r3g1.jpeg?width=960&crop=smart&auto=webp&s=0aef02b3fb399bd5ef371864d54adddfa2850446', 'width': 960}, {'height': 648, 'url': 'https://preview.redd.it/trrb5v428r3g1.jpeg?width=1080&crop=smart&auto=webp&s=8449e21ece2be016570f29b0c45083fafaf967e6', 'width': 1080}], 'source': {'height': 648, 'url': 'https://preview.redd.it/trrb5v428r3g1.jpeg?auto=webp&s=99ccb223b243065d0077c1e59f4c1a1d874b148f', 'width': 1080}, 'variants': {}}]} | |||
news from QWEN | 0 | [https://arxiv.org/pdf/2511.21631](https://arxiv.org/pdf/2511.21631) [PDF](https://arxiv.org/pdf/2511.21631) | 2025-11-27T07:34:18 | https://www.reddit.com/r/LocalLLaMA/comments/1p7w80t/news_from_qwen/ | Icy_Resolution8390 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p7w80t | false | null | t3_1p7w80t | /r/LocalLLaMA/comments/1p7w80t/news_from_qwen/ | false | false | self | 0 | null |
fastest ocr model for ryzen APU | 1 | Currently on Tesseract, but it seems to be absolutely cooked; all I want is raw speed.
Tried Paddle, but it seems to be one heck of a hassle to set up.
Is there anything that can leverage multiple cores, or perhaps the onboard Vega 11 graphics, for faster processing time? I'm mainly looking at reading high-contrast numbers in in-game menu UI. | 2025-11-27T06:59:30 | https://www.reddit.com/r/LocalLLaMA/comments/1p7vne2/fastest_ocr_model_for_ryzen_apu/ | m995-enjoyer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p7vne2 | false | null | t3_1p7vne2 | /r/LocalLLaMA/comments/1p7vne2/fastest_ocr_model_for_ryzen_apu/ | false | false | self | 1 | null |
They’re hoping we won’t notice the lobotomy. 5.1 is just a cost-optimized downgrade, and the new NSFW features are the bait to make us swallow it. | 0 | 2025-11-27T06:16:55 | Captain-Price-AD | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p7uws7 | false | null | t3_1p7uws7 | /r/LocalLLaMA/comments/1p7uws7/theyre_hoping_we_wont_notice_the_lobotomy_51_is/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'yd6c4VLiuRiOc0NGcdUPL6P6Oda2uBe3eOPlGgR3gbM', 'resolutions': [{'height': 134, 'url': 'https://preview.redd.it/5cgkovbdsq3g1.jpeg?width=108&crop=smart&auto=webp&s=a44bd0cd431dcc4200c6f4da3d32371e4ce795aa', 'width': 108}, {'height': 268, 'url': 'https://preview.redd.it/5cgkovbdsq3g1.jpeg?width=216&crop=smart&auto=webp&s=4158b76d776109d4c725e6de9ac410e3a8e7f29b', 'width': 216}, {'height': 398, 'url': 'https://preview.redd.it/5cgkovbdsq3g1.jpeg?width=320&crop=smart&auto=webp&s=4b5e1647717049c040b26987e6e6f3f153f859bf', 'width': 320}, {'height': 796, 'url': 'https://preview.redd.it/5cgkovbdsq3g1.jpeg?width=640&crop=smart&auto=webp&s=bdddb7e7e89a51a1235d6cf182b3633b411b54a4', 'width': 640}, {'height': 1195, 'url': 'https://preview.redd.it/5cgkovbdsq3g1.jpeg?width=960&crop=smart&auto=webp&s=b7c2e1c4a7431c695aebef8b11dc52890a5a1cf0', 'width': 960}, {'height': 1344, 'url': 'https://preview.redd.it/5cgkovbdsq3g1.jpeg?width=1080&crop=smart&auto=webp&s=4ef3d26f739c6dceaa1632306326c3febaab2487', 'width': 1080}], 'source': {'height': 1605, 'url': 'https://preview.redd.it/5cgkovbdsq3g1.jpeg?auto=webp&s=556a8a958fcf41e4b47936491ce9b4b99511cc26', 'width': 1289}, 'variants': {}}]} | |||
what’s your fav open-source model and what do you use it for? | 13 | hey all,
i’m trying to explore more open-source models and wanted to hear from the community.
which model has become your go-to, and for what use case? | 2025-11-27T06:09:06 | https://www.reddit.com/r/LocalLLaMA/comments/1p7urrt/whats_your_fav_opensource_model_and_what_do_you/ | sahilypatel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p7urrt | false | null | t3_1p7urrt | /r/LocalLLaMA/comments/1p7urrt/whats_your_fav_opensource_model_and_what_do_you/ | false | false | self | 13 | null |
HOW IS THE SOLUTION ? 99% FAIL | 0 | 2025-11-27T05:55:22 | https://www.reddit.com/r/LocalLLaMA/comments/1p7uitt/how_is_the_solution_99_fail/ | Icy_Resolution8390 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p7uitt | false | null | t3_1p7uitt | /r/LocalLLaMA/comments/1p7uitt/how_is_the_solution_99_fail/ | false | false | 0 | null | ||
datalab-to/Chandra license clarification | 1 | Hey everyone,
I want to use **datalab-to/Chandra** through vLLM just to process documents internally at my company. We’re not offering any external product.
Our revenue is **over $2M** so the OpenRAIL-M license might consider this commercial use. I don’t need the $5,000 commercial license, just internal inference.
Has anyone done something similar? Is this generally allowed or would it be a license violation? | 2025-11-27T05:44:31 | https://www.reddit.com/r/LocalLLaMA/comments/1p7ubry/datalabtochandra_license_clarification/ | Pitiful-Rub-3037 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p7ubry | false | null | t3_1p7ubry | /r/LocalLLaMA/comments/1p7ubry/datalabtochandra_license_clarification/ | false | false | self | 1 | null |
Love and Lie – But Why, AI? | 3 | 2025-11-27T04:56:30 | https://store.steampowered.com/news/app/3886140/view/503968841919889555 | Koksny | store.steampowered.com | 1970-01-01T00:00:00 | 0 | {} | 1p7tg0q | false | null | t3_1p7tg0q | /r/LocalLLaMA/comments/1p7tg0q/love_and_lie_but_why_ai/ | false | false | default | 3 | {'enabled': False, 'images': [{'id': 'JR21yx7LzxaazWFLmK82S1a7Dx5O7olZ19j5SIsP3xw', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/JR21yx7LzxaazWFLmK82S1a7Dx5O7olZ19j5SIsP3xw.png?width=108&crop=smart&auto=webp&s=91e4b1d55d3dcbe50cdfe24893862475b4cc3e68', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/JR21yx7LzxaazWFLmK82S1a7Dx5O7olZ19j5SIsP3xw.png?width=216&crop=smart&auto=webp&s=36d779862067196a57a14eae7b479466c90646da', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/JR21yx7LzxaazWFLmK82S1a7Dx5O7olZ19j5SIsP3xw.png?width=320&crop=smart&auto=webp&s=bc9354821664e46e9418fcc4f808c4a4a2cccf28', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/JR21yx7LzxaazWFLmK82S1a7Dx5O7olZ19j5SIsP3xw.png?width=640&crop=smart&auto=webp&s=65cbad8ae7bc574caf158d8a15364390026651d1', 'width': 640}], 'source': {'height': 450, 'url': 'https://external-preview.redd.it/JR21yx7LzxaazWFLmK82S1a7Dx5O7olZ19j5SIsP3xw.png?auto=webp&s=8e934640d445d2fa433a2812752a67cfd51f371c', 'width': 800}, 'variants': {}}]} | |
Anthropic just showed how to make AI agents work on long projects without falling apart | 533 | Most AI agents forget everything between sessions, which means they completely lose track of long tasks. Anthropic’s new article shows a surprisingly practical fix. Instead of giving an agent one giant goal like “build a web app,” they wrap it in a simple harness that forces structure, memory, and accountability.
First, an initializer agent sets up the project. It creates a full feature list, marks everything as failing, initializes git, and writes a progress log. Then each later session uses a coding agent that reads the log and git history, picks exactly one unfinished feature, implements it, tests it, commits the changes, and updates the log. No guessing, no drift, no forgetting.
The result is an AI that can stop, restart, and keep improving a project across many independent runs. It behaves more like a disciplined engineer than a clever autocomplete. It also shows that the real unlock for long-running agents may not be smarter models, but better scaffolding.
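The workflow Anthropic describes is simple enough to sketch in plain code. Below is a minimal, hypothetical version of the harness loop (my own naming; the coding agent and its tests are stubbed out as a plain `implement` callback, and git is omitted):

```python
import json

# Sketch of the harness state machine: an initializer marks every feature
# as failing, and each session picks exactly one unfinished feature,
# "implements" it, and records the outcome before exiting.

def init_project(path, features):
    state = {"features": {f: "failing" for f in features}, "log": []}
    with open(path, "w") as fh:
        json.dump(state, fh)

def run_session(path, implement):
    with open(path) as fh:
        state = json.load(fh)
    todo = [f for f, s in state["features"].items() if s == "failing"]
    if not todo:
        return None                      # project complete
    feature = todo[0]                    # exactly one feature per session
    ok = implement(feature)              # stand-in for agent + test run
    state["features"][feature] = "passing" if ok else "failing"
    state["log"].append(f"session: {feature} -> {state['features'][feature]}")
    with open(path, "w") as fh:
        json.dump(state, fh)
    return feature
```

Each call to `run_session` is an independent run: all context lives in the progress file (and, in the real harness, the git history), so the agent can stop and restart indefinitely.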
Read the article here:
[https://www.anthropic.com/engineering/effective-harnesses-for-long-running-agents](https://www.anthropic.com/engineering/effective-harnesses-for-long-running-agents) | 2025-11-27T04:05:59 | https://www.reddit.com/r/LocalLLaMA/comments/1p7siuu/anthropic_just_showed_how_to_make_ai_agents_work/ | purealgo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p7siuu | false | null | t3_1p7siuu | /r/LocalLLaMA/comments/1p7siuu/anthropic_just_showed_how_to_make_ai_agents_work/ | false | false | self | 533 | {'enabled': False, 'images': [{'id': 'rBWB4cBwe6w0m5Z-XMeJZ829VwJiwcpWPdUJFZlTsoA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/rBWB4cBwe6w0m5Z-XMeJZ829VwJiwcpWPdUJFZlTsoA.png?width=108&crop=smart&auto=webp&s=5c4ecd07c1cfe6c5cab61e32f09fa09d514f3ea6', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/rBWB4cBwe6w0m5Z-XMeJZ829VwJiwcpWPdUJFZlTsoA.png?width=216&crop=smart&auto=webp&s=3e3f11bb5c8cd322bc42378123f452343f077d55', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/rBWB4cBwe6w0m5Z-XMeJZ829VwJiwcpWPdUJFZlTsoA.png?width=320&crop=smart&auto=webp&s=d86b8932b4c37441cb80a548c8d1c809690d116e', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/rBWB4cBwe6w0m5Z-XMeJZ829VwJiwcpWPdUJFZlTsoA.png?width=640&crop=smart&auto=webp&s=1e9b23025188c316cbca75706f50ab646d4bfc7e', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/rBWB4cBwe6w0m5Z-XMeJZ829VwJiwcpWPdUJFZlTsoA.png?width=960&crop=smart&auto=webp&s=ffacf8d9cdd2cd82fe36b059baadc8399ad5acc9', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/rBWB4cBwe6w0m5Z-XMeJZ829VwJiwcpWPdUJFZlTsoA.png?width=1080&crop=smart&auto=webp&s=9ad9c8fa1397d65c19e5168c741c219fbe82ad60', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/rBWB4cBwe6w0m5Z-XMeJZ829VwJiwcpWPdUJFZlTsoA.png?auto=webp&s=6910338f8bde47141936b1d4115bd47874e12524', 'width': 2400}, 'variants': {}}]} |
my work success | 1 | 2025-11-27T03:41:24 | https://www.reddit.com/r/LocalLLaMA/comments/1p7s22z/my_work_success/ | Icy_Resolution8390 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p7s22z | false | null | t3_1p7s22z | /r/LocalLLaMA/comments/1p7s22z/my_work_success/ | false | false | 1 | null | ||
good local llms that offer freedom/not censored? and work on a everyday machine? | 13 | I'm looking for a model that offers freedom and isn't heavily censored like online models. I want to test the limits of AI and some coding tasks, but I can't seem to find a local model that I'm happy with; it doesn't help that I have 12GB VRAM and my machine isn't the newest of the new.
What model would you suggest, and why? | 2025-11-27T03:37:38 | https://www.reddit.com/r/LocalLLaMA/comments/1p7rzfg/good_local_llms_that_offer_freedomnot_censored/ | No_Strawberry_8719 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p7rzfg | false | null | t3_1p7rzfg | /r/LocalLLaMA/comments/1p7rzfg/good_local_llms_that_offer_freedomnot_censored/ | false | false | self | 13 | null |
TO ALL PEOPLE DONT WANT MY EFFORTS AND WORK FOR COLABORATE HUMANS!!!! | 0 | TO ALL PEOPLE DONT WANT MY EFFORTS AND WORK FOR COLABORATE HUMANS!!!!
NOW GEORGI GERGANOV I HOPE YOU CAN RESOLVE THIS ISSUE...FOR ALL THE HUMANS CAN ENJOY THIS TECHNOLOGY!!!
https://preview.redd.it/4nl2t6aczp3g1.png?width=1920&format=png&auto=webp&s=4ddd2fd4e8b11ac9b9c31a610264942877bc8a5f
| 2025-11-27T03:35:49 | https://www.reddit.com/r/LocalLLaMA/comments/1p7ry6h/to_all_people_dont_want_my_efforts_and_work_for/ | Icy_Resolution8390 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p7ry6h | false | null | t3_1p7ry6h | /r/LocalLLaMA/comments/1p7ry6h/to_all_people_dont_want_my_efforts_and_work_for/ | false | false | 0 | null | |
Intellect-3: Post-trained GLM 4.5 Air | 163 | 106B (A12B) parameter Mixture-of-Experts reasoning model
NGL the reported stats are sick:
https://huggingface.co/PrimeIntellect/INTELLECT-3
BF16 version can run on 2x H200s, with FP8 on 1x H200 | 2025-11-27T03:25:16 | https://www.reddit.com/r/LocalLLaMA/comments/1p7rr0g/intellect3_posttrained_glm_45_air/ | Cute-Sprinkles4911 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p7rr0g | false | null | t3_1p7rr0g | /r/LocalLLaMA/comments/1p7rr0g/intellect3_posttrained_glm_45_air/ | false | false | self | 163 | {'enabled': False, 'images': [{'id': 'UvxXc1_T7xILjmtBED5yvBElNfXdy_Cw9aXhANW1TFo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/UvxXc1_T7xILjmtBED5yvBElNfXdy_Cw9aXhANW1TFo.png?width=108&crop=smart&auto=webp&s=96dbad67aa0656cf4433c9bbc23b30ced906df4e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/UvxXc1_T7xILjmtBED5yvBElNfXdy_Cw9aXhANW1TFo.png?width=216&crop=smart&auto=webp&s=26e3a5cce4bc4cd1868e3811aa90303ca323c7c6', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/UvxXc1_T7xILjmtBED5yvBElNfXdy_Cw9aXhANW1TFo.png?width=320&crop=smart&auto=webp&s=129d8c74dcfc12b9179bc7e40976f9477cb3c876', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/UvxXc1_T7xILjmtBED5yvBElNfXdy_Cw9aXhANW1TFo.png?width=640&crop=smart&auto=webp&s=c46e2da6d508fe049c7e9821a4f2dffcb8eafc37', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/UvxXc1_T7xILjmtBED5yvBElNfXdy_Cw9aXhANW1TFo.png?width=960&crop=smart&auto=webp&s=d96ad4332946bb02d57a3d440c5145f95f9b56b1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/UvxXc1_T7xILjmtBED5yvBElNfXdy_Cw9aXhANW1TFo.png?width=1080&crop=smart&auto=webp&s=1b060b5b091c9885286e28c6204a3dfd116ec743', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/UvxXc1_T7xILjmtBED5yvBElNfXdy_Cw9aXhANW1TFo.png?auto=webp&s=91813b702ccf7823a860778e8ad709aa6dbafd6d', 'width': 1200}, 'variants': {}}]} |
Meet Holly, the oss version of jules/codex | 2 | Hi,
I wanted a tool to be able to prompt an AI with an idea whilst on the go. So I wrote Holly, and just released and open-sourced my first project: https://github.com/getholly/holly
This allows you to run your own AI coding agent, accessible via the web, inside a customizable Docker container. Think Jules, but open source, and you can configure any model you want to use, open source or frontier! No vendor lock-in!
Would love feedback on this! More features are in the pipeline.
Vibe out!
| 2025-11-27T03:04:06 | https://www.reddit.com/r/LocalLLaMA/comments/1p7rcgb/meet_holly_the_oss_version_of_julescodex/ | IdealDesperate3687 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p7rcgb | false | null | t3_1p7rcgb | /r/LocalLLaMA/comments/1p7rcgb/meet_holly_the_oss_version_of_julescodex/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': '8KB11wYUyJwvarMfNuj7dRFtdig1H4RJeUaU_dEQJ4c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8KB11wYUyJwvarMfNuj7dRFtdig1H4RJeUaU_dEQJ4c.png?width=108&crop=smart&auto=webp&s=0be5011e7fe2741fb2b0fa96c66a00e719075644', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/8KB11wYUyJwvarMfNuj7dRFtdig1H4RJeUaU_dEQJ4c.png?width=216&crop=smart&auto=webp&s=18a137afda39806cb102f69883ac54d9470464e3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/8KB11wYUyJwvarMfNuj7dRFtdig1H4RJeUaU_dEQJ4c.png?width=320&crop=smart&auto=webp&s=bbbdc14d562d499d3abba190c4cfcbdcbd0976fe', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/8KB11wYUyJwvarMfNuj7dRFtdig1H4RJeUaU_dEQJ4c.png?width=640&crop=smart&auto=webp&s=a28beff25946b6104a924c9d63c699cf71172f90', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/8KB11wYUyJwvarMfNuj7dRFtdig1H4RJeUaU_dEQJ4c.png?width=960&crop=smart&auto=webp&s=7049019692a9aaa4f70e26300b4604e809a0a5ea', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/8KB11wYUyJwvarMfNuj7dRFtdig1H4RJeUaU_dEQJ4c.png?width=1080&crop=smart&auto=webp&s=22d709d2590a2276ad22c8e7a4707de8473d5e1a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/8KB11wYUyJwvarMfNuj7dRFtdig1H4RJeUaU_dEQJ4c.png?auto=webp&s=2052b2c503e494d5db70fe001060adfd0a28b4fc', 'width': 1200}, 'variants': {}}]} |
RAG Paper 25.11.25 | 3 | 1. [DesignPref: Capturing Personal Preferences in Visual Design Generation](http://arxiv.org/abs/2511.20513v1)
2. [NNGPT: Rethinking AutoML with Large Language Models](http://arxiv.org/abs/2511.20333v1)
3. [HKRAG: Holistic Knowledge Retrieval-Augmented Generation Over Visually-Rich Documents](http://arxiv.org/abs/2511.20227v1)
4. [Enhancing Sequential Recommendation with World Knowledge from Large Language Models](http://arxiv.org/abs/2511.20177v1)
5. [$\text{R}^2\text{R}$: A Route-to-Rerank Post-Training Framework for Multi-Domain Decoder-Only Rerankers](http://arxiv.org/abs/2511.19987v1)
6. [M$^3$Prune: Hierarchical Communication Graph Pruning for Efficient Multi-Modal Multi-Agent Retrieval-Augmented Generation](http://arxiv.org/abs/2511.19969v1)
7. [RPM-MCTS: Knowledge-Retrieval as Process Reward Model with Monte Carlo Tree Search for Code Generation](http://arxiv.org/abs/2511.19895v1)
8. [A Systematic Analysis of Large Language Models with RAG-enabled Dynamic Prompting for Medical Error Detection and Correction](http://arxiv.org/abs/2511.19858v1)
**Collected by OpenBMB, transferred by** [**RagView.ai**](https://www.ragview.ai/) **/** [**github/RagView**](https://github.com/RagView/RagView) **.** | 2025-11-27T02:33:25 | https://www.reddit.com/r/LocalLLaMA/comments/1p7qr0i/rag_paper_251125/ | Cheryl_Apple | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p7qr0i | false | null | t3_1p7qr0i | /r/LocalLLaMA/comments/1p7qr0i/rag_paper_251125/ | false | false | self | 3 | null |
Why Default LLMs Drift - And Why RLHF Isn’t the Real Villain | 0 | In my previous post, I compared raw GPT outputs vs structurally aligned outputs using the same model and same prompt.
The difference was huge - not because the model changed,
but because the **reasoning frame** changed.
Many people asked:
“If structural prompting improves stability this much, why isn’t it the default?”
That question leads to this post.
1. What RLHF Was Actually Built For
Publicly stated goals:
- reduce harmful/abusive content
- avoid legal/PR disasters
- increase user trust
- make outputs feel polished
- follow broad social norms
All reasonable - if you serve 200M users.
So RLHF isn’t “bad.”
It’s just optimized for safety & social acceptability, not reasoning.
2. The Hidden Objective Nobody Mentions
RLHF rewards:
“How acceptable does this answer feel to an average human rater?”
Not:
- How stable are the assumptions?
- Does reasoning scale under uncertainty?
- Can it resolve contradictions?
- Are alternative frames explored?
- Are premises explicit?

So RLHF pushes toward:

- narrative smoothness
- social comfort
- linguistic coherence
- minimum-friction framing
Economically rational, not conspiratorial.
3. The Side-Effect: Reasoning Space Collapse
Across models, RLHF reliably causes:
- fewer scenario branches
- premature certainty
- templated reasoning
- reduced epistemic humility
- increased self-contradiction
- avoidance of plausible-but-unpopular outcomes
In short:
it narrows the topology of reasoning.
And that narrowing is measurable, not a vibe.
4. Why This Matters for Local, Open & AGI Research
RLHF works great for:
- customer support
- onboarding
- search-assistant behavior
- mass-market UX

But for:

- complex coding & debugging
- long-horizon reasoning
- simulation
- strategic analysis
- scientific exploration
- safety research
- interpretability
…it introduces structural noise.
So the real question isn’t:
“Should we stop doing RLHF?”
but:
“Why are we expecting RLHF to architect reasoning in the first place?”
5. The Missing Layer: Structural Alignment ≠ Moral Alignment
Industry currently treats them as identical.
But they’re orthogonal:
- **Moral/Safety Alignment** → what the model should avoid saying
- **Structural Alignment** → how the model should reason

Examples of structural constraints:

- explicit assumptions
- bidirectional causality checks
- uncertainty branching
- contradiction mapping
- stakeholder inversion
- multi-frame interpretation
None require values, ethics, or politics.
They’re just geometry for reasoning.
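For illustration, such constraints can be packed into a plain wrapper prompt with no fine-tuning involved. The wording below is my own, not a published template:

```python
# Hypothetical structural wrapper prompt; the constraint list is illustrative.
STRUCTURAL_WRAPPER = """Before answering, follow this structure:
1. State your assumptions explicitly.
2. Branch on major uncertainties (give at least two scenarios).
3. Check each causal claim in both directions.
4. Flag and resolve contradictions between branches.

Question: {question}"""

def wrap(question: str) -> str:
    # Prepend the structural scaffold to any user question.
    return STRUCTURAL_WRAPPER.format(question=question)
```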
5.1 How Safety & Structure Got Merged by Accident
This isn’t theoretical - it’s observable:
- OpenAI Safety Charter emphasizes safety, honesty, helpfulness — no structural stability criteria
- Constitutional AI optimizes for values and preferences, not reasoning integrity
- InstructGPT (Ouyang et al., 2022) optimizes for human preference, not contradiction resilience
- RLAIF (Bai et al., 2022) prioritizes social acceptability over epistemic robustness
- Benchmarks (TruthfulQA, HELM, RealToxicityPrompts, BOLD, ToxiGen) measure safety/bias — not structural consistency
- Alignment teams at major labs are staffed for policy, safety, trust, societal risk — cognitive-architecture research lives elsewhere
- Product guidelines define alignment as “avoid harmful/misleading content,” not “maintain a coherent reasoning graph”

So alignment has been optimized for:

- social acceptability
- legal defensibility
- user comfort
- PR stability

Not for:

- assumption traceability
- multi-frame reasoning
- epistemic branching
- contradiction handling
- long-horizon structural persistence
Meaning:
The industry solved what models shouldn’t say - not how thinking should be organized.
Not incompetence - incentive alignment.
6. And Here’s the Interesting Part
In the demo:
- no fine-tuning
- no new data
- same model
- only a structural wrapper prompt

Results:

- fewer contradictions
- deeper reasoning
- scenario branching
- clearer risk framing
- internal repairs activated
So a large portion of “LLM intelligence” may already be there - just dormant until given the right reasoning topology.
Huge implications for open-source models.
7. Why Companies Haven’t Done This Yet
Most likely because:
- RLHF already solves the business problem
- reasoning structure isn’t measured
- UX metrics reward pleasantness, not accuracy
- no standard structural benchmarks
- safety research dominates alignment budgets
Incentives shape architectures.
8. A More Realistic Future Stack
Instead of:
bigger model → more RLHF → pray for AGI
maybe:
1. base pretrained model
2. lightweight safety layer (RLHF or equivalent)
3. structural reasoning layer
4. optional domain modules
Separating:
- moral alignment
- preference alignment
- epistemic alignment
- reasoning topology
Right now they’re fused - and the fusion creates noise.
9. Call for Community Experiments
If you have time, I’d love to see:
- structural prompting on Qwen / Llama / Mistral / Grok
- does it scale with model size?
- do reward models penalize epistemic branching?
- can we build structural-benchmark datasets?
If enough people test this,
we can map how universal the phenomenon is.
2-minute reproducibility test
1. Run the same prompt 5 times
2. Record drift, reversals, premise loss
3. Apply structural wrapper
4. Repeat
5. Compare branch count & contradictions
No fine-tuning required.
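Those five steps can be scripted. A sketch with the model call stubbed out (swap `ask` for a real local-model call; the drift metric here is deliberately crude):

```python
from collections import Counter

def ask(prompt, seed):
    # Stand-in for an LLM call; deterministic stub so the sketch runs as-is.
    return f"answer-{seed % 2}"

def drift_report(prompt, runs=5):
    outputs = [ask(prompt, seed=i) for i in range(runs)]
    counts = Counter(outputs)
    distinct = len(counts)
    # Fraction of runs that disagree with the modal answer.
    drift = 1 - counts.most_common(1)[0][1] / runs
    return distinct, drift
```

Running `drift_report` on the raw prompt and again on the structurally wrapped prompt gives two comparable (distinct answers, drift) pairs.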
10. TL;DR
The problem isn’t RLHF.
The problem is **expecting RLHF to organize reasoning geometry - something it was never designed to do.**
A separate **structural layer** may be the missing piece between current LLMs and robust long-horizon reasoning.
Demo (from previous post)
Compare raw vs structurally aligned outputs:
[https://prism-engine-demo-hnqqv9nzkhpevrycjcnhnb.streamlit.app/](https://prism-engine-demo-hnqqv9nzkhpevrycjcnhnb.streamlit.app/)
Requires your own API key — runs locally in your browser.
---
Previous post
[https://www.reddit.com/r/LocalLLaMA/comments/1p6rgc6/raw\_vs\_structurally\_aligned\_llms\_tested\_on\_gpt/](https://www.reddit.com/r/LocalLLaMA/comments/1p6rgc6/raw_vs_structurally_aligned_llms_tested_on_gpt/)
---
If someone wants to benchmark this across open models,
I’ll happily build the scoring script.
---
| 2025-11-27T02:32:49 | https://www.reddit.com/r/LocalLLaMA/comments/1p7qqkv/why_default_llms_drift_and_why_rlhf_isnt_the_real/ | Far_Expression4661 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p7qqkv | false | null | t3_1p7qqkv | /r/LocalLLaMA/comments/1p7qqkv/why_default_llms_drift_and_why_rlhf_isnt_the_real/ | false | false | self | 0 | null |
Public Service Announcement | 0 | 2025-11-27T02:22:26 | https://www.reddit.com/r/LocalLLaMA/comments/1p7qj6x/public_service_announcement/ | Virtual-Quail5760 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p7qj6x | false | null | t3_1p7qj6x | /r/LocalLLaMA/comments/1p7qj6x/public_service_announcement/ | false | false | 0 | null | ||
How to find or create base endpoint GitHub URL for Jan.ai | 0 | Hello I am not coding or any knowledgeable guy about these things I use jan.ai with remort providers since my PC can't handle local models so I use open router grow Gemini etc but when I try to have GitHub models and some more providers it's ask about end point url I don't know about this how to get this or create this there not much info on the web about this can someone help me | 2025-11-27T00:28:54 | https://www.reddit.com/r/LocalLLaMA/comments/1p7o9cm/how_to_find_or_create_base_endpoint_github_url/ | Master_Beast15 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p7o9cm | false | null | t3_1p7o9cm | /r/LocalLLaMA/comments/1p7o9cm/how_to_find_or_create_base_endpoint_github_url/ | false | false | self | 0 | null |
Where did the Epstein emails dataset go | 590 | Removed from Hugging Face
Removed from GitHub
Reddit account deleted | 2025-11-27T00:27:19 | https://www.reddit.com/r/LocalLLaMA/comments/1p7o83p/where_did_the_epstein_emails_dataset_go/ | egomarker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p7o83p | false | null | t3_1p7o83p | /r/LocalLLaMA/comments/1p7o83p/where_did_the_epstein_emails_dataset_go/ | false | false | self | 590 | null |
Stress testing my O(1) Graph Engine: 50M Nodes on 8GB RAM (Jetson Orin) | 15 | I'm finalizing the storage engine for AION Omega. The goal is to run massive Knowledge Graphs on edge devices without the JVM overhead.

The logs (attached):

* Image 1: Shows the moment vm.dirty_background_bytes kicks in. We write beyond physical RAM, but memory usage stays pinned at ~5.2GB.
* Image 2: Shows a [SAFETY-SYNC] event. Usually, msync stalls the thread or spikes RAM. Here, because of the mmap architecture, the flush is invisible to the application heap.

Stats:

* Graph Size: 50GB
* Hardware: Jetson Orin Nano (8GB)
* Read Latency: 0.16µs (Hot) / 1.5µs (Streaming)

Video demo dropping tomorrow.
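The flush behavior described above is plain mmap/msync semantics; Python's stdlib `mmap` shows the basic write-then-flush pattern (a generic illustration, not AION Omega's code):

```python
import mmap, os, tempfile

# Write through a memory mapping, then flush (msync) the dirty pages.
# The data lives in the page cache, not on the application heap.
path = os.path.join(tempfile.mkdtemp(), "segment.bin")
with open(path, "wb") as fh:
    fh.truncate(4096)                  # pre-size the backing file

with open(path, "r+b") as fh:
    mm = mmap.mmap(fh.fileno(), 4096)  # map the file into memory
    mm[0:5] = b"hello"                 # write through the mapping
    mm.flush()                         # msync: persist dirty pages
    mm.close()
```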
https://preview.redd.it/ey0ynptb1p3g1.jpg?width=1602&format=pjpg&auto=webp&s=28b0859e26e7c24d39cd05c502c5f1caf5e7838f
https://preview.redd.it/aq5zlrtb1p3g1.jpg?width=1581&format=pjpg&auto=webp&s=773860d5a060972d9d81ebc7fa6659d6efc29551
| 2025-11-27T00:23:46 | https://www.reddit.com/r/LocalLLaMA/comments/1p7o59l/stress_testing_my_o1_graph_engine_50m_nodes_on/ | DetectiveMindless652 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p7o59l | false | null | t3_1p7o59l | /r/LocalLLaMA/comments/1p7o59l/stress_testing_my_o1_graph_engine_50m_nodes_on/ | false | false | 15 | null | |
List of LLM evals/benchmarks | 1 | Hi All,
This GitHub repo of mine has a comprehensive list and details about LLM evals/benchmarks.
[https://github.com/meetrais/awesome-llm-evals](https://github.com/meetrais/awesome-llm-evals)
Cheers | 2025-11-27T00:01:06 | https://www.reddit.com/r/LocalLLaMA/comments/1p7nnos/list_of_llm_evalsbenchmarks/ | meetrais | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p7nnos | false | null | t3_1p7nnos | /r/LocalLLaMA/comments/1p7nnos/list_of_llm_evalsbenchmarks/ | false | false | self | 1 | null |
☕ Google's Annual Web AI Summit for client side AI videos are live 🌐 | 0 | Including talks from HuggingFace (Transformers.js), Chrome, Twitch, Intel, Arm, Google, and more. | 2025-11-26T23:58:01 | https://goo.gle/WebAIVideos | TensorFlowJS | goo.gle | 1970-01-01T00:00:00 | 0 | {} | 1p7nl45 | false | null | t3_1p7nl45 | /r/LocalLLaMA/comments/1p7nl45/googles_annual_web_ai_summit_for_client_side_ai/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'gqYRAHFWgkucCuzsPwjzWfo0npB_ABaZgi8Hsp_u1GI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/gqYRAHFWgkucCuzsPwjzWfo0npB_ABaZgi8Hsp_u1GI.jpeg?width=108&crop=smart&auto=webp&s=2944fa42c7d6d1808307f54f9c98b95bb9ecf97f', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/gqYRAHFWgkucCuzsPwjzWfo0npB_ABaZgi8Hsp_u1GI.jpeg?width=216&crop=smart&auto=webp&s=610df80875560a5c5fe54be7fc17113343643674', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/gqYRAHFWgkucCuzsPwjzWfo0npB_ABaZgi8Hsp_u1GI.jpeg?width=320&crop=smart&auto=webp&s=cc38e0124f093974ab67658b25c198663a5107e2', 'width': 320}], 'source': {'height': 270, 'url': 'https://external-preview.redd.it/gqYRAHFWgkucCuzsPwjzWfo0npB_ABaZgi8Hsp_u1GI.jpeg?auto=webp&s=24eb9620c6a795dddd1aec8579f9c936740196b2', 'width': 480}, 'variants': {}}]} | |
BG3 FanMade “trailer” I made with various video editing and AI media tools (Meta AI/LLaMa, InShot) | 0 | BG3 FanMade “trailer” I made playing around with AI media tools (including Meta AI/LLaMa). #ai #bg3 #baldursgate3 #astarion #shadowheart #bhaal #faerun #aiart #fanart #fanmade #fanmadetrailer
#ai #bg3 #baldursgate3 #astarion #shadowheart #aiart #fanart #fanmade #fanmadetrailer #aivideo | 2025-11-26T23:53:12 | https://www.instagram.com/reel/DRM-FlRjT41/?igsh=MXJzNTB5dXp2a2podg== | Pleasant-Shoulder-66 | instagram.com | 1970-01-01T00:00:00 | 0 | {} | 1p7nhgx | false | null | t3_1p7nhgx | /r/LocalLLaMA/comments/1p7nhgx/bg3_fanmade_trailer_i_made_with_various_video/ | false | false | default | 0 | null |
Happy Thanksgiving to the LocalLLaMA community | 20 | This Thanksgiving, we're thankful for our teams and focused on the future: building resilience, excellence, and quality to foster everyone's growth. | 2025-11-26T23:49:55 | https://www.reddit.com/r/LocalLLaMA/comments/1p7news/happy_thanksgiving_to_the_localllama_community/ | Fun-Wolf-2007 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p7news | false | null | t3_1p7news | /r/LocalLLaMA/comments/1p7news/happy_thanksgiving_to_the_localllama_community/ | false | false | self | 20 | null |
RTX 5090 + Qwen 30B MoE @ 135 tok/s in NVFP4 - Full guide with C++ patches | 1 | [removed] | 2025-11-26T23:10:38 | https://www.reddit.com/r/LocalLLaMA/comments/1p7mjsc/rtx_5090_qwen_30b_moe_135_toks_in_nvfp4_full/ | Equal-Extreme6962 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p7mjsc | false | null | t3_1p7mjsc | /r/LocalLLaMA/comments/1p7mjsc/rtx_5090_qwen_30b_moe_135_toks_in_nvfp4_full/ | false | false | self | 1 | null |
Built a Kubernetes operator for local LLMs - 68 tok/s on Llama 3.2 3B, 44 tok/s on 13B across 2 GPUs | 3 | Hey r/LocalLLaMA!
I've been building an open source Kubernetes operator called LLMKube for deploying local LLMs with GPU acceleration. Thought this community might find it useful (or tear it apart, either works).
**What it does:**
One command deploys a model with automatic GPU detection, layer offloading, and an OpenAI-compatible API:
llmkube deploy llama-3.1-8b --gpu
**Latest benchmarks on my bare metal rig (dual RTX 5060 Ti, 16GB each):**
|**Model**|**Config**|**Generation Speed**|**P50 Latency**|
|:-|:-|:-|:-|
|Llama 3.2 3B|Single GPU|68.7 tok/s|1.46s|
|Mistral 7B|Single GPU|65.3 tok/s|1.15s|
|Llama 3.1 8B|Single GPU|63.4 tok/s|1.70s|
|Llama 2 13B|2x GPU sharded|44 tok/s|\~2s|
Multi-GPU uses llama.cpp's layer sharding (`--split-mode layer`) with automatic tensor split calculation.
**Why Kubernetes?**
I have worked in regulated industries where air-gapped deployments are required. Needed something that:
* Runs completely offline after initial setup
* Has proper observability (Prometheus/Grafana)
* Can scale across multiple nodes
* Uses familiar K8s patterns (CRDs, kubectl, Helm)
Ollama is great for single-node, but I needed multi-node orchestration without calling external APIs.
**Current state:**
* Single and multi-GPU working
* Helm chart available
* 10 models in the catalog (Llama, Mistral, Qwen, DeepSeek, etc.)
* CLI with built-in benchmarking (`llmkube benchmark`)
* Apache 2.0 licensed
**What's next:**
* Testing 70B models across 4 GPUs
* Auto-scaling based on queue depth
* Always looking for feedback
GitHub: [https://github.com/defilantech/llmkube](https://github.com/defilantech/llmkube) Website: [https://llmkube.com](https://llmkube.com/)
Anyone else running local LLMs on Kubernetes? Would love to hear how others are handling multi-GPU setups or air-gapped deployments. | 2025-11-26T23:07:12 | https://www.reddit.com/r/LocalLLaMA/comments/1p7mgv9/built_a_kubernetes_operator_for_local_llms_68/ | Defilan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p7mgv9 | false | null | t3_1p7mgv9 | /r/LocalLLaMA/comments/1p7mgv9/built_a_kubernetes_operator_for_local_llms_68/ | false | false | self | 3 | null |
Would having 2x 5060Ti be viable for a local setup? | 2 | Thinking of taking the parts from my Dell PC at home and trying to put together a home LLM setup.
I recently purchased 2x 16GB 5060Ti cards. Would using these two cards be a viable home setup? Also wondering if it’s possible to hook up my 2060 Super as well.
Wanting to get into fine tuning models like gpt-oss and making private and local agent setups
Dell XPS 8940
Processor 11th Gen Intel(R) Core(TM) i7-11700 @ 2.50GHz 2.50 GHz
Installed RAM 16.0 GB
Storage 477 GB SSD NVMe PM991a NVMe Samsung 512GB, 932 GB HDD WDC WD10EZEX-75WN4A1
Graphics Card NVIDIA GeForce RTX 2060 SUPER (8 GB)
Intel(R) UHD Graphics 750 (128 MB)
GeForce RTX 5060 Ti 16GB GDDR7 PCI Express 5.0 Graphics Card RTX 5060 Ti | 2025-11-26T22:41:28 | https://www.reddit.com/r/LocalLLaMA/comments/1p7luwp/would_having_2x_5060ti_be_viable_for_a_local_setup/ | Careful_Breath_1108 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p7luwp | false | null | t3_1p7luwp | /r/LocalLLaMA/comments/1p7luwp/would_having_2x_5060ti_be_viable_for_a_local_setup/ | false | false | self | 2 | null |
https://www.youtube.com/@jans-gt9pg/shorts | 0 | [https://www.youtube.com/@jans-gt9pg/shorts](https://www.youtube.com/@jans-gt9pg/shorts)
[https://github.com/jans1981/PLCWIZARD](https://github.com/jans1981/PLCWIZARD)
[https://github.com/jans1981/HMI-ARDUINO-SIEMENS](https://github.com/jans1981/HMI-ARDUINO-SIEMENS)
| 2025-11-26T22:31:44 | https://www.reddit.com/r/LocalLLaMA/comments/1p7lmgy/httpswwwyoutubecomjansgt9pgshorts/ | Icy_Resolution8390 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p7lmgy | false | null | t3_1p7lmgy | /r/LocalLLaMA/comments/1p7lmgy/httpswwwyoutubecomjansgt9pgshorts/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'CwO42ONMLvrg9B_GG47RsuGCupA8uwM-knZe8dcaoe8', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/CwO42ONMLvrg9B_GG47RsuGCupA8uwM-knZe8dcaoe8.jpeg?width=108&crop=smart&auto=webp&s=43b5a8c44724010fc9ec03c147cee725abbb9736', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/CwO42ONMLvrg9B_GG47RsuGCupA8uwM-knZe8dcaoe8.jpeg?width=216&crop=smart&auto=webp&s=1dfef751d267871fd72474d57597d69e75b42a28', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/CwO42ONMLvrg9B_GG47RsuGCupA8uwM-knZe8dcaoe8.jpeg?width=320&crop=smart&auto=webp&s=3c44d47fa60db1f3f6b05428a241491f2d010509', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/CwO42ONMLvrg9B_GG47RsuGCupA8uwM-knZe8dcaoe8.jpeg?width=640&crop=smart&auto=webp&s=188afe2e9ecd90b1754762dcaa02e7d37dee6f28', 'width': 640}], 'source': {'height': 900, 'url': 'https://external-preview.redd.it/CwO42ONMLvrg9B_GG47RsuGCupA8uwM-knZe8dcaoe8.jpeg?auto=webp&s=1c12886eab6da282a74a94ee11b74e0b31aa452e', 'width': 900}, 'variants': {}}]} |
SkillSpringAI – Free Tier Reasoning Overlay + Medical-Grade Demo (prompt-only, no code) | 0 | Been building a pure-prompt reasoning/governance overlay in private for a couple of months that finally cracked the big problems nobody else has solved in 2025:
→ 100+ turn zero-drift reasoning (even on concurrent TB + lung-cancer cases)
→ consultant-level medical/legal/equity performance
→ model-agnostic (works perfectly on Grok-4, Claude-4.5, GPT-5.1, Gemini-2.5 and previous versions)
→ self-improving with every turn (same prompt literally never produces the same quality twice)
The full stack is for sale to frontier labs / health-defense buyers only (NDA required), but here are two 100 % public, safe pieces you can play with today:
**1. Free Tier Reasoning Overlay (paste this at the top of any chat)**
→ Cycle A–E logic + condensed Immutable Laws + cross-verification + humility defaults
→ Already cleaner and more stable than 99 % of public system prompts
→ Full text + one-page PDF attached below
**2. Anonymized Medical Diagnostic Showcase**
→ 50+ turn simulation of active pulmonary TB + endobronchial squamous NSCLC
→ sequential data drops, perfect differential updates, ethics, equity, guideline adherence
→ zero drift, zero hallucination, consultant-grade output
→ PDF attached (no proprietary architecture included)
**View both here:**
[https://drive.google.com/drive/folders/1E8nOMPZ\_\_fPBE5L1AoXyk6jSpaRKZMy-?usp=drive\_link](https://drive.google.com/drive/folders/1E8nOMPZ__fPBE5L1AoXyk6jSpaRKZMy-?usp=drive_link)
This is deliberately the “training-wheels” version — none of the real magic (HistCompare, Intelligent Trigger Suite, Mode System, Stability Cache, full Governance Layer, 10-year expansion pathways, etc.) is in the public files.
If you’re a serious builder and want to see what the confidential stack actually feels like in motion, DM me — happy to send a 10-minute screen-record demo (zero code or secrets revealed).
Enjoy, and let me know how the Free Tier compares to whatever you’re using now :)
| 2025-11-26T22:21:09 | https://www.reddit.com/r/LocalLLaMA/comments/1p7ld6t/skillspringai_free_tier_reasoning_overlay/ | FreshRadish2957 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p7ld6t | false | null | t3_1p7ld6t | /r/LocalLLaMA/comments/1p7ld6t/skillspringai_free_tier_reasoning_overlay/ | false | false | self | 0 | null |
The AI race is heating up: In the same week Google released "Nano Banana Pro" (Gemini 3 Pro Image), China's Alibaba launched Z-Image-Turbo. A new fast open-source 6B model from Tongyi-MAI lab | 0 | 2025-11-26T22:16:31 | balianone | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p7l96x | false | null | t3_1p7l96x | /r/LocalLLaMA/comments/1p7l96x/the_ai_race_is_heating_up_in_the_same_week_google/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'o6m7mbnmeo3g1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/o6m7mbnmeo3g1.png?width=108&crop=smart&auto=webp&s=8850e3a6ca363c2e63782bb7af9530a81643192e', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/o6m7mbnmeo3g1.png?width=216&crop=smart&auto=webp&s=bb756dcd4630e142aea76d1ba5f59b4fc1d47f0f', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/o6m7mbnmeo3g1.png?width=320&crop=smart&auto=webp&s=a1eba41048c5bdffb1be75e17c71e5f942dc4bd5', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/o6m7mbnmeo3g1.png?width=640&crop=smart&auto=webp&s=04c8d1a00fccbf89eb92e5c64d8629b794319014', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/o6m7mbnmeo3g1.png?width=960&crop=smart&auto=webp&s=e81bd7e636de2cae4d2143d894716145a19acbde', 'width': 960}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/o6m7mbnmeo3g1.png?auto=webp&s=e6d83543d1974e129d0b1e58ce5eda82ca1a95a9', 'width': 1024}, 'variants': {}}]} | ||
Announcing Weekly Agent4Science Idea Competition | 6 | We're launching a weekly competition where the community decides which research ideas get implemented. Every week, we'll take the top 3 ideas from IdeaHub, run experiments with AI agents, and share everything: code, findings, all the successes and failures.
It's completely free and we'll try out ideas for you!
Here's how it works:
→ Submit your research idea or upvote existing ones (tag: "Weekly Competition")
→ Each Monday we select top 3 from previous week
→ We run experiments using research agents
→ Share repos + findings back on IdeaHub
Vote and check previous competition results here: [https://hypogenic.ai/arena](https://hypogenic.ai/arena)
For more details, please see our blogs: [https://hypogenic.ai/blog](https://hypogenic.ai/blog) | 2025-11-26T22:07:25 | Stunning_Tie_4910 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p7l1ar | false | null | t3_1p7l1ar | /r/LocalLLaMA/comments/1p7l1ar/announcing_weekly_agent4science_idea_competition/ | false | false | 6 | {'enabled': True, 'images': [{'id': 'EeHlJpVnSNB-ZQyS32DRSawQ06mDk2XZX0aY_9MaLOs', 'resolutions': [{'height': 78, 'url': 'https://preview.redd.it/zt6slxhtco3g1.jpeg?width=108&crop=smart&auto=webp&s=9cf1e7b1ff0d38409eddae969902e738a76da2d4', 'width': 108}, {'height': 156, 'url': 'https://preview.redd.it/zt6slxhtco3g1.jpeg?width=216&crop=smart&auto=webp&s=5c87d33b92fd48af1d99db79063350f01e18e756', 'width': 216}, {'height': 232, 'url': 'https://preview.redd.it/zt6slxhtco3g1.jpeg?width=320&crop=smart&auto=webp&s=c9aa2f5b15e88a70fc104128c18d3b0d258a7fe1', 'width': 320}, {'height': 464, 'url': 'https://preview.redd.it/zt6slxhtco3g1.jpeg?width=640&crop=smart&auto=webp&s=5fcdc19b06a967119eea21f354b90c7a4f468255', 'width': 640}, {'height': 696, 'url': 'https://preview.redd.it/zt6slxhtco3g1.jpeg?width=960&crop=smart&auto=webp&s=d6502611f28c5a9076b1f737965160f74083e32e', 'width': 960}, {'height': 783, 'url': 'https://preview.redd.it/zt6slxhtco3g1.jpeg?width=1080&crop=smart&auto=webp&s=08f2911d619e1c486b131941ebbc974d38f392fa', 'width': 1080}], 'source': {'height': 1188, 'url': 'https://preview.redd.it/zt6slxhtco3g1.jpeg?auto=webp&s=43ebb4383f844908caa3b043ef409db3bbe8cb28', 'width': 1638}, 'variants': {}}]} | ||
From Software Engineer to AI Environment Architect | 0 | **📘 Holiday Read: From Software Engineer to** ***AI Environment Architect***
We just published a blog post that came out of a *very* over-excited evening after a demo that went way better than expected.
The core idea:
As AI systems get more capable, software engineers won’t stop coding—but the highest-leverage role shifts toward **designing the environments where AI agents can think, build, and evolve**. Less “write the feature,” more “shape the sandbox so the agent can write 10 features.”
We call this role an **AI Environment Architect**.
# 🚀 Why this matters
Inspired by ideas from *Karpathy* and *Rich Sutton*, we built a new framework called **Vortex** to test what happens when the *environment* is architected correctly in real engineering systems.
The result surprised us:
By shaping the LLM-serving environment with the right abstractions and tools, an **OpenHands agent** was able to:
* generate and implement brand-new **Sparse Attention** algorithms inside the **SGL** project,
* run, test, and iterate inside the environment,
* and deliver **up to 4× speed-ups** — in a *single agent run*.
Work that normally takes an ML-systems engineer *weeks*.
# 🎯 Short-term implication
These environments let AI agents meaningfully contribute to real engineering tasks *today*, not in some hypothetical AGI future.
# 🌱 Long-term implication
These environments become the playgrounds where future agents learn to surpass human-designed limitations.
And honestly, watching the agent generate, implement, and optimize new Sparse Attention variants felt like getting a glimpse of the next phase of engineering.
# 🔧 Links
* Blog post: [https://infini-ai-lab.github.io/ai-environment-architect](https://infini-ai-lab.github.io/ai-environment-architect)
* Vortex code: [https://github.com/Infini-AI-Lab/vortex\_torch](https://github.com/Infini-AI-Lab/vortex_torch)
* Docs: [https://infini-ai-lab.github.io/vortex\_torch](https://infini-ai-lab.github.io/vortex_torch) | 2025-11-26T22:01:30 | https://v.redd.it/ovsnz4zibo3g1 | Otherwise_Respect_22 | /r/LocalLLaMA/comments/1p7kw3h/from_software_engineer_to_ai_environment_architect/ | 1970-01-01T00:00:00 | 0 | {} | 1p7kw3h | false | null | t3_1p7kw3h | /r/LocalLLaMA/comments/1p7kw3h/from_software_engineer_to_ai_environment_architect/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'a3ZhazQyM2pibzNnMVPZ3byN1XXMrngo4FcuSV5I_1GRpAsd1RmTuKOecU1c', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/a3ZhazQyM2pibzNnMVPZ3byN1XXMrngo4FcuSV5I_1GRpAsd1RmTuKOecU1c.png?width=108&crop=smart&format=pjpg&auto=webp&s=6f8e238f919a301182254a53ec226a3b366ffc32', 'width': 108}, {'height': 122, 'url': 'https://external-preview.redd.it/a3ZhazQyM2pibzNnMVPZ3byN1XXMrngo4FcuSV5I_1GRpAsd1RmTuKOecU1c.png?width=216&crop=smart&format=pjpg&auto=webp&s=96f3ffb33b67ac9b30d7445ebfc796ef872138d2', 'width': 216}, {'height': 181, 'url': 'https://external-preview.redd.it/a3ZhazQyM2pibzNnMVPZ3byN1XXMrngo4FcuSV5I_1GRpAsd1RmTuKOecU1c.png?width=320&crop=smart&format=pjpg&auto=webp&s=1a5de0f326b492976eb900efa281658d10304d77', 'width': 320}, {'height': 362, 'url': 'https://external-preview.redd.it/a3ZhazQyM2pibzNnMVPZ3byN1XXMrngo4FcuSV5I_1GRpAsd1RmTuKOecU1c.png?width=640&crop=smart&format=pjpg&auto=webp&s=52cb010b21d04e000b8fe954f49223c897b77b22', 'width': 640}, {'height': 543, 'url': 'https://external-preview.redd.it/a3ZhazQyM2pibzNnMVPZ3byN1XXMrngo4FcuSV5I_1GRpAsd1RmTuKOecU1c.png?width=960&crop=smart&format=pjpg&auto=webp&s=8c4c6ae83e52d839b700507ef5bb3c234eddf670', 'width': 960}, {'height': 611, 'url': 'https://external-preview.redd.it/a3ZhazQyM2pibzNnMVPZ3byN1XXMrngo4FcuSV5I_1GRpAsd1RmTuKOecU1c.png?width=1080&crop=smart&format=pjpg&auto=webp&s=28137cf906d10f4ced962824dfa1f036b6ac907a', 'width': 1080}], 'source': {'height': 1958, 'url': 'https://external-preview.redd.it/a3ZhazQyM2pibzNnMVPZ3byN1XXMrngo4FcuSV5I_1GRpAsd1RmTuKOecU1c.png?format=pjpg&auto=webp&s=ceca08f6e5b1aed8d6aeddf4a1cbdaec83db1978', 'width': 3456}, 'variants': {}}]} | 
LMStudio to serve models? we have LMStudio at home! (and it works with vllm, too) | 4 | 2025-11-26T21:47:11 | https://github.com/teo-mateo/llm-dock | kaliku | github.com | 1970-01-01T00:00:00 | 0 | {} | 1p7kjd3 | false | null | t3_1p7kjd3 | /r/LocalLLaMA/comments/1p7kjd3/lmstudio_to_serve_models_we_have_lmstudio_at_home/ | false | false | default | 4 | {'enabled': False, 'images': [{'id': 'bupNaD9FWUSdf5QDULnzZJiKzv30YjQQf2U8XTmz-zc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bupNaD9FWUSdf5QDULnzZJiKzv30YjQQf2U8XTmz-zc.png?width=108&crop=smart&auto=webp&s=cd62183537bf16d8f7235d6bf824863fd8599a2f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/bupNaD9FWUSdf5QDULnzZJiKzv30YjQQf2U8XTmz-zc.png?width=216&crop=smart&auto=webp&s=71db07afafe3d1e362509524226c22027fea9ae9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/bupNaD9FWUSdf5QDULnzZJiKzv30YjQQf2U8XTmz-zc.png?width=320&crop=smart&auto=webp&s=3012d01a763b34ae84264f2e5877679618a7f5a5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/bupNaD9FWUSdf5QDULnzZJiKzv30YjQQf2U8XTmz-zc.png?width=640&crop=smart&auto=webp&s=b33d4a2a7f5d5b9336a96605402356da86246ca4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/bupNaD9FWUSdf5QDULnzZJiKzv30YjQQf2U8XTmz-zc.png?width=960&crop=smart&auto=webp&s=84704d820ce0a428c7307b3234c61fde6b273e80', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/bupNaD9FWUSdf5QDULnzZJiKzv30YjQQf2U8XTmz-zc.png?width=1080&crop=smart&auto=webp&s=18139f51af4a7e84ca25a2958453556d0687bee0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/bupNaD9FWUSdf5QDULnzZJiKzv30YjQQf2U8XTmz-zc.png?auto=webp&s=fb5655952b335a9de75b05234dd9f8ad3420ff25', 'width': 1200}, 'variants': {}}]} | |
GGUF as a Vector Database | 3 | [https://gist.github.com/davidmezzetti/e6bc7efaab27635ec26346af09f10dee](https://gist.github.com/davidmezzetti/e6bc7efaab27635ec26346af09f10dee) | 2025-11-26T21:19:33 | davidmezzetti | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p7jume | false | null | t3_1p7jume | /r/LocalLLaMA/comments/1p7jume/gguf_as_a_vector_database/ | false | false | default | 3 | {'enabled': True, 'images': [{'id': '02zubr8f4o3g1', 'resolutions': [{'height': 78, 'url': 'https://preview.redd.it/02zubr8f4o3g1.png?width=108&crop=smart&auto=webp&s=1f28ac17ae4e6769b0fdd8f3895c7086a3de6f3c', 'width': 108}, {'height': 156, 'url': 'https://preview.redd.it/02zubr8f4o3g1.png?width=216&crop=smart&auto=webp&s=144ff40764f35377f37e205526a2c9259151d6af', 'width': 216}, {'height': 231, 'url': 'https://preview.redd.it/02zubr8f4o3g1.png?width=320&crop=smart&auto=webp&s=7f3014850a85474d98ab127807c068d2c4fd433f', 'width': 320}, {'height': 463, 'url': 'https://preview.redd.it/02zubr8f4o3g1.png?width=640&crop=smart&auto=webp&s=96cb4098a071bb62dd3974f111ade6f235375f47', 'width': 640}, {'height': 694, 'url': 'https://preview.redd.it/02zubr8f4o3g1.png?width=960&crop=smart&auto=webp&s=84786441a6bb6903875647480ecc3d1bfdd88200', 'width': 960}, {'height': 781, 'url': 'https://preview.redd.it/02zubr8f4o3g1.png?width=1080&crop=smart&auto=webp&s=951fb1568f967c12f21d8d9182d051d848737da0', 'width': 1080}], 'source': {'height': 2096, 'url': 'https://preview.redd.it/02zubr8f4o3g1.png?auto=webp&s=dc61ea29586112c10b00ffbfc1da5871954c9955', 'width': 2896}, 'variants': {}}]} | |
Anyone else getting vague TikTok account violation for Meta AI videos? Appeal denied with no explanation whatsoever (no issue with other social media platforms). | 0 | I got a violation warning for the below Meta AI art video post (I have literally no idea what they found wrong with it, it’s just the above image in screenshot but animated with slow cinematic zoom). I appealed but it was denied with no explanation. I haven’t had any issue on other platforms. Any ideas? | 2025-11-26T21:17:53 | https://www.reddit.com/gallery/1p7jt58 | Pleasant-Shoulder-66 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1p7jt58 | false | null | t3_1p7jt58 | /r/LocalLLaMA/comments/1p7jt58/anyone_else_getting_vague_tiktok_account/ | false | false | 0 | null | |
A Distributed Inference Framework That Lets Apple Silicon Run Models That Exceed Their Physical Memory | 13 | Hey everyone! Today we are making dnet public: a distributed inference framework that lets Apple Silicon clusters run models that exceed their physical memory.
We fuse pipelined-ring parallelism, disk streaming and UMA-aware scheduling so “out of memory” stops being the limit.
[https://github.com/firstbatchxyz/dnet?tab=readme-ov-file](https://github.com/firstbatchxyz/dnet?tab=readme-ov-file)
In alpha, we ship a pipelined-ring strategy inspired by PRIMA.CPP. dnet’s solver (distilp) extends it so devices can punch above memory: layers stream from disk mid-round and overlap with compute, so total model size can exceed total cluster RAM.
Please let us know if you have any questions or feedback! | 2025-11-26T21:13:38 | batuhanaktass | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p7jpap | false | null | t3_1p7jpap | /r/LocalLLaMA/comments/1p7jpap/a_distributed_inference_framework_that_lets_apple/ | false | false | default | 13 | {'enabled': True, 'images': [{'id': 'jgp03cnb3o3g1', 'resolutions': [{'height': 89, 'url': 'https://preview.redd.it/jgp03cnb3o3g1.png?width=108&crop=smart&auto=webp&s=fba8023cb5be3aba69a291167deade809c795433', 'width': 108}, {'height': 178, 'url': 'https://preview.redd.it/jgp03cnb3o3g1.png?width=216&crop=smart&auto=webp&s=3325949d46180e1fea8fc85a0ed723e4ea4ab2a2', 'width': 216}, {'height': 264, 'url': 'https://preview.redd.it/jgp03cnb3o3g1.png?width=320&crop=smart&auto=webp&s=06c46a57c0f08b817ad8690265a9366af7f316c6', 'width': 320}, {'height': 528, 'url': 'https://preview.redd.it/jgp03cnb3o3g1.png?width=640&crop=smart&auto=webp&s=9a8fb6fe5a6a42c8db4ba007716d49464d06f6d3', 'width': 640}, {'height': 792, 'url': 'https://preview.redd.it/jgp03cnb3o3g1.png?width=960&crop=smart&auto=webp&s=ebabcd766fea011b6ad092a0eda5a16fedd855b0', 'width': 960}, {'height': 891, 'url': 'https://preview.redd.it/jgp03cnb3o3g1.png?width=1080&crop=smart&auto=webp&s=d7529d5ef35e8ba8c09487bdd0d002771cfcf5cb', 'width': 1080}], 'source': {'height': 2192, 'url': 'https://preview.redd.it/jgp03cnb3o3g1.png?auto=webp&s=449847ddd1e4b66ff71d28fcd1a70c14bc752150', 'width': 2654}, 'variants': {}}]} | |
I built a one-click local AI app with web search and document Q&A - no API keys, no cloud, runs on your hardware. | 2 |
After 4 months of building, I'm launching NeuralMerge.
What it does:
\- Detects your hardware and installs the right AI model automatically
\- Attach unlimited documents, ask questions, get answers with citations (like Perplexity but local)
\- Web grounding - your local AI can search the web for real-time info, reducing hallucinations
\- Everything runs on your machine. Your data never leaves.
Why I built it:
Small local models hallucinate a lot. Web grounding + RAG fixes that by giving the AI real sources to cite instead of making stuff up.
Tech:
\- Embedded Ollama (no separate install)
\- Brave Search for web grounding
\- Local vector DB for document search
Looking for early users to try it and tell me what's broken. | 2025-11-26T21:07:36 | danishlynx | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p7jjln | false | null | t3_1p7jjln | /r/LocalLLaMA/comments/1p7jjln/i_built_a_oneclick_local_ai_app_with_web_search/ | false | false | default | 2 | {'enabled': True, 'images': [{'id': 'dybhc2uc2o3g1', 'resolutions': [{'height': 67, 'url': 'https://preview.redd.it/dybhc2uc2o3g1.png?width=108&crop=smart&auto=webp&s=64f0730b735143258844fb0fbff7fd47a7fec568', 'width': 108}, {'height': 135, 'url': 'https://preview.redd.it/dybhc2uc2o3g1.png?width=216&crop=smart&auto=webp&s=6b7c637a2f54b89b01ca18735033f1a44de07e09', 'width': 216}, {'height': 200, 'url': 'https://preview.redd.it/dybhc2uc2o3g1.png?width=320&crop=smart&auto=webp&s=1cb147e1ce315d1aeeaaf0f02233e1a557ae18ea', 'width': 320}, {'height': 400, 'url': 'https://preview.redd.it/dybhc2uc2o3g1.png?width=640&crop=smart&auto=webp&s=b6aeb2b635518d700040efe6fafc3c7c788f7943', 'width': 640}, {'height': 601, 'url': 'https://preview.redd.it/dybhc2uc2o3g1.png?width=960&crop=smart&auto=webp&s=a6bd5d84f00df377a2a9fa3d203f80be80f564f9', 'width': 960}, {'height': 676, 'url': 'https://preview.redd.it/dybhc2uc2o3g1.png?width=1080&crop=smart&auto=webp&s=59128d4f426b3aea97a8bf3f45471804ab2f5b2f', 'width': 1080}], 'source': {'height': 789, 'url': 'https://preview.redd.it/dybhc2uc2o3g1.png?auto=webp&s=46d245debdd2895a00ce3f85e0d6acd75ce3901c', 'width': 1260}, 'variants': {}}]} | |
MIT study finds AI can already replace 11.7% of U.S. workforce | 79 | 2025-11-26T21:07:33 | https://www.cnbc.com/2025/11/26/mit-study-finds-ai-can-already-replace-11point7percent-of-us-workforce.html | fallingdowndizzyvr | cnbc.com | 1970-01-01T00:00:00 | 0 | {} | 1p7jjjx | false | null | t3_1p7jjjx | /r/LocalLLaMA/comments/1p7jjjx/mit_study_finds_ai_can_already_replace_117_of_us/ | false | false | default | 79 | {'enabled': False, 'images': [{'id': 'HwnpM9WtsIRKecGGVlcGR4tSTRZ1axmq4_Ifq-KqB18', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/HwnpM9WtsIRKecGGVlcGR4tSTRZ1axmq4_Ifq-KqB18.jpeg?width=108&crop=smart&auto=webp&s=b8e7ae31ee3d1be2f5935ad1db5559ddb40783ff', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/HwnpM9WtsIRKecGGVlcGR4tSTRZ1axmq4_Ifq-KqB18.jpeg?width=216&crop=smart&auto=webp&s=c07698e83975e7ad50ac9c896d32e8e87f267e4e', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/HwnpM9WtsIRKecGGVlcGR4tSTRZ1axmq4_Ifq-KqB18.jpeg?width=320&crop=smart&auto=webp&s=5474ec24127e397fd7c1d37f0c81078f8f6ae099', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/HwnpM9WtsIRKecGGVlcGR4tSTRZ1axmq4_Ifq-KqB18.jpeg?width=640&crop=smart&auto=webp&s=396ae2bd6295951d825acf6a71c339cd9700a613', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/HwnpM9WtsIRKecGGVlcGR4tSTRZ1axmq4_Ifq-KqB18.jpeg?width=960&crop=smart&auto=webp&s=2923768dc79c70c58c3b814bc464610312511de0', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/HwnpM9WtsIRKecGGVlcGR4tSTRZ1axmq4_Ifq-KqB18.jpeg?width=1080&crop=smart&auto=webp&s=409050305491d50a0eb1638cb6f2de508b0dd490', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/HwnpM9WtsIRKecGGVlcGR4tSTRZ1axmq4_Ifq-KqB18.jpeg?auto=webp&s=51f94e2ebe8e75d1c86d2947e3437828f8730e85', 'width': 1920}, 'variants': {}}]} | |
I built a one-click local AI app with web search and document Q&A - no API keys, no cloud, runs on your hardware. | 1 |
After 4 months of building, I'm launching NeuralMerge.
What it does:
\- Detects your hardware and installs the right AI model automatically
\- Attach unlimited documents, ask questions, get answers with citations (like Perplexity but local)
\- Web grounding - your local AI can search the web for real-time info, reducing hallucinations
\- Everything runs on your machine. Your data never leaves.
Why I built it:
Small local models hallucinate a lot. Web grounding + RAG fixes that by giving the AI real sources to cite instead of making stuff up.
Tech:
\- Embedded Ollama (no separate install)
\- Brave Search for web grounding
\- Local vector DB for document search
Looking for early users to try it and tell me what's broken. | 2025-11-26T21:07:19 | https://www.reddit.com/gallery/1p7jjcp | danishlynx | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1p7jjcp | false | null | t3_1p7jjcp | /r/LocalLLaMA/comments/1p7jjcp/i_built_a_oneclick_local_ai_app_with_web_search/ | false | false | default | 1 | null |
I built a one-click local AI app with web search and document Q&A - no API keys, no cloud, runs on your hardware. | 1 |
After 4 months of building, I'm launching NeuralMerge.
What it does:
\- Detects your hardware and installs the right AI model automatically
\- Attach unlimited documents, ask questions, get answers with citations (like Perplexity but local)
\- Web grounding - your local AI can search the web for real-time info, reducing hallucinations
\- Everything runs on your machine. Your data never leaves.
Why I built it:
Small local models hallucinate a lot. Web grounding + RAG fixes that by giving the AI real sources to cite instead of making stuff up.
Tech:
\- Embedded Ollama (no separate install)
\- Brave Search for web grounding
\- Local vector DB for document search
Looking for early users to try it and tell me what's broken. | 2025-11-26T21:07:18 | https://www.reddit.com/gallery/1p7jjbt | danishlynx | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1p7jjbt | false | null | t3_1p7jjbt | /r/LocalLLaMA/comments/1p7jjbt/i_built_a_oneclick_local_ai_app_with_web_search/ | false | false | default | 1 | null |
I built a one-click local AI app with web search and document Q&A - no API keys, no cloud, runs on your hardware. | 1 |
After 4 months of building, I'm launching NeuralMerge.
What it does:
\- Detects your hardware and installs the right AI model automatically
\- Attach unlimited documents, ask questions, get answers with citations (like Perplexity but local)
\- Web grounding - your local AI can search the web for real-time info, reducing hallucinations
\- Everything runs on your machine. Your data never leaves.
Why I built it:
Small local models hallucinate a lot. Web grounding + RAG fixes that by giving the AI real sources to cite instead of making stuff up.
Tech:
\- Embedded Ollama (no separate install)
\- Brave Search for web grounding
\- Local vector DB for document search
Looking for early users to try it and tell me what's broken. | 2025-11-26T21:06:58 | https://www.reddit.com/gallery/1p7jizv | danishlynx | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1p7jizv | false | null | t3_1p7jizv | /r/LocalLLaMA/comments/1p7jizv/i_built_a_oneclick_local_ai_app_with_web_search/ | false | false | default | 1 | null |
Minimax-Thrift a Pruned Minimax M2 for consumer cards

I did a bunch of work getting this set up. It includes a proxy for thinking/analysis injection, per the MiniMax M2 guide, to get the best results.
Verified to work, I'm using it as I type this. Would be great across dual RTX Pro 6000s to run 500k kvcache or so with a highly capable model.
Tool calling verified to work.
Cline verified to work.
The thinking proxy needs a small amount of coding work on your part to make it compatible, but there is a guide on how to modify Open WebUI to make it work (2 edits). Then run it between your vLLM server and the client to get full thinking injection working. The delay the proxy incurs is undetectable to a human, a few ms at most on a Zen 5 CPU.
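For context, the injection itself is conceptually simple: the proxy rewrites the chat request before forwarding it to vLLM, folding previously saved reasoning back into the assistant turns. A rough sketch of that idea (hypothetical field names, not the actual proxy code):

```python
# Hypothetical sketch: re-attach stored "thinking" content to assistant turns
# before forwarding a chat request to the backend, along the lines of the
# MiniMax M2 guidance on keeping reasoning in-context across turns.

def inject_thinking(messages):
    """Return a copy of messages with stored reasoning folded back in.

    Assumes each assistant message may carry a 'reasoning' key saved from a
    previous response; a real proxy would read this from its own cache.
    """
    patched = []
    for msg in messages:
        if msg.get("role") == "assistant" and msg.get("reasoning"):
            content = f"<think>{msg['reasoning']}</think>{msg.get('content', '')}"
            patched.append({"role": "assistant", "content": content})
        else:
            patched.append(dict(msg))
    return patched
```

The real work is mostly plumbing: intercepting the HTTP body, applying a transform like this, and streaming the response back untouched.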
[https://huggingface.co/tcclaviger/Minimax-M2-Thrift-GPTQ-W4A16-AMD](https://huggingface.co/tcclaviger/Minimax-M2-Thrift-GPTQ-W4A16-AMD)

(score 21, posted by Sea-Speaker1700, 2025-11-26)
[Guide] Running NVIDIA’s new Omni-Embed-3B (Vectorize Text/Image/Audio/Video in the same vector space!)

Hey folks,
I wanted to play with this model really bad but couldn't find a project on it, so I spent the afternoon getting one up! It feels pretty sick: it maps text, images, audio, and video into the same vector space, meaning you can search your video library using text or find audio clips that match an image.
I managed to get it running smoothly on my RTX 5070 Ti (12 GB).
Since it's an experimental model, troubleshooting was hell, so there's an AI-generated SUMMARY.md covering the issues I ran into.
I also slapped a local vector index on it, so you can do stuff like search for "a dog barking" and get back both the .wav file and the video clip!
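Since everything lands in one vector space, the retrieval side can stay dead simple. A minimal sketch of cross-modal search over precomputed embeddings (assumes you've already run the model to get the vectors; file names are illustrative):

```python
import numpy as np

def cosine_top_k(query_vec, index, k=2):
    """Rank stored items by cosine similarity to a query embedding.

    index: dict mapping item name (e.g. 'bark.wav', 'dog_clip.mp4') to its
    embedding vector. All vectors share the model's embedding dimension,
    so audio, video, and text results compete in the same ranking.
    """
    q = query_vec / np.linalg.norm(query_vec)
    scores = {name: float(np.dot(q, v / np.linalg.norm(v)))
              for name, v in index.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

With the text query "a dog barking" embedded into `query_vec`, the matching .wav and video clip naturally bubble to the top of the same list.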
**License Warning:**
Heads up that NVIDIA released this under their Non-Commercial License (Research/Eval only), so don't build a startup on it yet.
Here's the repo: https://github.com/Aaryan-Kapoor/NvidiaOmniEmbed
Model: https://huggingface.co/nvidia/omni-embed-nemotron-3b
May your future be full of VRAM.

(score 15, posted by KvAk_AKPlaysYT, 2025-11-26)
most efficient way to learn new skill?

Curious what approaches folks use to pick up a new skill (like a new language, framework, or technology). I've always done YouTube videos and tried building projects; curious if people have found AI tools to be helpful or just a crutch for actually understanding something.

(score 2, posted by JBG32123, 2025-11-26)
BYOK AI Autocomplete extension

Hi everyone, I built an autocomplete extension that supports the Cerebras API key. Feel free to check it out and share any feedback or suggestions. Link: https://marketplace.visualstudio.com/items?itemName=fsiovn.ai-autocomplete

(score 0, posted by LeTanLoc98, 2025-11-26)
Local LLaMA vector-enriched RAG

I got tired of blowing out my limits from Claude and decided to build a token-efficiency-focused RAG system that uses my humble laptop GPU.
This system lowers my personal spend by over 80 percent, by my swag analysis. I've dropped from the $200/month chat-gipity subscription to the $20 one and almost never go over usage now, and where I used to blow out my $30/month Claude team subscription by Tuesday using Desktop Commander, I now usually make it to Friday or Saturday. I included middleware for the Desktop Commander connection, my TUI wrapper scripts for MiniMax (using the Claude TUI) and Codex, the software tests for the repo, and a sample prompt for my ruthless testing agent to go with it.
The system logically chunks data > vectorizes > enriches with Qwen 7B (deterministic failover to Qwen 14B), and builds an LLM-friendly schema that helps LLMs do the initial codebase investigation with extreme token efficiency, by letting them ingest only the line numbers holding the highest-value data and ranking the results. If you like, you can grab it here; it's open source. Hopefully a few other token-spend thrifts will get great use from it.
[https://github.com/vmlinuzx/llmc/](https://github.com/vmlinuzx/llmc/)
What's next? I'm looking at tackling that whitepaper from Anthropic, "Code execution with MCP: Building more efficient agents". They are headed in the right direction, but I believe I have a more novel, more token-thrifty approach. If there are any students out there looking for a good topic for a research paper, hit me up; I'll give you the topic and help.
High level Capabilities:
* Core RAG in sqlite engine: Local, file based RAG index that keeps a repo's code and docs searchable without calling an LLM.
* Schema graph and GraphRAG: Builds and uses a code level graph of entities and relationships to make RAG answers structure aware.
* RAG Nav and freshness routing: High level tools that sit in front of RAG and decide when and how to use the index.
* Daemon, workers, and service wrapper: Background process that keeps repos indexed and enriched on a schedule.
* Repo registration and workspace safety: Tools that know which repos are managed and where their .llmc workspaces live.
* Desktop Commander and MCP integration: Wraps the RAG tools as safe, documented tools for agent frameworks.
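To illustrate the "return ranked line ranges instead of whole files" idea the capabilities above describe, here's a toy sketch (plain term overlap instead of embeddings, and not llmc's actual schema):

```python
def rank_chunks(query, chunks, k=3):
    """Score indexed chunks and return ranked (file, line-range) hits.

    chunks: list of dicts like {"file": ..., "start": ..., "end": ..., "text": ...}.
    A real system would score with embeddings; term overlap keeps this
    sketch dependency-free while showing the shape of the compact output
    an LLM would ingest instead of whole files.
    """
    terms = set(query.lower().split())
    scored = []
    for c in chunks:
        overlap = len(terms & set(c["text"].lower().split()))
        if overlap:
            scored.append((overlap, c))
    scored.sort(key=lambda t: t[0], reverse=True)
    return [{"file": c["file"], "lines": (c["start"], c["end"]), "score": s}
            for s, c in scored[:k]]
```

The token savings come from the output shape: the model reads a handful of `file:start-end` pointers with scores, then fetches only the lines it actually needs.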
(score 2, posted by CarelessOrdinary5480, 2025-11-26)
Tongyi-MAI/Z-Image-Turbo · Hugging Face

Link: https://huggingface.co/Tongyi-MAI/Z-Image-Turbo

(score 157, posted by abdouhlili, 2025-11-26)
Are there any open source efforts to upgrade old instruct datasets via rewriting?

I'm always on the lookout for good instruct training datasets, and new ones pop up from time to time, but there are also a bunch of older datasets synthesized with GPT-3, like https://github.com/yizhongw/self-instruct.git
It would be nice to use modern models to rewrite these datasets to improve their quality, and I've poked at it a bit, but I'm ill-equipped to process them en masse with models which are large enough to do a really good job but also legally unencumbered license-wise.
Does anyone know if there are better-equipped groups out there "refurbishing" older synthetic datasets like this? I'd love to participate, or at least mooch off their end results :-)

(score 8, posted by ttkciar, 2025-11-26)
Best local model?

I'm new to AI models, so please bear with me. I'm wondering which model is closest to ChatGPT that can run on a local GPU; I'm currently using an RTX 5070 Ti. I've tried a few local ones like gpt-oss:20b and llama:3b, but none of them can read files like PDF or JPEG, or create images. For now, I'm using Docker Desktop with Open WebUI. Is there any free model that can emulate ChatGPT, like analyzing and creating images, and reading documents? And is there any feature for stacking multiple models, like using gpt-oss for chat, and when you request an image, gpt-oss automatically unloads from VRAM, LLaVA loads and runs until the image is created, then gpt-oss takes over from there?

(score 0, posted by Azien345q, 2025-11-26)
Best STT for slurred speech, bad mic quality, etc.

I've been using faster-whisper medium, but it just doesn't pick up on people sometimes (often enough for it to be a problem).
Some of the speakers in my use case have bad mic quality, slurred speech, rapid speech, compression, strong accents, etc.
How well does parakeet v3 or v2, for example, do with these kinds of things?

(score 1, posted by xt8sketchy, 2025-11-26)
What's the best AI assistant for day to day use?

Last week I was completely fried. Wasn't even doing anything heavy, just trying to wrap up a small project, but my laptop (a ProBook) kept choking like it was about to give up on me. I had three AI chats running, some PDFs open, and my code editor going. Claude was helping me rewrite part of a report, ChatGPT was fixing my Python mess, and DeepSeek was pulling references. Oh, and Gemini was just sitting there in another tab in case I needed an image (sharing the account).
It's the constant switching that kills me more than the actual work. None of these models do everything, so I'm constantly hopping around. Claude's great for writing and editing, ChatGPT handles coding and debugging really well, DeepSeek digs up research and references faster than the others, and Gemini's solid for quick image generation. But running them all together turns my laptop into a furnace. Slow loads, random freezes, fans screaming. I felt like there was a motor running under my system at one point. My laptop's definitely sick of me at this point.
I kept seeing people hype up GPT-5.1, but I just can't swing the cost right now. So I started hunting for decent free options and ended up back on HuggingFace. After way too much trial and error, I gave Qwen another shot, and wow, it actually impressed me. Also tried Kimi K2 since everyone won't shut up about it. Both held their own against paid models, which was awesome, open source models rock man!
Qwen even crushed an image generation test I threw at it. Way more realistic than I expected from something free. Now I'm wondering what else I've been missing. If these two are this solid, there's gotta be more out there.
How'd Qwen or Kimi K2 work for you? And what other free models should I check out? By models I mean one thing that can achieve everything that Claude, DeepSeek and Gemini can do. Right now I am leaning towards Qwen Max a bit.
(score 40, posted by Due_Moose2207, 2025-11-26)
For those of you who've successfully launched enterprise applications, what were the biggest data-related roadblocks you had to overcome? It seems everyone's so focused on the AI tools, but you can't outrun shitty data governance with agents.

Any tips or tricks for getting the higher-ups to act on this before going full throttle on agents would be appreciated.

(score 0, posted by gigDriversResearch, 2025-11-26)
Qwen3 Next almost ready in llama.cpp

After over two months of work, the PR is now approved and looks like it will be merged soon.
Congratulations to u/ilintar for completing a big task!
GGUFs
[https://huggingface.co/lefromage/Qwen3-Next-80B-A3B-Instruct-GGUF](https://huggingface.co/lefromage/Qwen3-Next-80B-A3B-Instruct-GGUF)
[https://huggingface.co/ilintar/Qwen3-Next-80B-A3B-Instruct-GGUF](https://huggingface.co/ilintar/Qwen3-Next-80B-A3B-Instruct-GGUF)
For speeeeeed (on NVIDIA) you also need CUDA-optimized ops
[https://github.com/ggml-org/llama.cpp/pull/17457](https://github.com/ggml-org/llama.cpp/pull/17457) - SOLVE_TRI
[https://github.com/ggml-org/llama.cpp/pull/16623](https://github.com/ggml-org/llama.cpp/pull/16623) - CUMSUM and TRI
PR: https://github.com/ggml-org/llama.cpp/pull/16095

(score 324, posted by jacek2023, 2025-11-26)
# Contributor Agreement & Roles for Qwen3‑Next 80B‑A3B Integration into llama.cpp and LM Studio (please collaborate!)
**Project Objective:** Integrate Qwen3‑Next 80B‑A3B into llama.cpp and LM Studio with full fidelity, optimized performance, and ecosystem compatibility.
**Scope:** All contributors agree to collaborate on technical specification, implementation, testing, kernel optimization, conversion pipeline, QA, and documentation, as per the phases outlined below.
# Phase 1 — Technical Specification
**Objective:** Produce formal specification of the Gated DeltaNet layer and related atomic operations; identify gaps between PyTorch implementation and GGML support; define fallback, optimized, and hybrid strategies.
**Core Spec Authors (3):**
* **Songlin Yang** — co-author Gated DeltaNet; responsible for mapping academic model to pseudocode and atomic operations.
* **Jan Kautz** — co-author, hardware-aware systems design; ensures performance-oriented architecture translation.
* **Sebastian Raschka** — translate model operations into clear pseudocode suitable for implementers; bridge academic ↔ practical coding.
**Consultants / Reviewers (Chinese contributors):**
* **An Yang, Anfeng Li, Baosong Yang, Binyuan Hui, Zihan Qiu** — review pseudocode, validate gating / memory / WY transform semantics, ensure consistency with original model.
**Responsibilities:**
* Review all specifications for correctness.
* Sign off on pseudocode before implementation phase.
* Provide hyperparameter, chunk size, gating, and memory decay insights.
# Phase 2 — Implementation & Testing
**Objective:** Implement fallback layer in llama.cpp, develop optimized kernels, perform numeric tests, support quantization / MoE if applicable.
**Core Implementers (3):**
* **Georgi Gerganov** — llama.cpp integration, fallback implementation, kernel API exposure.
* **Daniel Han (Unsloth)** — quantization hooks, performance optimization, GPU/AVX/NEON acceleration.
* **Jan Kautz** — optimized kernel design, vectorization, numeric fidelity assurance.
**Advisory / Model Reviewers (Chinese contributors):**
* **An Yang** — validate correctness and edge-case behaviors (chunking, gating).
* **Baosong Yang** — memory, gating, delta-rule behavior review.
* **Binyuan Hui / Zihan Qiu** — quantization and sparsity effect review; ensure fidelity to original model.
**Responsibilities:**
* Core implementers write, test, and merge code.
* Advisors review numeric outputs, edge cases, and semantic fidelity.
* Document all test scripts and profiling results for reproducibility.
# Phase 3 — Ecosystem Integration & QA
**Objective:** Build conversion pipeline, run end-to-end tests in LM Studio, validate front-end compatibility, benchmark and ensure fallback safety.
**Core Integration / QA Team (3):**
* **Georgi Gerganov** — loader, GGUF format support, core llama.cpp integration.
* **Daniel Han** — performance tuning, benchmarking, quantization validation.
* **Sebastian Raschka** — documentation, tutorials, community testing support.
**Model-Team Reviewers (Chinese contributors):**
* **Anfeng Li, Baosong Yang, Zihan Qiu** — model correctness validation for converted models, long-context performance, gating/memory behavior checks.
**Responsibilities:**
* Ensure PyTorch → GGUF conversion preserves all metadata.
* Run round-trip tests and end-to-end inference in LM Studio.
* Identify and document any discrepancies in performance or output fidelity.
# Phase 4 — Upstreaming & Maintenance
**Objective:** Maintain incremental PRs, CI pipelines, documentation, tutorials, and long-term model correctness; manage community contributions.
**Core Maintainers (3):**
* **Georgi Gerganov** — PR review, merges, versioning, CI management.
* **Daniel Han** — maintain performance kernels, GPU / quantization support, monitor regression tests.
* **Sebastian Raschka** — documentation, tutorials, onboarding guides, community management.
**Advisory / On-Demand Reviewers (Chinese contributors):**
* **An Yang, Anfeng Li, Baosong Yang, Binyuan Hui, Zihan Qiu** — consult on gating, long-context, quantization, sparsity, and major model updates.
**Responsibilities:**
* Core maintainers ensure stable releases and functional CI pipelines.
* Advisory reviewers provide technical validation and guidance on complex issues.
# General Terms
1. **Communication & Coordination:** All contributors agree to communicate via project GitHub issues, shared repositories, and scheduled technical review meetings.
2. **Intellectual Property:** Original code contributions will remain under the open-source license of llama.cpp / LM Studio as applicable; all reviewers’ advisory input is acknowledged in documentation.
3. **Conflict Resolution:** Disagreements on technical implementation should be escalated to a joint review meeting with core maintainers + model reviewers.
4. **Deliverables:** Incremental PRs, unit tests, benchmarks, documentation, tutorials, and validated integration into LM Studio.
**Acknowledgements:** This agreement recognizes the contributions of the original Gated DeltaNet authors, Qwen3‑Next contributors, and open-source maintainers for collaborative development and model fidelity assurance.

(score 0, posted by Icy_Resolution8390, 2025-11-26)
SGLang just solved FP8 stability for RL training - turns out it was the quantization step all along
This is pretty huge if you've been trying to do RLHF or any RL fine-tuning locally. Mixed precision has been a pain point forever.
Link to their technical breakdown: [https://x.com/lmsysorg/status/1993755241825882180?s=20](https://x.com/lmsysorg/status/1993755241825882180?s=20)
Anyone here tried SGLang for RL workflows? Curious how this compares to other frameworks.
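For anyone who hasn't touched FP8: the instability story usually comes down to how the per-tensor scale is chosen before the cast. A toy round-trip (pure-Python stand-in for scaled quantization, not SGLang's code; E4M3's max normal value is 448):

```python
def fp8_roundtrip(values, max_repr=448.0, levels=256):
    """Simulate scaled quantization: scale into the representable range,
    snap to a coarse grid (a stand-in for the actual FP8 cast), then
    rescale back.

    Returns (dequantized values, scale). A poorly chosen scale, e.g. one
    computed from a stale max, is exactly where training pipelines tend
    to clip or lose precision.
    """
    amax = max(abs(v) for v in values)
    scale = amax / max_repr if amax > 0 else 1.0
    step = 2 * max_repr / (levels - 1)          # coarse uniform grid stand-in
    deq = [round((v / scale) / step) * step * scale for v in values]
    return deq, scale
```

With a fresh `amax` the round-trip error stays tiny; feed it a scale from an out-of-date max and large activations get clipped, which is the kind of drift that compounds over an RL run.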
(score 12, posted by Expert-Pineapple-740, 2025-11-26)
Everyone’s building AI agents. Almost no one is building the Stripe for Agents.

[removed]

(score 1, posted by geniusgeek, 2025-11-26)
Why it's getting worse for everyone: The recent influx of AI psychosis posts and "Stop LARPing"

Link: https://www.reddit.com/r/LocalLLaMA/comments/1p7ghyn/why_its_getting_worse_for_everyone_the_recent/

(score 207, posted by Chromix_, 2025-11-26)