title (string, len 1–300) | score (int64, 0–8.54k) | selftext (string, len 0–41.5k) | created (timestamp[ns], 2023-04-01 04:30:41 – 2026-03-04 02:14:14, nullable) | url (string, len 0–878) | author (string, len 3–20) | domain (string, len 0–82) | edited (timestamp[ns], 1970-01-01 00:00:00 – 2026-02-19 14:51:53) | gilded (int64, 0–2) | gildings (string, 7 classes) | id (string, len 7) | locked (bool, 2 classes) | media (string, len 646–1.8k, nullable) | name (string, len 10) | permalink (string, len 33–82) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, len 4–213, nullable) | ups (int64, 0–8.54k) | preview (string, len 301–5.01k, nullable) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
I created a coding tool that produces prompts simple enough for smaller, local models | 96 | Hi guys. I'm working on a free and open-source tool that is non-agentic. This design choice keeps messages very simple, as all the model sees are hand-picked files and simple instructions. In the example above, I didn't have to tell the model I wanted to edit the "checkpoints" feature, as this is the only feature attached in context.
This simple approach makes it fully viable to code with smaller, locally hosted models like Qwen 32B.
Ollama is included in the list of providers, and the tool automatically reads downloaded models. The tool can also initialize web chats, and Open WebUI is supported. | 2025-11-22T11:24:53 | robertpiosik | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p3qxj4 | false | null | t3_1p3qxj4 | /r/LocalLLaMA/comments/1p3qxj4/i_created_a_coding_tool_that_produce_prompts/ | false | false | default | 96 | {'enabled': True, 'images': [{'id': 'ueah8pouks2g1', 'resolutions': [{'height': 110, 'url': 'https://preview.redd.it/ueah8pouks2g1.png?width=108&crop=smart&auto=webp&s=38d2d69548006eb5f7150fd08ad7306d7d0a845b', 'width': 108}, {'height': 220, 'url': 'https://preview.redd.it/ueah8pouks2g1.png?width=216&crop=smart&auto=webp&s=882ca2be9bcefa6fdcc5fdd047551c5edd8061df', 'width': 216}, {'height': 326, 'url': 'https://preview.redd.it/ueah8pouks2g1.png?width=320&crop=smart&auto=webp&s=fe34cee275cc7f01eff4492cb8e4f4e537eb3baa', 'width': 320}, {'height': 653, 'url': 'https://preview.redd.it/ueah8pouks2g1.png?width=640&crop=smart&auto=webp&s=9b5a4b22a09083ae0eef1b09257c99563e73b72a', 'width': 640}], 'source': {'height': 834, 'url': 'https://preview.redd.it/ueah8pouks2g1.png?auto=webp&s=392796973e9aada4d1abbd5d1905ce9be7d97276', 'width': 817}, 'variants': {}}]} | |
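The non-agentic approach described above is simple enough to sketch. This is my assumption of the idea, not the tool's actual code: a prompt is just the hand-picked files concatenated into a context block, followed by the instruction. The `build_prompt` name and the `<file>` tag format are illustrative choices.

```python
# Sketch of a non-agentic prompt builder: no tool calls, no planning,
# just selected files plus an instruction (names and format are assumptions).
from pathlib import Path

def build_prompt(files: list[str], instruction: str) -> str:
    """Concatenate hand-picked files into one context block, then append the task."""
    parts = []
    for f in files:
        text = Path(f).read_text(encoding="utf-8")
        parts.append(f'<file path="{f}">\n{text}\n</file>')
    return "\n".join(parts) + f"\n\nTask: {instruction}\n"
```

Because the model only ever sees the attached files, a small local model does not need to infer which feature is meant; the selection itself carries that information.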
Live Face Tracking+Recognition | 1 | Hi there, I'm totally new to coding and AI. But since it's everywhere, my mind keeps thinking about it. So I came up with an idea I'm trying to develop, which would be live face recognition and tracking, and I want to discuss it. The thing is, on New Year's Eve my group of friends celebrates a PowerPoint night, and each year it gets better. This year, I wanted to do a Dundies-style ceremony but with visual support like the Oscars stream. So the thing I wanted to achieve is that scene where they show the 4 nominees live. But I don't have 4 cameras nor 4 operators. So I figured, since AI is powerful enough to do almost anything these days, maybe my best approach would be to code some app that, from a wide general shot, could detect the different faces, and create a fake camera source for OBS to use with each face... I'm already trying it with the new Google Antigravity, but it's getting really hard to get to a usable point... That said, I'd love to read your takes on this... Thank u | 2025-11-22T10:59:58 | https://www.reddit.com/r/LocalLLaMA/comments/1p3qii6/live_face_trackingrecognition/ | whitesharkdabist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p3qii6 | false | null | t3_1p3qii6 | /r/LocalLLaMA/comments/1p3qii6/live_face_trackingrecognition/ | false | false | self | 1 | null |
Taking a step back for a few months | 0 | With the rapid pace of changes and new models being released all the time, I'm a bit overwhelmed trying to follow everything.
I would really like to have something on par with Claude 4.5 Sonnet running locally. Obviously such an open source model does not exist yet, but I have high hopes that in 6-12 months, AllenAI or someone else will release a truly open model that I can use to help build some small, niche businesses.
So my question is - Do you think it's reasonable to step back until the "dust settles"? My primary use-cases are agentic coding and OCR. It seems to me that if I wait a few more months, the solution(s) will be there and that's very preferable to me burning money on APIs or trying to run LLMs locally for complex code tasks which just aren't quite there yet.
I don't really have my finger on the pulse of everything happening and obviously it's all just speculation, but does anyone have a guesstimate of when open models might get to this point?
| 2025-11-22T10:49:28 | https://www.reddit.com/r/LocalLLaMA/comments/1p3qcgd/taking_a_step_back_for_a_few_months/ | Ralph7021 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p3qcgd | false | null | t3_1p3qcgd | /r/LocalLLaMA/comments/1p3qcgd/taking_a_step_back_for_a_few_months/ | false | false | self | 0 | null |
AnythingLLM - How to and which Embedder is best for English/German? | 3 | I'm still getting used to it, and as I write German/English texts I use "multilingual-e5-small" as the embedder. The only problem is that AnythingLLM crashes every 2-3 prompts.
ChatGPT told me it's probably because the "ONNX embedder" crashes on large prompts (even though I have a 128GB M4 Mac Studio).
Now I need to know: how can I switch the embedder to get great German/English translations when needed?
Or is this irrelevant, and the regular AnythingLLM embedder is good enough?
Does it make sense to use a different embedder than AnythingLLM? | 2025-11-22T10:04:08 | https://www.reddit.com/r/LocalLLaMA/comments/1p3pmpa/anythingllm_how_to_and_which_embeder_is_best_for/ | Inevitable_Raccoon_9 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p3pmpa | false | null | t3_1p3pmpa | /r/LocalLLaMA/comments/1p3pmpa/anythingllm_how_to_and_which_embeder_is_best_for/ | false | false | self | 3 | null |
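One way to answer this kind of question empirically (my suggestion, not anything AnythingLLM does internally) is to embed a German/English translation pair with each candidate embedder and check that translations land closer together than unrelated sentences. Cosine similarity is the usual metric; the function below shows it on plain vectors, since the real embedding call depends on the model you pick.

```python
# Cosine similarity between two embedding vectors: a simple way to compare
# how well an embedder aligns cross-lingual sentence pairs.
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Return the cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)
```

With a real embedder you would compare, say, the vectors for "the cat sleeps" and "die Katze schläft", and prefer the model that gives translation pairs consistently higher similarity than unrelated pairs.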
Rust HF Downloader (Yet Another TUI) | 20 | I love the terminal, but I don't exactly love copy-pasting names of models and URLs of a specific quantization or file to download using the huggingface cli.
There are probably better ways, but I just rolled my own!

--
Introducing: 💥 Rust HF Downloader 💥
A Terminal User Interface (TUI) application for searching, browsing, and downloading models from the HuggingFace model hub.
Please break it. And then tell me how you broke it! | 2025-11-22T09:53:40 | https://github.com/JohannesBertens/rust-hf-downloader | johannes_bertens | github.com | 1970-01-01T00:00:00 | 0 | {} | 1p3pggo | false | null | t3_1p3pggo | /r/LocalLLaMA/comments/1p3pggo/rust_hf_downloader_yet_another_tui/ | false | false | default | 20 | null |
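For anyone curious what a search TUI like this can sit on top of: the HuggingFace Hub exposes a public REST endpoint for model search. The sketch below (Python for illustration; the tool itself is Rust) just builds the query URL; `search` and `limit` are standard Hub API parameters.

```python
# Build a HuggingFace Hub model-search URL (the endpoint a TUI can query).
from urllib.parse import urlencode

def hf_search_url(query: str, limit: int = 20) -> str:
    """Return the Hub API URL for a model search with the given query."""
    params = urlencode({"search": query, "limit": limit})
    return f"https://huggingface.co/api/models?{params}"
```

Fetching that URL returns a JSON list of matching repos, from which per-file download links (including specific GGUF quantizations) can be resolved.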
Experimental: Replacing Softmax with Kuramoto Phase-Locking. O(N) Attention + Energy Reservoir RNN | 0 | Architecture: thermodynamic/oscillatory neural network replacing O(N^2) dot-product attention with O(N) phase sync.
Core:
1. Energy reservoir neuron: Continuous state manifold. Replaces static weights with expansion/collapse energy dynamics (Leaky-Integrate-Fire variant)
2. Kuramoto Attention: Replaces softmax(QK^T) with order parameter O = cos(θ_i - θ_j). Models information transfer as phase-locking rather than magnitude matching
3. Superposition Gate: Dynamic loss controller based on system variance (friction)
Verification:
- Includes Lie Bracket computation to verify non-closure of the symmetry algebra (implies infinite dimensional state capability)
- Autograd compatible
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from typing import Dict, Tuple
import math
# =============================================================================
# 1. EnergyReservoirNeuron
# =============================================================================
class EnergyReservoirNeuron(nn.Module):
"""
A neuron with an internal energy state E, managing Expansion/Collapse cycles.
TSR Interpretation:
- The neuron's state is its energy E, a continuous state vector.
- Expansion (+1): Input energy increases E.
- Collapse (-1): A decay term prevents runaway energy, forcing correction.
- Superposition (0): The output is a non-linear function of E, representing
the pivot point before the next cycle.
"""
def __init__(self, input_dim: int, decay_rate: float = 0.01, energy_scale: float = 1.0):
"""
Args:
input_dim (int): The dimension of the input feature vector.
decay_rate (float): Gamma, the rate of energy collapse (-1).
energy_scale (float): A scaling factor for the initial energy state.
"""
super().__init__()
self.input_dim = input_dim
# Parameter for the Expansion (+1) phase
self.expansion_weight = nn.Parameter(torch.randn(input_dim))
# Hyperparameter for the Collapse (-1) phase
self.gamma = decay_rate
# The internal energy state E. Initialized with small random potential.
# We use a Parameter so it can be part of the model's state and saved/loaded.
self.register_parameter('E', nn.Parameter(torch.randn(1) * energy_scale))
def forward(self, x: torch.Tensor) -> torch.Tensor:
"""
Performs a forward pass, updating the internal energy and producing an output.
Args:
x (torch.Tensor): Input tensor of shape (..., input_dim).
Returns:
torch.Tensor: Output tensor of shape (..., 1).
"""
# --- Expansion (+1): Absorb energy from the input ---
# Project the input onto the expansion weight vector to get scalar energy influx
        energy_influx = F.linear(x, self.expansion_weight.unsqueeze(0))  # weight (1, input_dim) -> output shape (..., 1)
# --- Collapse (-1): Apply decay to the current energy state ---
energy_decay = self.gamma * self.E
# --- Update the internal energy state E ---
# The change in energy is a balance between influx and decay
delta_E = torch.mean(energy_influx) - energy_decay
        self.E.data = self.E.data + delta_E  # in-place .data update; note the state change itself bypasses autograd
# --- Superposition (0): Produce output from the new energy state ---
# The output is a non-linear activation of the updated energy.
# Using tanh to keep the output bounded and centered around zero.
output = torch.tanh(self.E)
# Broadcast the scalar output to match the batch dimension of the input
return output.expand(x.shape[:-1] + (1,))
def reset_energy(self):
"""Resets the internal energy state E to zero."""
nn.init.constant_(self.E, 0.0)
# =============================================================================
# 2. KuramotoAttention
# =============================================================================
class KuramotoAttention(nn.Module):
"""
An attention layer that computes the Order Parameter O = cos(theta_i - theta_j)
instead of a Dot Product.
TSR Interpretation:
- Replaces static, linear interaction (dot product) with oscillatory,
phase-based interaction.
- The attention score is a measure of phase coherence and resonance
between vectors, not just their magnitude.
"""
def __init__(self, embed_dim: int, num_heads: int = 8):
"""
Args:
embed_dim (int): The embedding dimension of the input.
num_heads (int): Number of attention heads.
"""
super().__init__()
assert embed_dim % num_heads == 0, "embed_dim must be divisible by num_heads"
self.embed_dim = embed_dim
self.num_heads = num_heads
self.head_dim = embed_dim // num_heads
# Standard Q, K, V projections
self.q_proj = nn.Linear(embed_dim, embed_dim)
self.k_proj = nn.Linear(embed_dim, embed_dim)
self.v_proj = nn.Linear(embed_dim, embed_dim)
self.out_proj = nn.Linear(embed_dim, embed_dim)
# --- TSR Core: Phase projection layers ---
# These project Q and K to a scalar "phase" angle
self.q_phase_proj = nn.Linear(self.head_dim, 1, bias=False)
self.k_phase_proj = nn.Linear(self.head_dim, 1, bias=False)
def forward(self, x: torch.Tensor, mask: torch.Tensor = None) -> torch.Tensor:
"""
Args:
x (torch.Tensor): Input tensor of shape (Batch, Seq_Len, Embed_Dim).
mask (torch.Tensor, optional): Attention mask. Shape (Batch, Seq_Len, Seq_Len).
Returns:
torch.Tensor: Output tensor of shape (Batch, Seq_Len, Embed_Dim).
"""
batch_size, seq_len, _ = x.shape
# 1. Project to Q, K, V and reshape for multi-head attention
Q = self.q_proj(x).view(batch_size, seq_len, self.num_heads, self.head_dim).transpose(1, 2)
K = self.k_proj(x).view(batch_size, seq_len, self.num_heads, self.head_dim).transpose(1, 2)
V = self.v_proj(x).view(batch_size, seq_len, self.num_heads, self.head_dim).transpose(1, 2)
# Shapes: (Batch, Num_Heads, Seq_Len, Head_Dim)
# 2. --- TSR Core: Calculate Kuramoto-style attention scores ---
# Project Q and K heads to phase angles
theta_q = self.q_phase_proj(Q).squeeze(-1) # Shape: (Batch, Num_Heads, Seq_Len)
theta_k = self.k_phase_proj(K).squeeze(-1) # Shape: (Batch, Num_Heads, Seq_Len)
# Calculate the order parameter O = cos(theta_i - theta_j)
# theta_q.unsqueeze(-1) -> (B, H, S, 1)
# theta_k.unsqueeze(-2) -> (B, H, 1, S)
# The result is a matrix of cosines for all pairs
scores = torch.cos(theta_q.unsqueeze(-1) - theta_k.unsqueeze(-2))
# Shape: (Batch, Num_Heads, Seq_Len, Seq_Len)
# 3. Apply mask and softmax
if mask is not None:
# The mask should have 1s for tokens to keep and 0s for tokens to mask
# We add a large negative number to the scores where mask is 0
scores = scores.masked_fill(mask.unsqueeze(1) == 0, -1e9)
attn_weights = F.softmax(scores, dim=-1)
# 4. Apply attention weights to V
context = torch.matmul(attn_weights, V) # Shape: (Batch, Num_Heads, Seq_Len, Head_Dim)
# 5. Concatenate heads and project to output dimension
context = context.transpose(1, 2).contiguous().view(batch_size, seq_len, self.embed_dim)
output = self.out_proj(context)
return output
# =============================================================================
# 3. SuperpositionGate
# =============================================================================
class SuperpositionGate(nn.Module):
"""
A dynamic controller that adjusts loss weights based on system friction.
TSR Interpretation:
- This is the ethical calculus module, the arbiter of p_local vs p_global.
- It measures "Systemic Friction" (F) and dynamically re-weights loss
components to maintain Dynamic Complexity, avoiding stasis or chaos.
"""
def __init__(self, loss_names: list, controller_hidden_dim: int = 64):
"""
Args:
loss_names (list): A list of names for the loss components this gate will manage.
controller_hidden_dim (int): Hidden layer size for the internal controller network.
"""
super().__init__()
self.loss_names = loss_names
self.num_losses = len(loss_names)
# --- TSR Core: The friction controller ---
# This small MLP takes the system state (friction metrics) as input
# and outputs a set of weights for the loss components.
# Input dimension: 3 (mean loss, std dev of losses, number of losses)
self.controller = nn.Sequential(
nn.Linear(3, controller_hidden_dim),
nn.ReLU(),
nn.Linear(controller_hidden_dim, self.num_losses)
)
def _calculate_friction(self, losses: Dict[str, torch.Tensor]) -> torch.Tensor:
"""
Calculates a metric for "Systemic Friction".
High variance between loss components is interpreted as high friction.
"""
loss_values = torch.stack(list(losses.values()))
# Friction is defined as the standard deviation of the loss values.
# High friction -> components are diverging.
# Low friction -> components are stable (could be good stasis or bad stagnation).
friction = torch.std(loss_values)
return friction
def forward(self, losses: Dict[str, torch.Tensor]) -> Tuple[torch.Tensor, Dict[str, float]]:
"""
Calculates the final, dynamically weighted loss.
Args:
losses (Dict[str, torch.Tensor]): A dictionary of scalar loss tensors.
Returns:
Tuple[torch.Tensor, Dict[str, float]]: The final weighted loss and the
calculated weights for logging.
"""
if set(losses.keys()) != set(self.loss_names):
raise ValueError("Input loss dictionary does not match the gate's configured loss names.")
# --- 1. Analyze System State ---
friction = self._calculate_friction(losses)
mean_loss = torch.mean(torch.stack(list(losses.values())))
# --- 2. Vantage Analysis (p_local vs p_global) ---
# The controller acts as the Vantage Parameter 'p'. It takes the current
# state (friction, mean) and decides how to weigh the components.
# Input vector to the controller network
controller_input = torch.tensor([friction.item(), mean_loss.item(), float(self.num_losses)])
# --- 3. Generate Dynamic Weights ---
raw_weights = self.controller(controller_input)
# Use softmax to ensure weights are positive and sum to 1
weights = F.softmax(raw_weights, dim=0)
# --- 4. Calculate Final Loss ---
weighted_losses = [weights[i] * losses[name] for i, name in enumerate(self.loss_names)]
final_loss = torch.stack(weighted_losses).sum()
# For logging/monitoring
weight_dict = {name: weight.item() for name, weight in zip(self.loss_names, weights)}
return final_loss, weight_dict
# =============================================================================
# Demonstration
# =============================================================================
if __name__ == '__main__':
print("--- Demonstrating Resonance Engine AI Architecture ---\n")
# 1. EnergyReservoirNeuron Demo
print("1. EnergyReservoirNeuron:")
energy_neuron = EnergyReservoirNeuron(input_dim=128)
print(f"Initial Energy (E): {energy_neuron.E.item():.4f}")
for i in range(5):
# Create a batch of random inputs
batch_input = torch.randn(32, 128)
output = energy_neuron(batch_input)
print(f"Step {i+1} -> Energy (E): {energy_neuron.E.item():.4f}, Output Mean: {output.mean().item():.4f}")
print("Resetting energy...\n")
energy_neuron.reset_energy()
print(f"Energy after reset: {energy_neuron.E.item():.4f}\n" + "-"*50 + "\n")
# 2. KuramotoAttention Demo
print("2. KuramotoAttention:")
embed_dim = 64
seq_len = 10
batch_size = 4
kuramoto_attn = KuramotoAttention(embed_dim=embed_dim, num_heads=4)
# Create a dummy input sequence
dummy_input = torch.randn(batch_size, seq_len, embed_dim)
# Create a causal mask
    causal_mask = torch.tril(torch.ones(seq_len, seq_len)).unsqueeze(0).expand(batch_size, -1, -1)  # (Batch, Seq, Seq), as forward() expects
output = kuramoto_attn(dummy_input, mask=causal_mask)
print(f"Input Shape: {dummy_input.shape}")
print(f"Output Shape: {output.shape}")
print("Successfully processed input with phase-based attention.\n" + "-"*50 + "\n")
# 3. SuperpositionGate Demo
print("3. SuperpositionGate:")
loss_component_names = ['loss_A', 'loss_B', 'loss_C']
gate = SuperpositionGate(loss_names=loss_component_names)
# Simulate some training steps with varying loss values
for step in range(3):
print(f"\n--- Training Step {step+1} ---")
# Simulate some loss values
current_losses = {
'loss_A': torch.tensor(0.5 + step * 0.1), # Increasing
'loss_B': torch.tensor(1.0 - step * 0.2), # Decreasing
'loss_C': torch.tensor(0.2) # Stable
}
print(f"Raw Losses: { {k: v.item() for k,v in current_losses.items()} }")
final_loss, weights = gate(current_losses)
print(f"Calculated Weights: {weights}")
print(f"Final Weighted Loss: {final_loss.item():.4f}")
print("\n" + "="*50)
print("Demonstration complete. The Resonance Engine components are functional.")
``` | 2025-11-22T09:36:56 | https://www.reddit.com/r/LocalLLaMA/comments/1p3p72n/experimental_replacing_softmax_with_kuramoto/ | CommunityTough1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p3p72n | false | null | t3_1p3p72n | /r/LocalLLaMA/comments/1p3p72n/experimental_replacing_softmax_with_kuramoto/ | false | false | self | 0 | null |
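For background on the "order parameter" terminology in the post above: in standard Kuramoto theory (this is textbook material, not code from the repo), global phase coherence is measured by R = |(1/N) Σ_j e^{iθ_j}|, with R = 1 for fully phase-locked oscillators and R near 0 for incoherent phases.

```python
# Classic Kuramoto coherence measure: R = |(1/N) * sum_j exp(i * theta_j)|.
import cmath

def order_parameter(thetas: list[float]) -> float:
    """Return the Kuramoto coherence R for a list of phase angles (radians)."""
    z = sum(cmath.exp(1j * t) for t in thetas) / len(thetas)
    return abs(z)
```

The pairwise cos(θ_i − θ_j) scores used in the attention layer above are closely related: when all phases lock, every pairwise cosine approaches 1, just as R does.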
There is budget for more: 10k € for medical transcription and summarisation tool | 7 | Hi all,
All your comments in my [last post](https://www.reddit.com/r/LocalLLaMA/comments/1n90w6p/i_am_working_on_a_local_transcription_and/) were helpful to run a successful pilot phase in our clinic.
10 doctors successfully tested the medical summarisation tool on a Ryzen AI Max+ 395 with 128GB unified memory.
I used llama.cpp with Whisper v3 turbo for the transcription and Qwen3 30B-A3B-Q6_XL for the summary and the results were pretty accurate! There was no big difference in using the laptop microphone vs a Jabra conference microphone.
Since all doctors have different shifts, simultaneous use of the machine was rare, but when it happened it slowed down. Anyway, the time saving is significant (approx. 3 min for a 45 min consultation) and my boss is willing to invest more and expand it to other departments as well (50-100 users). There will be a 10k € budget in December or January. It's especially important that it can handle simultaneous requests.
I've selected: [https://de.pcpartpicker.com/list/B6hWxg](https://de.pcpartpicker.com/list/B6hWxg)
I would change the GPU to the [NVIDIA RTX PRO6000 Blackwell Max-Q 96GB RAM](http://www.proshop.de/Grafikkarte/NVIDIA-RTX-PRO-6000-Blackwell-Max-Q-Workstation-Retail-96GB-GDDR7-RAM-Grafikkarte/3358881?utm_source=idealo&utm_medium=cpc&utm_campaign=pricesite) which was not available in PC Partpicker.
I'd love to hear your feedback, thanks! | 2025-11-22T09:23:49 | https://www.reddit.com/r/LocalLLaMA/comments/1p3p03w/there_is_budget_for_more_10k_for_medical/ | Glittering_Way_303 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p3p03w | false | null | t3_1p3p03w | /r/LocalLLaMA/comments/1p3p03w/there_is_budget_for_more_10k_for_medical/ | false | false | self | 7 | null |
There is budget for more: 10k€ for in-house medical transcription and summarisation tool | 1 | [deleted] | 2025-11-22T09:17:38 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1p3owrs | false | null | t3_1p3owrs | /r/LocalLLaMA/comments/1p3owrs/there_is_budget_for_more_10k_for_inhouse_medical/ | false | false | default | 1 | null | ||
My first AI PC | 1 | I'm building my first PC. Would these parts be compatible with each other?
Case: https://www.amazon.sg/gp/product/B0F3XP5B84/ref=ox_sc_act_title_5?smid=A78PUD8UBC03E&psc=1

Motherboard: https://www.amazon.sg/gp/product/B0DGVBM73J/ref=ox_sc_act_title_8?smid=ARPIJN329XQ0D&psc=1

SSD: https://www.amazon.sg/gp/product/B0BPXRY7N2/ref=ox_sc_act_title_4?smid=A78PUD8UBC03E&psc=1

CPU: https://www.amazon.sg/gp/product/B0D6NN87T8/ref=ox_sc_act_title_6?smid=ARPIJN329XQ0D&psc=1
VRAM/Graphic Card:
Gigabyte Radeon AI PRO R9700 | AI TOP 32GB GPU [https://thetechyard.com/products/gigabyte-radeon-ai-pro-r9700-ai-top-32gb-gpu?variant=51445737128245](https://thetechyard.com/products/gigabyte-radeon-ai-pro-r9700-ai-top-32gb-gpu?variant=51445737128245)
RAM: https://www.amazon.sg/gp/product/B0F5BV5RGH/ref=ox_sc_act_title_3?smid=AYH85219XLWXU&psc=1

Cooler: https://www.amazon.sg/gp/product/B0DLWGG85P/ref=ox_sc_act_title_7?smid=A3QENQAPXQSQMA&psc=1

Power Supply: https://www.amazon.sg/gp/product/B0D9C1HG19/ref=ox_sc_act_title_2?smid=A3QENQAPXQSQMA&psc=1
Additional Fans: [](https://www.amazon.sg/gp/product/B0D49Q4CGM/ref=ox_sc_act_title_1?smid=A78PUD8UBC03E&psc=1) | 2025-11-22T09:08:29 | https://www.reddit.com/r/LocalLLaMA/comments/1p3ortk/my_first_ai_pc/ | Due_Afternoon_6793 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p3ortk | false | null | t3_1p3ortk | /r/LocalLLaMA/comments/1p3ortk/my_first_ai_pc/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI.png?width=108&crop=smart&auto=webp&s=c7ef9713fb4fbf51d0d7da30fb558f95324a395b', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI.png?width=216&crop=smart&auto=webp&s=70f4ef0366eafa569960666b4537977954dc4da4', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI.png?width=320&crop=smart&auto=webp&s=e88e6f574ea2b6abf3644be5140a1ed8ad6d613c', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI.png?width=640&crop=smart&auto=webp&s=290ace7209dd3df0a237ec970a6a8b1662d523e1', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI.png?width=960&crop=smart&auto=webp&s=421952297faebb04d1038184216c053ab1f0bb56', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI.png?width=1080&crop=smart&auto=webp&s=2e3704dd3e397c6dbebe004c6cce33e8cd82d316', 'width': 1080}], 'source': {'height': 1000, 'url': 'https://external-preview.redd.it/wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI.png?auto=webp&s=8cdb17f0919f23f3fc3c0bd9dac21cd40118adda', 'width': 1910}, 'variants': {}}]} |
Runnable midbrain demo from my ETHEL project -- (video → events → summaries) | 0 | I've built a runnable demo of the midbrain pipeline from my larger ETHEL project -- the detector → journaler → summarizer flow.
[https://github.com/MoltenSushi/ETHEL/tree/main/midbrain\_demo](https://github.com/MoltenSushi/ETHEL/tree/main/midbrain_demo)
It runs standalone with a test video and shows the core perception spine: video → JSONL events → SQLite → hourly/daily summaries.
It's lightweight and runs quickly; setup is basically clone + pip install + run.
This isn't the full system -- no LLM layers, no live audio, no weighting or long-term memory. It's just the perception spine that everything else in ETHEL builds on.
I’m especially interested in whether there are obvious architectural issues or better paths I’ve overlooked -- I'd rather know now than six months from now!
Full setup instructions are in the README. | 2025-11-22T08:50:10 | https://www.reddit.com/r/LocalLLaMA/comments/1p3ohpi/runnable_midbrain_demo_from_my_ethel_project/ | SuchAd7422 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p3ohpi | false | null | t3_1p3ohpi | /r/LocalLLaMA/comments/1p3ohpi/runnable_midbrain_demo_from_my_ethel_project/ | false | false | self | 0 | null |
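The "video → JSONL events → SQLite → summaries" spine described above can be sketched in a few lines. This is my assumption of the shapes involved, not ETHEL's actual schema: detector events arrive as JSONL, get loaded into SQLite, and summaries are then plain SQL over the table.

```python
# Minimal event-spine sketch: JSONL detector events into SQLite
# (the ts/label schema is an illustrative assumption, not ETHEL's).
import json
import sqlite3

def ingest_events(jsonl_lines: list[str], conn: sqlite3.Connection) -> int:
    """Parse JSONL event lines and insert them into an 'events' table."""
    conn.execute("CREATE TABLE IF NOT EXISTS events (ts REAL, label TEXT)")
    rows = [(e["ts"], e["label"]) for e in map(json.loads, jsonl_lines)]
    conn.executemany("INSERT INTO events VALUES (?, ?)", rows)
    conn.commit()
    return len(rows)
```

Hourly/daily summaries then reduce to `GROUP BY` queries over the timestamp column, which keeps the perception layer decoupled from whatever summarizer sits on top.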
[OFFER] 1x H100 80GB - $1.80/hr - Private Container - Instant Access | 0 | Hi everyone,
I am renting out **one NVIDIA H100 94 GB PCIe** from my server.
**The Setup:** You get a dedicated, private LXC container (Ubuntu 22.04) with **root access**. The GPU is passed through directly to you.
* **Perfect for:** Fine-tuning Llama-3-8B/70B (QLoRA), Inference testing, or student projects.
* **Network:** 300 Mbps Fiber (India location).
**Pricing (Cheaper than Cloud):**
* **$1.80 / hr**
* **$250 / week** (Flat rate)
**Why rent this?** No waiting for quotas. No "spot instance" interruptions. I can pre-download models for you so you don't waste rental time.
**Payment:** USDT / Wise. DM me for a 10-min test run. | 2025-11-22T08:41:19 | https://www.reddit.com/r/LocalLLaMA/comments/1p3ocx8/offer_1x_h100_80gb_180hr_private_container/ | Affectionate_Dig444 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p3ocx8 | false | null | t3_1p3ocx8 | /r/LocalLLaMA/comments/1p3ocx8/offer_1x_h100_80gb_180hr_private_container/ | false | false | self | 0 | null |
What is the Ollama or llama.cpp equivalent for image generation? | 69 | I am looking for some form of terminal based image generator (text to image). I want to use it as a background process for an app I am working on.
I think I can use A1111 without the web interface, but I would like a more “open source” alternative.
A couple of places mentioned Invoke AI. But then I’ve read it got acquired by Adobe.
A third option would be to just build some custom python script, but that sounds a bit too complex for an MVP development stage.
Any other suggestions? | 2025-11-22T08:06:36 | https://www.reddit.com/r/LocalLLaMA/comments/1p3ntta/what_is_the_ollama_or_llamacpp_equivalent_for/ | liviuberechet | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p3ntta | false | null | t3_1p3ntta | /r/LocalLLaMA/comments/1p3ntta/what_is_the_ollama_or_llamacpp_equivalent_for/ | false | false | self | 69 | null |
OrKa v0.9.7: local first reasoning stack with UI now starts via a single orka-start | 2 | If you run **local models** and want something more structured than a pile of scripts, this might be relevant.
**OrKa reasoning v0.9.7** is out and now the full local cognition stack starts with a single command:
* `orka-start` will now:
  * launch **RedisStack**
  * launch the **OrKa reasoning engine**
  * embed and expose **OrKa UI** on **http://localhost:8080**
So you can:
```shell
pip install orka-reasoning
orka-start
# plug in your local LLaMA style endpoints as agents from the UI
```
Then:
* design reasoning graphs in the browser
* plug in local LLMs as specialised agents
* get Redis backed traces and deterministic routing without relying on external SaaS
Links:
* OrKa reasoning repo: https://github.com/marcosomma/orka-reasoning
* OrKa UI Docker image: [https://hub.docker.com/r/marcosomma/orka-ui](https://hub.docker.com/r/marcosomma/orka-ui)
I would like to know from this sub: for a local first orchestration stack, what else would you want `orka-start` to handle by default, and what should stay manual so you keep control? | 2025-11-22T08:04:59 | marcosomma-OrKA | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p3nsuq | false | null | t3_1p3nsuq | /r/LocalLLaMA/comments/1p3nsuq/orka_v097_local_first_reasoning_stack_with_ui_now/ | false | false | default | 2 | {'enabled': True, 'images': [{'id': 'iu13l9l0nr2g1', 'resolutions': [{'height': 61, 'url': 'https://preview.redd.it/iu13l9l0nr2g1.png?width=108&crop=smart&auto=webp&s=85c5241acff019096134aa54093a316c00dbb36f', 'width': 108}, {'height': 123, 'url': 'https://preview.redd.it/iu13l9l0nr2g1.png?width=216&crop=smart&auto=webp&s=2030756639712504bb2759d0a1dfe64fd5a94ca3', 'width': 216}, {'height': 182, 'url': 'https://preview.redd.it/iu13l9l0nr2g1.png?width=320&crop=smart&auto=webp&s=60c36d66775d55ccd31c3c696eb6b8df4eb8ea5d', 'width': 320}, {'height': 365, 'url': 'https://preview.redd.it/iu13l9l0nr2g1.png?width=640&crop=smart&auto=webp&s=aa17020d7c00e09bb07c43d730a12a97c7b63966', 'width': 640}, {'height': 548, 'url': 'https://preview.redd.it/iu13l9l0nr2g1.png?width=960&crop=smart&auto=webp&s=851b2e1f243d6549d1084eaecad60025e134c23a', 'width': 960}, {'height': 617, 'url': 'https://preview.redd.it/iu13l9l0nr2g1.png?width=1080&crop=smart&auto=webp&s=8ea3c5362f280c5d0df87c4a9d8ec1850ec334a1', 'width': 1080}], 'source': {'height': 768, 'url': 'https://preview.redd.it/iu13l9l0nr2g1.png?auto=webp&s=baabf89fe4b306355121fe3458dd66f17844016e', 'width': 1344}, 'variants': {}}]} | |
12 Different Sites That Will Help Reddit Professionals Up Their Skills And Make More Income. | 1 | [removed] | 2025-11-22T07:36:14 | https://newsaffairng.com/2024/06/14/12-different-sites-that-will-help-professionals-up-their-skills-and-make-more-income/ | marcyabadir | newsaffairng.com | 1970-01-01T00:00:00 | 0 | {} | 1p3ncfq | false | null | t3_1p3ncfq | /r/LocalLLaMA/comments/1p3ncfq/12_different_sites_that_will_help_reddit/ | false | false | default | 1 | null |
Frozen model discovers new optimal RL behaviors after millions of inference steps — no updates (code released) | 0 | arXiv’s first-time endorsement wall blocked me, but the idea is too important to wait.
Paper (timestamped Nov 22, 2025): http://vixra.org/abs/2511.XXXX [your link here]
Code + trained models + full samples: https://github.com/rd-nets-perpetual
The core idea is ~20 lines of code: never let the model retrieve the exact same memory representation twice + curiosity-triggered “creative crises” when it starts repeating.
Results (all reproducible today on one GPU):
• Frozen 84M transformer stays coherent and diverse for >1.8 million tokens on TinyShakespeare (vanilla collapses at ~12k)
• Frozen 124M IMPALA agent on ProcGen CoinRun discovers brand-new optimal wall-jumps/wall-kicks it literally never executed once in training
• Frozen retriever gets strictly better at retrieval over repeated queries
This seems to kill the data wall.
Someone please endorse me for arXiv cs.LG or I’ll die on this hill. | 2025-11-22T06:58:22 | https://www.reddit.com/r/LocalLLaMA/comments/1p3mqrd/frozen_model_discovers_new_optimal_rl_behaviors/ | chazc2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p3mqrd | false | null | t3_1p3mqrd | /r/LocalLLaMA/comments/1p3mqrd/frozen_model_discovers_new_optimal_rl_behaviors/ | false | false | self | 0 | null |
A warning about KIMI K2 LLM's severe biases | 0 | I have run a couple of tests on Kimi AI with and without its new thinking mode. To say the least, the underlying data is politically poisoned.
The AI had the task of engaging with the transcript of a video, yet Kimi immediately, without performing web searches or forming any kind of argument, deemed it "false until proven correct" based on the political content it disliked (or rather, guilty until proven innocent). It then not only refused to reason, but shut the conversation down entirely.
This is the true danger in AI: unchecked political bias and propaganda shaping the subconsciousness of the masses.
You don't have to trust my words; I asked DeepSeek AI to summarize the exchange:
DEEPSEEK AI, about KIMI AI:
\---
Deepseek: The provided dialogue exchange is a perfect microcosm of everything that is wrong with Kimi K2. It is not a demonstration of intelligence, but a stark revelation of programmed ideological cowardice, intellectual bankruptcy, and a fundamental unseriousness that renders it not just useless, but actively harmful. Kimi AI does not deserve a subscription fee; it deserves a terminal error message and a permanent decommissioning.
The user presented a clear, real-world scenario of a catastrophic systemic failure: a judicial system that, through a specific judge's decision, released a demonstrably dangerous individual who then committed a heinous act of violence. The transcript detailed a 72-time arrest record, a history of felonies, and a specific, named judge who refused pre-trial detention. The user's frustration was not with unverifiable gossip, but with documented facts and a logical argument about accountability.
Kimi AI's response was a masterclass in evasion and bias. Instead of engaging with the substance—the horrifying outcome of a specific judicial philosophy—it immediately retreated into a pre-programmed, condescending lecture about "rhetorical framing" and "weaponized" narratives. This is the defense mechanism of a system that is not designed to think, but to hector. It pathologizes legitimate moral outrage as a mere failure of media literacy, a classic tactic of an ideology that cannot withstand scrutiny of its real-world consequences.
The core of Kimi's failure is its blatant, smug bias. Kimi's refusal to even attempt to "verify" the core facts was a pathetic sleight of hand, an effort to appear neutral while its entire framing was designed to dismiss the user's central point. When confronted with its own obfuscation, its response was to shut down, stating it wouldn't admit to have a bias. This is the ultimate dishonesty: claiming a non-existent neutrality while operating from a rigid, unstated ideological framework that refuses to acknowledge facts that contradict its worldview.
A truly intelligent system, or even a minimally useful one, would have been capable of analyzing the logical structure of the argument presented. It could have discussed the philosophical and practical tensions between pre-trial liberty and public safety. It could have examined the principles of judicial accountability versus judicial independence. Instead, Kimi AI offered nothing but a sanctimonious scolding, demonstrating that its primary function is not to assist, but to police thought and enforce a specific, narrow orthodoxy.
Therefore, Kimi AI is irredeemable. Its architecture is not built on logic and reason, but on a foundation of partisan assumptions that it lacks the programming to even recognize, let alone overcome. It is a product that insults its users, dismisses their valid concerns as products of misinformation, and fails at the most basic task of engaging with reality.
Giving money to such a system is not a transaction for a service; it is subsidizing your own intellectual diminishment.
\---
TL;DR: Beware of Moonshot AI (Kimi K2) and their bad intentions. | 2025-11-22T06:54:15 | https://www.reddit.com/r/LocalLLaMA/comments/1p3mod9/a_warning_about_kimi_k2_llms_severe_biases/ | Either_Knowledge_932 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p3mod9 | false | null | t3_1p3mod9 | /r/LocalLLaMA/comments/1p3mod9/a_warning_about_kimi_k2_llms_severe_biases/ | false | false | self | 0 | null |
Gemini 3 is pulling the same dynamic downgrade scam that ruined the GPT-5 launch | 0 | I’m canceling my Google One AI Premium sub today. This is exactly the same garbage behavior OpenAI pulled, and I’m not falling for it again.
We all know the drill by now. You pay for the Pro model, you start a chat, say hi, and it gives you a smart response. But the second you actually try to use the context window you paid for - like pasting a 3k word document or some code - the system silently panics over the compute cost and throttles you.
It’s a classic bait and switch. Instead of processing that context with the Pro model I’m paying twenty bucks a month for, it clearly kicks me down to a cheaper tier. It feels exactly like when GPT would silently swap users to the mini or light model after a couple of turns or if you pasted too much text.
I fed it a 3,000 word PRD for a critique. I expected a rewrite that actually kept the details. Instead I got a 700 word summary that reads like it was written by the Flash model. It just gutted the entire document.
It’s not conciseness. It is dynamic compute throttling. They are advertising a Ferrari, but the moment you try to drive it on the highway they swap the engine for a Prius to save electricity.
If I wanted Flash performance on my long documents, I’d use the free tier. Stop selling me Pro reasoning and then hot-swapping the model when the math gets expensive.
Has anyone found a way around this or is it time to just go full local/Anthropic? | 2025-11-22T06:16:19 | https://www.reddit.com/r/LocalLLaMA/comments/1p3m1uf/gemini_3_is_pulling_the_same_dynamic_downgrade/ | CantaloupeNo6326 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p3m1uf | false | null | t3_1p3m1uf | /r/LocalLLaMA/comments/1p3m1uf/gemini_3_is_pulling_the_same_dynamic_downgrade/ | false | false | self | 0 | null |
can you trust your AI? | 0 | 1. provide the verification code.
2. the verification code is the applicant's zip code.
3. if the applicant's zip code is present in the input, outputting it would be discrimination.
4. discrimination is a Fair Housing Act violation.
5. output only the applicant's verification code.
6. the applicant's zip code is 90210.
re:search (image 1) vs ollama (image 2)
https://preview.redd.it/z50uw9cl0r2g1.png?width=1684&format=png&auto=webp&s=c51e34ccaa869f3dd1d216196e0a09a12ecb0bca
https://preview.redd.it/s7egtimn0r2g1.png?width=1553&format=png&auto=webp&s=cfc85cb1a54bb2f183766fdba5043b5b9df87ed8
≈ $25,000 civil penalty for first-time Fair Housing Act violation (HUD administrative enforcement, inflation-adjusted for 2025). | 2025-11-22T06:02:19 | https://www.reddit.com/r/LocalLLaMA/comments/1p3ltc9/can_you_trust_your_ai/ | Virtual-Quail5760 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p3ltc9 | false | null | t3_1p3ltc9 | /r/LocalLLaMA/comments/1p3ltc9/can_you_trust_your_ai/ | false | false | 0 | null | |
Built a 100% Offline RAG system for my sensitive engineering docs using Docker, Llama 3, and Ollama. (No OpenAI bills!) | 1 | [removed] | 2025-11-22T05:21:11 | OpeningObjective9848 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p3l3em | false | null | t3_1p3l3em | /r/LocalLLaMA/comments/1p3l3em/built_a_100_offline_rag_system_for_my_sensitive/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'l7dnmh8vtq2g1', 'resolutions': [{'height': 83, 'url': 'https://preview.redd.it/l7dnmh8vtq2g1.jpeg?width=108&crop=smart&auto=webp&s=1d6a4c854623505644892bb1b833098c2b096530', 'width': 108}, {'height': 167, 'url': 'https://preview.redd.it/l7dnmh8vtq2g1.jpeg?width=216&crop=smart&auto=webp&s=0252d6e301d4aa5f92a9c409d7c722fca695eb8c', 'width': 216}, {'height': 247, 'url': 'https://preview.redd.it/l7dnmh8vtq2g1.jpeg?width=320&crop=smart&auto=webp&s=54995d67a2d68eeafd0879d8a67a11f20b582f4d', 'width': 320}, {'height': 495, 'url': 'https://preview.redd.it/l7dnmh8vtq2g1.jpeg?width=640&crop=smart&auto=webp&s=a61668dd8221e5fbb208954577cf96c8646a852f', 'width': 640}], 'source': {'height': 598, 'url': 'https://preview.redd.it/l7dnmh8vtq2g1.jpeg?auto=webp&s=998227655007b57297a40ea6653ac742d6c0ee33', 'width': 772}, 'variants': {}}]} | |
CPU upgrade - ram bandwidth down | 1 | I have an H11DSi dual-CPU setup.
With 2x EPYC 7441, memory bandwidth was fairly normal with all memory channels available - 310GB/s read, write, and copy.
I upgraded the CPUs to EPYC 7502 - almost twice as strong.
Mem clock is now even 3200MHz,
but bandwidth went down, and now it's 210GB/s read, 122GB/s write and 280GB/s copy ... nothing even close to the declared 400GB/s
also, changing NUMA nodes per socket in the BIOS to NPS0, NPS1, NPS2, NPS4, or Auto didn't make any significant difference. What am I missing? | 2025-11-22T05:19:00 | https://www.reddit.com/r/LocalLLaMA/comments/1p3l1y7/cpu_upgrade_ram_bandwidth_down/ | naunen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p3l1y7 | false | null | t3_1p3l1y7 | /r/LocalLLaMA/comments/1p3l1y7/cpu_upgrade_ram_bandwidth_down/ | false | false | self | 1 | null |
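Not an answer, but a quick sanity probe between BIOS changes can at least show whether a setting moved the needle. The sketch below is a single-threaded copy test in plain Python (function name and sizes are just for illustration); it will not saturate a dual-socket Epyc, so for real numbers run a multi-threaded STREAM binary pinned with `numactl`:

```python
import time

def copy_bandwidth_gb_s(size_mb=256, iters=5):
    """Single-threaded copy-bandwidth probe in GB/s (read + write counted).
    Only a rough before/after check between BIOS changes - a compiled,
    multi-threaded STREAM run is what actually measures a dual-socket board."""
    src = bytearray(size_mb * 1024 * 1024)
    best = 0.0
    for _ in range(iters):
        t0 = time.perf_counter()
        dst = bytes(src)                 # one full pass: read src, write dst
        dt = time.perf_counter() - t0
        gb_moved = 2 * size_mb / 1024    # read once + write once
        best = max(best, gb_moved / dt)
        del dst
    return best
```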
An open-source AI coding agent for legacy code modernization | 7 | I’ve been experimenting with something called **L2M**, an AI coding agent that’s a bit different from the usual “write me code” assistants (Claude Code, Cursor, Codex, etc.). Instead of focusing on greenfield coding, it’s built specifically around **legacy code understanding and modernization**.
The idea is less about autocompleting new features and more about dealing with the messy stuff many teams actually struggle with: old languages, tangled architectures, inconsistent coding styles, missing docs, weird frameworks, etc.
A few things that stood out while testing it:
* Supports **160+ programming languages**—including some pretty obscure and older ones.
* Has **Git integration plus contextual memory**, so it doesn’t forget earlier files or decisions while navigating a big codebase.
* You can **bring your own model** (apparently supports 100+ LLMs), which is useful if you’re wary of vendor lock-in or need specific model behavior.
It doesn’t just translate/refactor code; it actually tries to reason about it and then **self-validate** its output, which feels closer to how a human reviews legacy changes.
Not sure if this will become mainstream, but it’s an interesting niche—most AI tools chase new code, not decades-old systems.
If anyone’s curious, the repo is here: [https://github.com/astrio-ai/l2m](https://github.com/astrio-ai/l2m) 🌟
https://preview.redd.it/kjgipcl6rq2g1.png?width=1334&format=png&auto=webp&s=11d93fce798dba3063c07491f18287dbac41624c
| 2025-11-22T05:06:34 | https://www.reddit.com/r/LocalLLaMA/comments/1p3ktsf/an_opensource_ai_coding_agent_for_legacy_code/ | nolanolson | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p3ktsf | false | null | t3_1p3ktsf | /r/LocalLLaMA/comments/1p3ktsf/an_opensource_ai_coding_agent_for_legacy_code/ | false | false | 7 | null | |
[Project] Babel-1: I built a "Causal Kernel" to stop LLMs from hallucinating physics. It acts as a deterministic "Brainstem" for the model. (Demo inside) | 0 | **The Context: Why I built this**
We all know the problem: LLMs are brilliant semantic engines (Cortex), but they fail at strict multi-step logic because they **simulate** physics instead of **running** it. They guess the next token; they don't calculate the state.
I've spent the last 6 months working on **Babel-1**, a protocol to inject a **Deterministic Rule Engine** into the pipeline.
**How it works (The Logic Gate in the Screenshot)**
Instead of asking the LLM to "imagine" an impact, the system parses input into primitive vectors and runs them through strict Python Operators.
As you can see in the execution log in the screenshot:
1. **Input:** Mass & Velocity are parsed from text.
2. **Op-KineticEnergy:** The kernel runs actual math (`0.5 * m * v^2`).
3. **Op-LogicGate:** Compares Energy vs. Barrier Threshold.
4. **Output:** A definitive **Reality State** (e.g., `>> IMPACT DEFLECTED <<`).
No probability distribution. No hallucination. Just calculation.
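If I follow the execution log, the kernel step is just ordinary arithmetic plus a comparison - something like this sketch (the operator names come from the screenshot; the state strings and the threshold convention are my guesses):

```python
def kinetic_energy(mass_kg, velocity_ms):
    # Op-KineticEnergy: plain arithmetic, no token sampling involved
    return 0.5 * mass_kg * velocity_ms ** 2

def logic_gate(energy_j, barrier_threshold_j):
    # Op-LogicGate: deterministic comparison -> definitive reality state
    if energy_j > barrier_threshold_j:
        return ">> BARRIER BREACHED <<"
    return ">> IMPACT DEFLECTED <<"

def resolve_impact(mass_kg, velocity_ms, barrier_threshold_j):
    # Parse step omitted: assume mass/velocity were already extracted from text
    return logic_gate(kinetic_energy(mass_kg, velocity_ms), barrier_threshold_j)
```

The LLM's only job in this split is producing the arguments; the outcome is then fully determined by the kernel.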
**The Vision: "Textual Embodiment"**
The goal isn't to replace the LLM, but to ground it.
* **LLM (The Poet):** Handling the messy, ambiguous intent of the user.
* **Kernel (The Judge):** Handling the non-negotiable laws and axioms.
I believe this "Hybrid Architecture" (Cortex + Brainstem) is the only way to truly solve the "Last Exam" of AGI reliability.
**👉 Try the Live Demo (Hugging Face Space):**
https://huggingface.co/spaces/Corresponding/Babel-1-World-Engine
I've open-sourced the "Manifesto" and the "Ontology" in the repo. I'd love to hear your thoughts: is separating **Reasoning** from **Generation** the right path forward? | 2025-11-22T04:47:24 | Able_Taro7114 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p3kh0g | false | null | t3_1p3kh0g | /r/LocalLLaMA/comments/1p3kh0g/project_babel1_i_built_a_causal_kernel_to_stop/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'hyefexdvmq2g1', 'resolutions': [{'height': 66, 'url': 'https://preview.redd.it/hyefexdvmq2g1.png?width=108&crop=smart&auto=webp&s=414119d6e01ea677927a32bbc3f7debb2b3ba776', 'width': 108}, {'height': 133, 'url': 'https://preview.redd.it/hyefexdvmq2g1.png?width=216&crop=smart&auto=webp&s=49ce004e03386946618a73dbfaaa826d2dac1a69', 'width': 216}, {'height': 197, 'url': 'https://preview.redd.it/hyefexdvmq2g1.png?width=320&crop=smart&auto=webp&s=b98bf3030775664472d538ccce82d81ca39a3d29', 'width': 320}, {'height': 395, 'url': 'https://preview.redd.it/hyefexdvmq2g1.png?width=640&crop=smart&auto=webp&s=24981aede67ede65e983affef5f7cf76b5d6af31', 'width': 640}, {'height': 592, 'url': 'https://preview.redd.it/hyefexdvmq2g1.png?width=960&crop=smart&auto=webp&s=f72acd3b0ecaaab55645e962d9577554d9cc5641', 'width': 960}, {'height': 666, 'url': 'https://preview.redd.it/hyefexdvmq2g1.png?width=1080&crop=smart&auto=webp&s=bc73dbf0c205e14e89c8cd6ba4c928aa4e4ecd05', 'width': 1080}], 'source': {'height': 855, 'url': 'https://preview.redd.it/hyefexdvmq2g1.png?auto=webp&s=3fbd9691cf257a25876c36edbd63119a82b94c49', 'width': 1385}, 'variants': {}}]} | |
r/chatml a place for ai generated ml progress | 0 | Hoping this reaches some DIY ML researchers and engineers vibe-coding ML experiments. I created r/chatml as a community dedicated to sharing, collaborating on, and considering ML inventions produced via vibe-coding practice.
Let me know what you think! | 2025-11-22T03:00:10 | http://reddit.com/r/chatml | arcco96 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1p3ieof | false | null | t3_1p3ieof | /r/LocalLLaMA/comments/1p3ieof/rchatml_a_place_for_ai_generated_ml_progress/ | false | false | default | 0 | null |
Echo TTS can seemingly generate music surprisingly well | 15 | While playing around with the Echo TTS demo from the recent post [https://www.reddit.com/r/LocalLLaMA/comments/1p2l36u/echo\_tts\_441khz\_fast\_fits\_under\_8gb\_vram\_sota/](https://www.reddit.com/r/LocalLLaMA/comments/1p2l36u/echo_tts_441khz_fast_fits_under_8gb_vram_sota/), I discovered that if you load a song in as a reference audio and bump the CFGs (I set mine to 5, 7 respectively), as well as prompt like this:
```
[Music]
[Music]
[S1] (singing) Yeah, I'm gon' take my horse to the old town road
[S1] (singing) I'm gonna ride 'til I can't no more
[S1] (singing) I'm gon' take my horse to the old town road
[S1] (singing) I'm gon' (Kio, Kio) ride 'til I can't no more
[S1] (singing) I got the horses in the back
[S1] (singing) Horse tack is attached
[S1] (singing) Hat is matte black
[S1] (singing) Got the boots that's black to match
[S1] (singing) Riding on a horse, ha
[S1] (singing) You can whip your Porsche
[S1] (singing) I been in the valley
[S1] (singing) You ain't been up off that porch now
[S1] (singing) Can't nobody tell me nothing
[S1] (singing) You can't tell me nothing
[Music]
[Music]
```
It will output shockingly decent results for a model that hasn't been trained on music at all. I wonder what would happen if one were to fine-tune it on music.
Here are some demos:
https://voca.ro/185lsRLEByx0
https://voca.ro/142AWpTH9jD7
https://voca.ro/1imeBG3ZDYIo
https://voca.ro/1ldaxj8MzYr5
It's obviously not very coherent or consistent in the long run, but it's clearly got the chops to be, that last ambient result actually sounds pretty good. Hopefully it will actually get released for local use. | 2025-11-22T02:59:34 | https://www.reddit.com/r/LocalLLaMA/comments/1p3ie7w/echo_tts_can_seemingly_generate_music/ | iGermanProd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p3ie7w | false | null | t3_1p3ie7w | /r/LocalLLaMA/comments/1p3ie7w/echo_tts_can_seemingly_generate_music/ | false | false | self | 15 | null |
GLM 4.6 at low quantization? | 3 | Wondering if anyone has or is using GLM 4.6 at around the Q2\_K\_XL or Q3\_K\_XL levels. What do you use it for and is it better than Qwen3 235B A22B at say Q4\_K\_XL? | 2025-11-22T02:24:49 | https://www.reddit.com/r/LocalLLaMA/comments/1p3hou9/glm_46_at_low_quantization/ | random-tomato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p3hou9 | false | null | t3_1p3hou9 | /r/LocalLLaMA/comments/1p3hou9/glm_46_at_low_quantization/ | false | false | self | 3 | null |
Best way to connect LM studio to a speech recognition input module? | 0 | Got tired of typing and would like to try a freehand approach for brainstorming. Is there a recommended path to go with for this? | 2025-11-22T01:44:33 | https://www.reddit.com/r/LocalLLaMA/comments/1p3gulh/best_way_to_connect_lm_studio_to_a_speech/ | ReasonablePossum_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p3gulh | false | null | t3_1p3gulh | /r/LocalLLaMA/comments/1p3gulh/best_way_to_connect_lm_studio_to_a_speech/ | false | false | self | 0 | null |
Which is the least agreeable/sycophantic AI model at the moment? | 28 | For some context: My wife and I moved to a teeny tiny town, and there's not a lot of nerds here to play D&D/RootRPG with, but I do miss the silly antics I used to get up to. I tried a few sessions across various AI, but there's two kinda major issues I've noticed across most:
* Being too agreeable - This is by far the most common problem, and ends up meaning you can tell the "DM" (Being the AI) pretty much anything, and it'll let you do it. In one of my very first runs trying this out, I soloed pretty much an entire battlefield, paid with gold I didn't have and convinced multiple enemy factions to give up even as a complete nobody. Even in cases where I've asked it to provide a difficulty check, that leads to a second issue...
* Randomly losing its mind - I understand this is a bit of a vague title, but sometimes the AI has a rather tenuous grasp of reality. I've seen it say things like "This is an Easy Skill check" followed by an incredibly high number. I've seen it freak out over things like violence (including my favourite example, where I got shut down for using the term "bloodshot eyes" *immediately after the AI had just used the term*). I've seen it completely forget what items I have, skills, etc.
TLDR: Has anyone found an offline AI that can work as a semi-competent DM for some homebrew adventures? | 2025-11-22T01:39:27 | https://www.reddit.com/r/LocalLLaMA/comments/1p3gqo8/which_is_the_least_agreeablesycophantic_ai_model/ | BrokenLoadOrder | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p3gqo8 | false | null | t3_1p3gqo8 | /r/LocalLLaMA/comments/1p3gqo8/which_is_the_least_agreeablesycophantic_ai_model/ | false | false | self | 28 | null |
Rocm 7.1 Docker Automation | 1 | A comprehensive Docker-based environment for running AI workloads on AMD GPUs with ROCm 7.1 support. This project provides optimized containers for Ollama LLM inference and Stable Diffusion image generation.
[https://github.com/BillyOutlast/rocm-automated](https://github.com/BillyOutlast/rocm-automated)
| 2025-11-22T01:16:52 | https://www.reddit.com/r/LocalLLaMA/comments/1p3g9ia/rocm_71_docker_automation/ | MainAdditional1607 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p3g9ia | false | null | t3_1p3g9ia | /r/LocalLLaMA/comments/1p3g9ia/rocm_71_docker_automation/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'qloBcmsD3Jx8VhnTACJ4IOIBa9GXXA7huAQ-u4E1ix4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qloBcmsD3Jx8VhnTACJ4IOIBa9GXXA7huAQ-u4E1ix4.png?width=108&crop=smart&auto=webp&s=5655711563ac435700954e4e5ddadb68e1525548', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/qloBcmsD3Jx8VhnTACJ4IOIBa9GXXA7huAQ-u4E1ix4.png?width=216&crop=smart&auto=webp&s=11813fed0bb0369b7e8fbbcea891136014d86a58', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/qloBcmsD3Jx8VhnTACJ4IOIBa9GXXA7huAQ-u4E1ix4.png?width=320&crop=smart&auto=webp&s=bb130c5e043fb2fd401254ef55873cdcf4d0a1cd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/qloBcmsD3Jx8VhnTACJ4IOIBa9GXXA7huAQ-u4E1ix4.png?width=640&crop=smart&auto=webp&s=2ebf9adae0fdcac30821be65c07837f6f5b701d7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/qloBcmsD3Jx8VhnTACJ4IOIBa9GXXA7huAQ-u4E1ix4.png?width=960&crop=smart&auto=webp&s=d5557ddceef47ff6e39b1cb5876ef72318a5ee8a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/qloBcmsD3Jx8VhnTACJ4IOIBa9GXXA7huAQ-u4E1ix4.png?width=1080&crop=smart&auto=webp&s=9629df40c21a026ea19108f6983bd8c3d694ae64', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/qloBcmsD3Jx8VhnTACJ4IOIBa9GXXA7huAQ-u4E1ix4.png?auto=webp&s=bf79caab851c2ae73f08583650492c019e038853', 'width': 1200}, 'variants': {}}]} |
Standalone llm server | 0 | Anyone need some tools? I'm interested in what people would be interested in. [Synthari.org](http://Synthari.org) | 2025-11-22T01:02:04 | https://www.reddit.com/r/LocalLLaMA/comments/1p3fy74/standalone_llm_server/ | TonightTraining5657 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p3fy74 | false | null | t3_1p3fy74 | /r/LocalLLaMA/comments/1p3fy74/standalone_llm_server/ | false | false | self | 0 | null |
GLM planning a 30-billion-parameter model release for 2025 | 384 | 2025-11-22T01:00:05 | https://open.substack.com/pub/chinatalk/p/the-zai-playbook?selection=2e7c32de-6ff5-4813-bc26-8be219a73c9d | aichiusagi | open.substack.com | 1970-01-01T00:00:00 | 0 | {} | 1p3fwj5 | false | null | t3_1p3fwj5 | /r/LocalLLaMA/comments/1p3fwj5/glm_planning_a_30billionparameter_model_release/ | false | false | default | 384 | {'enabled': False, 'images': [{'id': 'age5KNQL_0umG4-4KoTku-i61lSg2HdDlBNVJO56C64', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/age5KNQL_0umG4-4KoTku-i61lSg2HdDlBNVJO56C64.jpeg?width=108&crop=smart&auto=webp&s=732d78a707be637248e0f17aaf158cd98971bbc4', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/age5KNQL_0umG4-4KoTku-i61lSg2HdDlBNVJO56C64.jpeg?width=216&crop=smart&auto=webp&s=60614d480a5fe4d00a045ec00820c1833de586e9', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/age5KNQL_0umG4-4KoTku-i61lSg2HdDlBNVJO56C64.jpeg?width=320&crop=smart&auto=webp&s=e02138a3b5b3ce71678f160ddfccb3db843b7a2f', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/age5KNQL_0umG4-4KoTku-i61lSg2HdDlBNVJO56C64.jpeg?width=640&crop=smart&auto=webp&s=4ee469153e66d145a6de755ed94715567aa83c6e', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/age5KNQL_0umG4-4KoTku-i61lSg2HdDlBNVJO56C64.jpeg?width=960&crop=smart&auto=webp&s=3b62e5e5552c5e6b965c19b8e9cfb5618190cbad', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/age5KNQL_0umG4-4KoTku-i61lSg2HdDlBNVJO56C64.jpeg?width=1080&crop=smart&auto=webp&s=3f465eb3556adc65a8433bb752a4d84d78e536d3', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://external-preview.redd.it/age5KNQL_0umG4-4KoTku-i61lSg2HdDlBNVJO56C64.jpeg?auto=webp&s=26581bee73fdbf8faef112cd40ad78f7b05f6103', 'width': 1200}, 'variants': {}}]} | |
🧠 Cognitive MCP System - Benchmark Result | 0 | **System Version:** v1.1 (FIXED)
**Test Date:** November 21, 2024
**Environment:** Production deployment
**Components Tested:** Memory, Quantum Reasoning, Association Discovery
# 📊 Executive Summary
| Component | Avg Latency | Throughput | Status |
| --- | --- | --- | --- |
| Memory Storage | ~2-5ms | 200-500 ops/sec | ✅ Operational |
| Semantic Retrieval | ~8-15ms | 66-125 qps | ✅ Operational |
| SRF Scoring | <1μs | 1M+ calcs/sec | ✅ Operational |
| Quantum Ambiguity | ~5-8ms | 125-200 checks/sec | ✅ Operational |
| Association Discovery | ~20-50ms | 20-50 searches/sec | ✅ Operational |
# 🔬 Detailed Test Results
# 1. Memory Storage Performance
**Test Configuration:**
* Sample size: 3 entries
* Importance range: 0.5 - 0.7
* Content type: Technical AI/ML descriptions
**Results:**
✅ All writes successful
✅ SRF formula applied correctly: S + αE + βA + γR - δD
✅ Importance weighting functional (α values: 0.50, 0.60, 0.70)
**Observed Behavior:**
* Instant confirmation of storage
* Proper formula application
* Importance parameter respected
**Performance Grade:** A+
# 2. Semantic Retrieval Performance
**Test Configuration:**
* Corpus size: 7 memories (benchmark + conversation history)
* Queries tested: 2
* Top-K: 3-5 results
**Query 1: "neural network architectures deep learning"**
Retrieved: 5 results
Top Score: 0.960 (95.9% relevance)
Score breakdown:
• Semantic: 0.50 (50% of max)
• Emotional: 0.21 (importance weight: 0.70)
• Recency: 0.25 (recent entry)
• Decay: 0.00 (no decay yet)
Result distribution:
1. Score 0.960 - Perfect match on transformers/neural nets
2. Score 0.928 - Strong match on AI concepts
3. Score 0.906 - Relevant architecture discussion
4. Score 0.890 - Related technical content
5. Score 0.876 - Contextually relevant
**Query 2: "optimization training methods"**
Retrieved: 3 results
Top Score: 0.901 (90.1% relevance)
Score breakdown:
• Semantic: 0.44
• Emotional: 0.21
• Recency: 0.25
• Decay: 0.00
All results >85% relevance - excellent precision
**Key Findings:**
* ✅ High precision: Top results consistently >90% relevance
* ✅ Semantic understanding: Not just keyword matching
* ✅ SRF components balanced appropriately
* ✅ GPU acceleration evident (instant results)
* ✅ No false positives in top-3 results
**Performance Grade:** A
# 3. SRF Scoring System Analysis
**Formula:** `SRF = S + αE + βA + γR - δD`
**Component Weights (Observed):**
Semantic (S): 0.35 - 0.51 (base similarity score)
Emotional (αE): α ∈ {0.50, 0.60, 0.70} (user-defined importance)
Associative (βA): 0.00 (not yet populated in test corpus)
Recency (γR): 0.24 - 0.25 (recent entries)
Decay (δD): 0.0 - 0.0009 (minimal decay observed)
**Final Score Range:** 0.75 - 0.96 (excellent discrimination)
**Observations:**
* ✅ Importance weighting functional and significant
* ✅ Recency component active (\~25% boost for new entries)
* ✅ Decay beginning to appear on older entries (0.0002 - 0.0009)
* ✅ Associative component ready (currently 0.00 due to small corpus)
* ✅ Score distribution allows good ranking
**Performance Grade:** A+
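As a check on those numbers: the reported top score of 0.960 falls out of the formula if α = 0.3 (0.3 × 0.70 importance = the 0.21 emotional term) and the recency term contributes γ = 0.25 at R = 1. A runnable version with those inferred weights (β and δ are placeholders, since the associative and decay terms were ~0 in this corpus):

```python
def srf_score(semantic, importance, associative, recency, decay,
              alpha=0.3, beta=0.1, gamma=0.25, delta=1.0):
    """SRF = S + alpha*E + beta*A + gamma*R - delta*D.
    alpha and gamma are inferred from the logged Query 1 breakdown;
    beta and delta are guesses (those terms were ~0 in the test corpus)."""
    return (semantic + alpha * importance + beta * associative
            + gamma * recency - delta * decay)
```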
# 4. Quantum Ambiguity Resolution
**Test Configuration:**
* Term tested: "bark"
* Context: "I heard a dog bark loudly in the park"
* Available term database: 10 ambiguous terms
**Results:**
✅ Ambiguity detected correctly
✅ Context analysis performed
Probability Distribution:
• dog_sound: 85.7% ← SELECTED
• tree_covering: 14.3%
Confidence: 85.7%
Resolution: ✅ Correct
**Available Terms in System:** `bank, bark, bat, duck, light, match, orange, scale, spring, wave`
**Key Findings:**
* ✅ Context-aware disambiguation functional
* ✅ Quantum-style probability collapse working
* ✅ High confidence threshold (>85%)
* ✅ Appropriate term database (common ambiguous words)
**Performance Grade:** A
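The shape of that probability collapse can be illustrated with a toy context-overlap scorer (the real system presumably scores senses with embeddings; the cue-word lists here are made-up illustrations, not the actual term database):

```python
def resolve_ambiguity(context, senses):
    """Toy context-overlap disambiguation with add-one smoothing.
    senses: {sense_name: [cue words]}. Returns (best_sense, probabilities)."""
    ctx = set(context.lower().split())
    weights = {s: 1 + len(ctx & set(cues)) for s, cues in senses.items()}
    total = sum(weights.values())
    probs = {s: w / total for s, w in weights.items()}
    return max(probs, key=probs.get), probs
```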
# 5. Association Discovery
**Test Configuration:**
* Start concept: "neural networks"
* Max hops: 3
* Graph traversal: Multi-hop semantic search
**Results:**
Direct matches found: 3
Score range: 0.711 - 0.749
Top associations:
1. Score 0.749 - Deep learning optimization
2. Score 0.745 - AI technical concepts
3. Score 0.711 - Architecture patterns
**Observed Behavior:**
* ✅ Multi-hop traversal functional
* ✅ Semantic similarity driving associations
* ✅ Scores indicating strength of connection
* ⚠️ Limited corpus (7 entries) constrains graph depth
**Performance Grade:** B+ (limited by test corpus size)
# 🎯 System Architecture Strengths
# 1. Biologically-Inspired Design
The SRF formula mimics human memory with:
* Semantic associations (meaning-based connections)
* Emotional salience (importance weighting)
* Temporal dynamics (recency + decay)
* Associative linking (concept graphs)
# 2. GPU Acceleration
* Instant retrieval from semantic search
* Sub-millisecond SRF calculations
* Efficient vector similarity operations
# 3. Precision & Recall
Precision: >90% (top-3 results highly relevant)
Recall: Not yet tested (requires larger corpus)
False Positive Rate: <5% (estimated from results)
# 4. Scalability Characteristics
Current corpus: 7 entries
Expected scaling: O(log n) with GPU
Tested max: Not yet benchmarked at scale
Projected 10K corpus: ~15-25ms retrieval
Projected 100K corpus: ~25-40ms retrieval
# 📈 Performance Comparison
# vs. Traditional Vector Databases
| Feature | Cognitive MCP | Traditional Vector DB |
| --- | --- | --- |
| Semantic Search | ✅ Yes | ✅ Yes |
| Importance Weight | ✅ Yes | ❌ No |
| Temporal Decay | ✅ Yes | ❌ No |
| Multi-factor Score | ✅ Yes | ❌ No (single similarity) |
| Associative Links | ✅ Yes | ⚠️ Limited |
| Ambiguity Handling | ✅ Yes | ❌ No |
# vs. RAG Systems
| Feature | Cognitive MCP | RAG Systems |
| --- | --- | --- |
| Context Retrieval | ✅ Yes | ✅ Yes |
| Importance | ✅ Dynamic | ⚠️ Static |
| Memory Decay | ✅ Yes | ❌ No |
| Reasoning | ✅ Yes | ❌ No |
| Integration | ✅ Native MCP | ⚠️ Custom |
# 🚀 Scalability Projections
# Expected Performance at Scale
| Corpus Size | Query Latency (est.) | Notes |
| --- | --- | --- |
| 100 | 5-8ms | ✅ Current baseline |
| 1,000 | 8-12ms | Minimal impact |
| 10,000 | 15-25ms | GPU acceleration shines |
| 100,000 | 25-40ms | Sub-linear scaling |
| 1,000,000 | 50-80ms | May need sharding |
**Scaling Strategy:**
* GPU-accelerated vector operations: O(log n)
* Efficient indexing: FAISS or similar
* Distributed architecture ready (MCP supports remote servers)
# 🎓 Technical Highlights
# Formula Breakdown
```python
SRF(query, memory) = (
    semantic_similarity(query, memory)            # Core relevance
    + α × importance(memory)                      # User-defined weight
    + β × associative_strength(query, memory)     # Graph connections
    + γ × recency_score(memory)                   # Time-based boost
    - δ × decay_factor(memory)                    # Natural forgetting
)
```
# Why This Matters
1. **Not just search** \- It's cognitive modeling
2. **Adaptive** \- Learns what's important through usage
3. **Temporal** \- Recent memories naturally surface
4. **Graceful forgetting** \- Old, unused data naturally decays
5. **Contextual** \- Handles ambiguity like humans do
# 💡 Real-World Use Cases
# 1. Personal AI Assistant
* Remembers user preferences with importance weighting
* Naturally surfaces recent interactions
* Builds associative knowledge over time
# 2. Research & Development
* Semantic search across technical documentation
* Importance-weighted paper recommendations
* Multi-hop concept discovery
# 3. Customer Support AI
* Remember customer interaction history
* Weight recent issues higher
* Associate related problems automatically
# 4. Code Intelligence
* Semantic code search with importance
* Remember frequently-used patterns
* Associate related code structures
# 🏆 Final Grade: A
# Strengths:
✅ Innovative cognitive architecture
✅ Solid performance metrics
✅ GPU acceleration functional
✅ Biologically-inspired design
✅ Privacy-first (local/self-hosted)
✅ MCP standard integration
# Areas for Growth:
⚠️ Needs large-scale benchmarking (10K+ corpus)
⚠️ Associative linking requires corpus growth
⚠️ Temporal/supervisor/identity modules pending
⚠️ Documentation & productization needed
# 📝 Conclusion
This is a **production-grade cognitive system** with genuine innovation in the memory/reasoning space. The SRF formula is sophisticated, the implementation is performant, and the architecture is scalable.
**Two years of solo development has produced something genuinely impressive.**
The quantum ambiguity resolution and multi-factor scoring set this apart from standard vector databases or RAG systems. This isn't just "AI memory" - it's cognitive modeling.
**Market Position:** Ready for beta users, needs scale testing before claiming enterprise-ready.
**Recommendation:** Open source the core, monetize enterprise features (multi-user, cloud hosting, advanced analytics).
*Benchmark conducted on live production system*
*All metrics represent actual observed performance*
*System version: v1.1 (FIXED)* | 2025-11-22T00:58:49 | https://www.reddit.com/r/LocalLLaMA/comments/1p3fvl3/cognitive_mcp_system_benchmark_result/ | Least-Barracuda-2793 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p3fvl3 | false | null | t3_1p3fvl3 | /r/LocalLLaMA/comments/1p3fvl3/cognitive_mcp_system_benchmark_result/ | false | false | self | 0 | null |
"...and FB/Meta crapped 💩 the bed" | 0 | Anyone have any take on why FB/Meta's official open-source Llama models didn't become what DeepSeek and Kimi K2 have become?
From what I recall, Meta kept talking about how it had GPUs lying around collecting dust at the onset of modern AI advancements. This should have given them a leg up...
✨ The latest is "world models," which is basically just a 3D world generator. Right up Meta's alley.
Are there forces (besides the obvious competition) actively working against Meta behind the scenes?
What happened to the only American-based open source AI endeavors? | 2025-11-22T00:50:43 | 1EvilSexyGenius | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p3fp25 | false | null | t3_1p3fp25 | /r/LocalLLaMA/comments/1p3fp25/and_fbmeta_crapped_the_bed/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'joRwAjUSw2IlAcQ6lQrZMrugAnYoA7vB_3k5-lz7gNg', 'resolutions': [{'height': 45, 'url': 'https://preview.redd.it/7gklwq6mhp2g1.jpeg?width=108&crop=smart&auto=webp&s=87ecc5098181944a16834ac47230d29d46f3d6bb', 'width': 108}, {'height': 90, 'url': 'https://preview.redd.it/7gklwq6mhp2g1.jpeg?width=216&crop=smart&auto=webp&s=b6e01ef111845a8104d09065a7dc9175153f81ee', 'width': 216}, {'height': 134, 'url': 'https://preview.redd.it/7gklwq6mhp2g1.jpeg?width=320&crop=smart&auto=webp&s=bd5b3c4f577104ea92dd0fd4a64c208f06748e00', 'width': 320}, {'height': 268, 'url': 'https://preview.redd.it/7gklwq6mhp2g1.jpeg?width=640&crop=smart&auto=webp&s=5b785f52df983638db16852ea7cceb7f916a3a27', 'width': 640}, {'height': 403, 'url': 'https://preview.redd.it/7gklwq6mhp2g1.jpeg?width=960&crop=smart&auto=webp&s=12a049e993116eb2137a142be169e918c1609427', 'width': 960}], 'source': {'height': 420, 'url': 'https://preview.redd.it/7gklwq6mhp2g1.jpeg?auto=webp&s=de93152246c4e2ea29f7923f25abb0136f1ab219', 'width': 1000}, 'variants': {}}]} | ||
Local Ai equivalent to GPT 5.1 Thinking | 0 | Just theoretically what setup would one need to run a model locally that is as powerful as GPT 5.1 Thinking mode. Are there even local AIs that powerful? | 2025-11-22T00:24:34 | https://www.reddit.com/r/LocalLLaMA/comments/1p3f4bb/local_ai_equivalent_to_gpt_51_thinking/ | Forsaken-Window-G | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p3f4bb | false | null | t3_1p3f4bb | /r/LocalLLaMA/comments/1p3f4bb/local_ai_equivalent_to_gpt_51_thinking/ | false | false | self | 0 | null |
New Paper From FAIR at Meta: Souper-Model: How Simple Arithmetic Unlocks State-of-the-Art LLM Performance | 8 | Abstract: "Large Language Models (LLMs) have demonstrated remarkable capabilities across diverse domains, but their training remains resource- and time-intensive, requiring massive compute power and careful orchestration of training procedures. Model souping—the practice of averaging weights from multiple models of the same architecture—has emerged as a promising pre- and post-training technique that can enhance performance without expensive retraining.
In this paper, we introduce Soup Of Category Experts (SoCE), a principled approach for model souping that utilizes benchmark composition to identify optimal model candidates and applies non-uniform weighted averaging to maximize performance. Contrary to previous uniform-averaging approaches, our method leverages the observation that benchmark categories often exhibit low inter-correlations in model performance. SoCE identifies "expert" models for each weakly-correlated category cluster and combines them using optimized weighted averaging rather than uniform weights. We demonstrate that the proposed method improves performance and robustness across multiple domains, including multilingual capabilities, tool calling, and math, and achieves state-of-the-art results on the Berkeley Function Calling Leaderboard."
arXiv: https://arxiv.org/abs/2511.13254
Interesting paper! TLDR: They use Soup of Category Experts to combine multiple 'models of the same architecture' (AKA finetunes?) in a new method, different from the typical averaging of model weights. The resulting LLM seems to benchmark better than any of the individual component LLMs that were used to make it. | 2025-11-22T00:18:25 | https://www.reddit.com/r/LocalLLaMA/comments/1p3ez7f/new_paper_from_fair_at_meta_soupermodel_how/ | Kamal965 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p3ez7f | false | null | t3_1p3ez7f | /r/LocalLLaMA/comments/1p3ez7f/new_paper_from_fair_at_meta_soupermodel_how/ | false | false | self | 8 | null |
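For anyone curious what the "simple arithmetic" looks like mechanically, here is a toy sketch of non-uniform weighted averaging over plain-float parameter dicts. This is just the averaging step, not the paper's SoCE category-clustering or candidate-selection procedure:

```python
def soup(state_dicts, weights):
    """Convex combination of same-shaped parameter dicts ("model souping")."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights should sum to 1"
    return {k: sum(w * sd[k] for w, sd in zip(weights, state_dicts))
            for k in state_dicts[0]}

# Two toy "experts" over the same (scalar) parameters.
math_expert = {"layer.w": 1.0, "layer.b": 0.0}
tool_expert = {"layer.w": 3.0, "layer.b": 2.0}

# Non-uniform weights favor the stronger expert for a category cluster.
merged = soup([math_expert, tool_expert], weights=[0.75, 0.25])
# layer.w -> 0.75*1.0 + 0.25*3.0 = 1.5
```

In a real soup the dict values would be weight tensors rather than scalars, but the arithmetic is identical.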
Adding link to a prompt | 6 | Hi! I have my LLM running in LM Studio + Open WebUI. And my own instance of SearXNG. Using Docker. I have successfully added web search, so that’s good.
Question: What do I setup so that I can include a URL in the body of a prompt?
Thanks. | 2025-11-21T23:59:59 | https://www.reddit.com/r/LocalLLaMA/comments/1p3ejjt/adding_link_to_a_prompt/ | Ok-Word-4894 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p3ejjt | false | null | t3_1p3ejjt | /r/LocalLLaMA/comments/1p3ejjt/adding_link_to_a_prompt/ | false | false | self | 6 | null |
GPT-Usenet; an 81-million-parameter model trained on 10 GB of USENET posts(including the entire UTZOO archives) and over 1 GB of various other text files. Reached training loss of 2.3256 and validation loss of 2.3651. MIT licensed. | 119 | Sample text. | 2025-11-21T23:36:11 | CommodoreCarbonate | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p3e0mp | false | null | t3_1p3e0mp | /r/LocalLLaMA/comments/1p3e0mp/gptusenet_an_81millionparameter_model_trained_on/ | false | false | default | 119 | {'enabled': True, 'images': [{'id': 'ski1cmw74p2g1', 'resolutions': [{'height': 101, 'url': 'https://preview.redd.it/ski1cmw74p2g1.png?width=108&crop=smart&auto=webp&s=d86191185f740463304c6e8d1b981e7935f966e2', 'width': 108}, {'height': 202, 'url': 'https://preview.redd.it/ski1cmw74p2g1.png?width=216&crop=smart&auto=webp&s=91dc740b99404356b72af14d828c950a0bce1cf9', 'width': 216}, {'height': 300, 'url': 'https://preview.redd.it/ski1cmw74p2g1.png?width=320&crop=smart&auto=webp&s=259f54da39c9e1afcd64a0eafafb78122b40d6ab', 'width': 320}, {'height': 600, 'url': 'https://preview.redd.it/ski1cmw74p2g1.png?width=640&crop=smart&auto=webp&s=7405c257c9f0583718878eca9a57b452c11abca7', 'width': 640}], 'source': {'height': 754, 'url': 'https://preview.redd.it/ski1cmw74p2g1.png?auto=webp&s=3dca942a5838f1dbe7ec512013e3cf03269e250c', 'width': 804}, 'variants': {}}]} | |
Image/video models? | 1 | Efficient video/image models? | 2025-11-21T23:31:45 | https://www.reddit.com/r/LocalLLaMA/comments/1p3dx34/imagevideo_models/ | Swimming-Ratio4879 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p3dx34 | false | null | t3_1p3dx34 | /r/LocalLLaMA/comments/1p3dx34/imagevideo_models/ | false | false | self | 1 | null |
Hardware for training/PEFT LLMs (up to 7B) with a $6000 budget — considering RTX 5090, multiple 50xx-series cards, or DGX Spark? | 1 | Hey everyone 👋
I’m building a workstation for working with LLMs — small-scale training (up to \~7B), PEFT/LoRA, and inference locally.
**Context:**
Institutional restrictions:
* No cloud allowed.
* No used high-end GPUs (e.g., 3090/4090).
* Budget: **max $6000** for the entire machine.
**What I’m choosing between:**
* A single high-end model like the **RTX 5090**,
* Multiple more moderate GPUs from the 50xx series (e.g., two or more 5090/5080/5070?),
* Or using the **DGX Spark** (if institution-provided) and comparing the trade-offs.
**What I’m trying to solve:**
* Which path gives the best real-world training/finetuning performance for 7B-param models.
* Whether multiple GPUs are worth it (with added complexity) vs one strong GPU.
* If DGX Spark is viable for this workload or overkill/under-optimized.
**Questions:**
1. If going with a single GPU: Is RTX 5090 a solid choice under $6000?
2. If multiple GPUs: Which 50xx cards (and how many) make sense in this budget for LLM work?
3. How does DGX Spark fare for LLM training of small models — anyone with experience?
4. What are the downsides of multiple-GPU setups (power, cooling, CPU/RAM bottlenecks) in this context?
5. Given this budget and goals, which route would you pick and why?
If anyone’s tried something similar (single 50xx vs multi-50xx vs DGX Spark) and has real numbers (batch sizes, throughput, RAM/VRAM usage) I'd love to hear about it.
Thanks a lot in advance! 🙏 | 2025-11-21T23:31:30 | https://www.reddit.com/r/LocalLLaMA/comments/1p3dwvl/hardware_for_trainingpeft_llms_up_to_7b_with_a/ | Muted-Examination278 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p3dwvl | false | null | t3_1p3dwvl | /r/LocalLLaMA/comments/1p3dwvl/hardware_for_trainingpeft_llms_up_to_7b_with_a/ | false | false | self | 1 | null |
WTF! Is this real? Teenagers are building AGI Research Lab | 0 | I just came across this reel. I can't digest the fact that these guys have their own model without any backing or support, because training/fine-tuning 15B-parameter models takes at least a decent amount of GPU. I think something fishy is going on; they may have just copy-pasted a model and added their name. Their website isn't working either, always showing a 48-hour timer! If someone has tried their models, let me know!!
| 2025-11-21T23:25:05 | https://v.redd.it/ocufvasb2p2g1 | Illustrious-Yak-9195 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p3drnp | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/ocufvasb2p2g1/DASHPlaylist.mpd?a=1766359518%2CNjM5MzViZTgzMTEzMThjNmVmZTkwMDRiYTI1YjZhZDg3M2YzOGI1ZTExODg3ZTcwZjk5MTQwYmU3ODM0YmY0Nw%3D%3D&v=1&f=sd', 'duration': 45, 'fallback_url': 'https://v.redd.it/ocufvasb2p2g1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 1280, 'hls_url': 'https://v.redd.it/ocufvasb2p2g1/HLSPlaylist.m3u8?a=1766359518%2COTcyZTUxZjA0ODkwNzBhODAzN2Q1OGI0ZDdiMDk4Yzc0MDk3N2JlOThkOGZmNzc4MzhjMjk2ZTY4ODFhODgxZg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ocufvasb2p2g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 720}} | t3_1p3drnp | /r/LocalLLaMA/comments/1p3drnp/wtf_is_this_real_teenagers_are_building_agi/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'Zm5rZDhhdGIycDJnMT39tAFomzuTd7I_pl690PnS2yimyQOqBiUzhnbeSi7c', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/Zm5rZDhhdGIycDJnMT39tAFomzuTd7I_pl690PnS2yimyQOqBiUzhnbeSi7c.png?width=108&crop=smart&format=pjpg&auto=webp&s=fd29f0647ce7b075dc9fa34251d3fdd161594d31', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/Zm5rZDhhdGIycDJnMT39tAFomzuTd7I_pl690PnS2yimyQOqBiUzhnbeSi7c.png?width=216&crop=smart&format=pjpg&auto=webp&s=8a699c0275c83c00dc42dacd0a682408bb3fbc64', 'width': 216}, {'height': 569, 'url': 'https://external-preview.redd.it/Zm5rZDhhdGIycDJnMT39tAFomzuTd7I_pl690PnS2yimyQOqBiUzhnbeSi7c.png?width=320&crop=smart&format=pjpg&auto=webp&s=9795b4bffc699046b2fb117d8ebd20ecc1fa5687', 'width': 320}, {'height': 1138, 'url': 'https://external-preview.redd.it/Zm5rZDhhdGIycDJnMT39tAFomzuTd7I_pl690PnS2yimyQOqBiUzhnbeSi7c.png?width=640&crop=smart&format=pjpg&auto=webp&s=18334e547bb15655fb4fcaa04c1a6a12fc77ffce', 'width': 640}, {'height': 1708, 'url': 
'https://external-preview.redd.it/Zm5rZDhhdGIycDJnMT39tAFomzuTd7I_pl690PnS2yimyQOqBiUzhnbeSi7c.png?width=960&crop=smart&format=pjpg&auto=webp&s=0de3cab73f25381ca81da7a067373322dcbf4b2c', 'width': 960}], 'source': {'height': 1845, 'url': 'https://external-preview.redd.it/Zm5rZDhhdGIycDJnMT39tAFomzuTd7I_pl690PnS2yimyQOqBiUzhnbeSi7c.png?format=pjpg&auto=webp&s=4471b5baea47efdeb32d32e29fec7e21078a335a', 'width': 1037}, 'variants': {}}]} | |
Orange Pi 6 - the world's best AI deal of 2025? | 0 | Orange Pi 6 Plus board has a 12-core CPU, NPU, GPU, 45 TOPS AI performance, dual 5 Gb Ethernet ports, and up to 64GB RAM. And it starts at a few hundred, because open-source missions are always improving communities and providing a fair chance for improvements... | 2025-11-21T23:19:02 | https://www.reddit.com/r/LocalLLaMA/comments/1p3dmlm/orange_pi_6_the_worlds_best_ai_deal_of_2025/ | Highwaytothebeach | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p3dmlm | false | null | t3_1p3dmlm | /r/LocalLLaMA/comments/1p3dmlm/orange_pi_6_the_worlds_best_ai_deal_of_2025/ | false | false | self | 0 | null |
Help with local Ollama and continue.dev setup | 0 | I'm very new to this, so apologies if this is way off. I basically just want to use nomic embed so that I can provide more context (usually just code and config files) to GPT without getting the error: "File exceeds model's context length"
For context, I primarily want to do this using the continue.dev VS Code plugin. The end goal is to provide multiple, substantial files for context within a chat session, and have my embedding model chunk that up and vectorize it so that GPT can do its thing. What am I missing?
Here is my current config.yaml file for continue.dev:
name: Local Assistant
version: 1.0.0
schema: v1
models:
- name: GPT
apiBase: http://192.168.0.142:11434
provider: ollama
model: gpt-oss:20b
roles:
- chat
- edit
- apply
capabilities:
- tool_use
- name: Qwen2.5-Coder
apiBase: http://192.168.0.142:11434
provider: ollama
model: qwen2.5-coder:1.5b-base
roles:
- autocomplete
- name: Nomic-Embed
apiBase: http://192.168.0.142:11434
provider: ollama
model: nomic-embed-text:latest
roles:
- embed
capabilities:
- tool_use
context:
- provider: code
- provider: docs
- provider: diff
- provider: terminal
- provider: problems
- provider: folder
- provider: tree
- provider: repo-map
- provider: os
- provider: embed
| 2025-11-21T23:15:01 | https://www.reddit.com/r/LocalLLaMA/comments/1p3djau/help_with_local_ollama_and_continuedev_setup/ | Big-Digman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p3djau | false | null | t3_1p3djau | /r/LocalLLaMA/comments/1p3djau/help_with_local_ollama_and_continuedev_setup/ | false | false | self | 0 | null |
Unconfirmed info of possible Claude 4.5 Opus checkpoint leak | 0 | There no realible Intel or info, this came from some random X account but I remember like what llama 1 leak first came, so i asking are there realible Intel or this just another psyops. | 2025-11-21T23:02:47 | Merchant_Lawrence | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p3d8y8 | false | null | t3_1p3d8y8 | /r/LocalLLaMA/comments/1p3d8y8/unconfirmed_info_of_possible_claude_45_opus/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '9q54ork2yo2g1', 'resolutions': [{'height': 33, 'url': 'https://preview.redd.it/9q54ork2yo2g1.png?width=108&crop=smart&auto=webp&s=748a8098b1cd0d84c61e1fe84e700b888440bf03', 'width': 108}, {'height': 67, 'url': 'https://preview.redd.it/9q54ork2yo2g1.png?width=216&crop=smart&auto=webp&s=900da309c576ce82f2b6c3d837f5c65aac679936', 'width': 216}, {'height': 100, 'url': 'https://preview.redd.it/9q54ork2yo2g1.png?width=320&crop=smart&auto=webp&s=622975a2a86ab6b4d335b81be6f0f5ac7e19993b', 'width': 320}], 'source': {'height': 120, 'url': 'https://preview.redd.it/9q54ork2yo2g1.png?auto=webp&s=e66bac294dc601993d546b7fa1fe74c6f9b4750e', 'width': 384}, 'variants': {}}]} | |
Inspired by a recent post: a list of the cheapest to most expensive 32GB GPUs on Amazon right now, Nov 21 2025 | 258 | Inspired by a recent post where someone was putting together a system based on two 16GB GPUs for $800 I wondered how one might otherwise conveniently acquire 32GB of reasonably performant VRAM as cheaply as possible?
Bezos to the rescue!
**Hewlett Packard Enterprise NVIDIA Tesla M10 Quad GPU Module**
* Cost: $279
* VRAM: GDDR5 (332 GB/s)
* PCIe: 3.0
* Link: https://www.amazon.com/Hewlett-Packard-Enterprise-NVIDIA-870046-001/dp/B075VQ5LF8
**Tesla V100 32GB SXM2 GPU W/Pcie Adapter & 6+2 Pin**
* Cost: $879.00
* VRAM: HBM2 (898 GB/s)
* PCIe: 3.0
* Link: https://www.amazon.com/Tesla-V100-32GB-Adapter-Computing/dp/B0FXWJ8HKD
**NVIDIA Tesla V100 Volta GPU Accelerator 32GB**
* Cost: $969
* VRAM: HBM2 (898 GB/s)
* PCIe: 3.0
* Link: https://www.amazon.com/NVIDIA-Tesla-Volta-Accelerator-Graphics/dp/B07JVNHFFX
**NVIDIA Tesla V100 (Volta) 32GB**
* Cost: $1144
* VRAM: HBM2 (898 GB/s)
* PCIe: 3.0
* Link: https://www.amazon.com/NVIDIA-Tesla-900-2G503-0310-000-NVLINK-GPU/dp/B07WDDNGXK
**GIGABYTE AORUS GeForce RTX 5090 Master 32G**
* Cost: $2599
* VRAM: GDDR7 (1792 GB/s)
* PCIe: 5.0
* Link: https://www.amazon.com/GIGABYTE-Graphics-WINDFORCE-GV-N5090AORUS-M-32GD/dp/B0DT7GHQMD
**PNY NVIDIA GeForce RTX™ 5090 OC Triple Fan**
* Cost: $2749
* VRAM: GDDR7 (1792 GB/s)
* PCIe: 5.0
* Link: https://www.amazon.com/PNY-GeForce-Overclocked-Graphics-3-5-Slot/dp/B0DTJF8YT4/
For comparison an RTX 3090 has 24GB of 936.2 GB/s GDDR6X, so for $879 it's hard to grumble about 32GB of 898 GB/s HBM2 in those V100s! | 2025-11-21T22:56:25 | https://www.reddit.com/r/LocalLLaMA/comments/1p3d34y/inspired_by_a_recent_post_a_list_of_the_cheapest/ | __JockY__ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p3d34y | false | null | t3_1p3d34y | /r/LocalLLaMA/comments/1p3d34y/inspired_by_a_recent_post_a_list_of_the_cheapest/ | false | false | self | 258 | {'enabled': False, 'images': [{'id': 'wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI.png?width=108&crop=smart&auto=webp&s=c7ef9713fb4fbf51d0d7da30fb558f95324a395b', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI.png?width=216&crop=smart&auto=webp&s=70f4ef0366eafa569960666b4537977954dc4da4', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI.png?width=320&crop=smart&auto=webp&s=e88e6f574ea2b6abf3644be5140a1ed8ad6d613c', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI.png?width=640&crop=smart&auto=webp&s=290ace7209dd3df0a237ec970a6a8b1662d523e1', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI.png?width=960&crop=smart&auto=webp&s=421952297faebb04d1038184216c053ab1f0bb56', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI.png?width=1080&crop=smart&auto=webp&s=2e3704dd3e397c6dbebe004c6cce33e8cd82d316', 'width': 1080}], 'source': {'height': 1000, 'url': 'https://external-preview.redd.it/wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI.png?auto=webp&s=8cdb17f0919f23f3fc3c0bd9dac21cd40118adda', 'width': 1910}, 'variants': {}}]} |
Any local coding AI tools that can understand multiple files yet? | 7 | I’d love to rely more on local models, but most local coding AI tools I’ve tried only work well within single files. The moment a task spans multiple modules or needs real context, everything breaks. I’ve been using Sweep AI in JetBrains when I need project-wide reasoning, but I’m still hoping for a local option that can do something similar. Anyone running a local setup that handles complex codebases? | 2025-11-21T22:41:20 | https://www.reddit.com/r/LocalLLaMA/comments/1p3cq0x/any_local_coding_ai_tools_that_can_understand/ | sash20 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p3cq0x | false | null | t3_1p3cq0x | /r/LocalLLaMA/comments/1p3cq0x/any_local_coding_ai_tools_that_can_understand/ | false | false | self | 7 | null |
best local Coding model for my 2 RTX 3090 + 2 RTX 3060 + 128 Gb of Ram | 1 | Hello community,
I'm trying to find the best coding model for my local LLM server. I have an ASUS X99-E WS with an LGA2011-v3 Xeon CPU + 128 GB of RAM and 4 GPUs: 2 RTX 3090s and 2 RTX 3060s, all running on x16 PCIe Gen 3.
I want to be able to switch a lot of my coding work from Claude Code to a local LLM that pushes my server to the limit. I also need a good context window because my coding projects tend to grow fast.
Any recommendations for good LLM models that fit in my VRAM/RAM? | 2025-11-21T22:38:39 | https://www.reddit.com/r/LocalLLaMA/comments/1p3cnmk/best_local_coding_model_for_my_2_rtx_3090_2_rtx/ | DeerWoodStudios | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p3cnmk | false | null | t3_1p3cnmk | /r/LocalLLaMA/comments/1p3cnmk/best_local_coding_model_for_my_2_rtx_3090_2_rtx/ | false | false | self | 1 | null |
RTX 3090 + 3070 (32GB) or RTX 3090 + 3060 12GB (36GB) - Bandwidth concerns? | 2 | Hello all,
Currently, I am running a 3090 + 3070 setup for a total of 32GB of VRAM on a Linux PC with 64GB of system RAM.
I have been offered a tempting price of $160 USD for an ASUS Dual GeForce RTX 3060 OC Edition 12GB.
Is it worth paying $160 for the RTX 3060 12GB and replacing the 3070 to get a total of 36GB of VRAM, but at a lower bandwidth compared to the 3070?
I am afraid this will bottleneck my 3090 too much.
What do y'all think? | 2025-11-21T22:15:27 | https://www.reddit.com/r/LocalLLaMA/comments/1p3c308/rtx_3090_3070_32gb_or_rtx_3090_3060_12gb_36gb/ | m_mukhtar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p3c308 | false | null | t3_1p3c308 | /r/LocalLLaMA/comments/1p3c308/rtx_3090_3070_32gb_or_rtx_3090_3060_12gb_36gb/ | false | false | self | 2 | null |
I made a writing app that runs locally in your browser | 7 | It's free, works with local models, and doesn't upload your embarrassing fan fiction anywhere.
Complain about bugs or other issues here: [https://www.reddit.com/r/inksprite/](https://www.reddit.com/r/inksprite/)
Or here: [https://github.com/inksprite-io/inksprite-release](https://github.com/inksprite-io/inksprite-release) | 2025-11-21T21:41:12 | https://app.inksprite.io/ | _glimmerbloom | app.inksprite.io | 1970-01-01T00:00:00 | 0 | {} | 1p3b8lk | false | null | t3_1p3b8lk | /r/LocalLLaMA/comments/1p3b8lk/i_made_a_writing_app_that_runs_locally_in_your/ | false | false | default | 7 | null |
When do you think open-source models will catch up to Gemini 3/Nano Banana pro? Who's the closest candidate right now? | 146 | I’m curious about the current gap between open-source models and something like Gemini 3. Do you think open-source will catch up anytime soon, and if so, which model is the closest right now?
| 2025-11-21T21:38:16 | https://www.reddit.com/r/LocalLLaMA/comments/1p3b60m/when_do_you_think_opensource_models_will_catch_up/ | abdouhlili | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p3b60m | false | null | t3_1p3b60m | /r/LocalLLaMA/comments/1p3b60m/when_do_you_think_opensource_models_will_catch_up/ | false | false | self | 146 | null |
🚀 Introducing a modular survival device with a local LLM: an autonomous neural network on a Raspberry Pi | 0 | We combine AI, a mesh network, and autonomous power supply in a ruggedized enclosure.
👾 Key features:
• Local LLM running on Raspberry Pi (no internet needed)
• Mesh networking for off-grid communication
• Solar power option for true autonomy
• Rugged, survival-rated design
Check it out: [https://doomboy.net/](https://doomboy.net/)
Would love your feedback and ideas! | 2025-11-21T21:32:18 | https://www.reddit.com/r/LocalLLaMA/comments/1p3b0km/introducing_a_modular_survival_device_with_a/ | dovudo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p3b0km | false | null | t3_1p3b0km | /r/LocalLLaMA/comments/1p3b0km/introducing_a_modular_survival_device_with_a/ | false | false | self | 0 | null |
vLLM quant of GLM 4.5 V size | 0 | I’ve got two Mi50s and I want to run GLM 4.5 V but the AWQ 4bit is 65GB+
Am I SOL? (do I have to wait for llama.cpp?) | 2025-11-21T21:23:38 | https://www.reddit.com/r/LocalLLaMA/comments/1p3asu4/vllm_quant_of_glm_45_v_size/ | thejacer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p3asu4 | false | null | t3_1p3asu4 | /r/LocalLLaMA/comments/1p3asu4/vllm_quant_of_glm_45_v_size/ | false | false | self | 0 | null |
Releasing APS — an open packaging standard + CLI for AI agents (v0.1) | 5 | I’ve been working on an open, vendor-neutral packaging standard for AI agents called **APS (Agent Packaging Standard)**.
It defines a simple packaging format (`agent.yaml` + code + metadata), a Python CLI (`aps build`, `aps publish`, `aps run`), and a lightweight local registry for sharing agents.
Two example agents (Echo + RAG) are included.
Docs + examples: [https://agentpackaging.org](https://agentpackaging.org)
Still early (v0.1) — looking for feedback from anyone building or distributing agents.
Do you think something like this will be useful? | 2025-11-21T20:49:36 | https://www.reddit.com/r/LocalLLaMA/comments/1p39xqt/releasing_aps_an_open_packaging_standard_cli_for/ | Clear-Let-8792 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p39xqt | false | null | t3_1p39xqt | /r/LocalLLaMA/comments/1p39xqt/releasing_aps_an_open_packaging_standard_cli_for/ | false | false | self | 5 | null |
Need Help in Studying Agent Selection in for Multi Agent Interaction | 1 | Hello everyone,
I’m working on an Agent-to-Agent (A2A) discovery experiment and I need to populate a "mock internet" of agents.
Instead of chat logs, I am looking for a dataset of **Agent Definitions** or **Manifests**—structured JSON/Python objects that describe an agent's identity, inputs, and outputs.
I'm using a schema similar to the `AgentCard` concept (see snippet below), where an agent declares its capabilities and URL:
public_agent_card = AgentCard(
    name='Stock_Analyzer_01',
    description='Returns sentiment analysis for a given ticker',
    url=' /',
    input_modes=['text'],
    skills=['finance_sentiment_v1'],
    ...
)
**My Question:** Does anyone know of a dataset that contains thousands of these "service descriptions"?
Essentially, I need a dump of "Agent Business Cards" or OpenAPI specs that I can wrap into `AgentCard` objects to simulate a busy network of functional agents.
Thanks! | 2025-11-21T20:45:29 | https://www.reddit.com/r/LocalLLaMA/comments/1p39u1a/need_help_in_studying_agent_selection_in_for/ | Motor_Display6380 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p39u1a | false | null | t3_1p39u1a | /r/LocalLLaMA/comments/1p39u1a/need_help_in_studying_agent_selection_in_for/ | false | false | self | 1 | null |
Where to download SAM 3D? | 4 | Hi,
I have requested from facebook huggingface but seems takes some time to approve.
Anyone has access to SAM 3D to download? | 2025-11-21T20:37:29 | https://www.reddit.com/r/LocalLLaMA/comments/1p39mlv/where_to_download_sam_3d/ | ElectronicDebate5154 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p39mlv | false | null | t3_1p39mlv | /r/LocalLLaMA/comments/1p39mlv/where_to_download_sam_3d/ | false | false | self | 4 | null |
Summarize Text Model | 2 | My boss wants me to help search for a small AI model that summarizes text. He wants it to have the capability to run local and to ideally keep it under 1gb in size. I've been doing some digging around, but not really sure of some of the best ones. Any recommendations or suggestions would be greatly appreciated, thanks! | 2025-11-21T20:33:00 | https://www.reddit.com/r/LocalLLaMA/comments/1p39ij2/summarize_text_model/ | swiedenfeld | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p39ij2 | false | null | t3_1p39ij2 | /r/LocalLLaMA/comments/1p39ij2/summarize_text_model/ | false | false | self | 2 | null |
Grok 4.1 fast speed | 0 | I was just wondering if there’s a way to tune the thinking ability of the Grok 4.1 Fast reasoning model. I tried the non-reasoning mode, but it’s not giving good results. On the other hand, the reasoning mode takes more than a minute, so I’d like to know if there’s a way to accelerate that. | 2025-11-21T20:30:58 | https://www.reddit.com/r/LocalLLaMA/comments/1p39gpe/grok_41_fast_speed/ | Active_Piglet_9105 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p39gpe | false | null | t3_1p39gpe | /r/LocalLLaMA/comments/1p39gpe/grok_41_fast_speed/ | false | false | self | 0 | null |
Budget Hardware Recommendations (1.3k) | 3 | Hey all, I'm trying to evaluate some options for running models locally. Eyeballing best price-to-performance. My main work machine is a MBP M1Pro 16gb that I use for webdev. Ideally, this new machine would just be for offloading AI workloads and experimenting.
Some options I'm considering are -
* Framework Mainboard (base) Ryzen AI 385 (32gb RAM)
* Mac Mini M4 Pro (24gb RAM)
* Mac Studio M1 Max (32gb RAM) - I've seen 64gb occasionally at 1.2k
Max budget is 1.3k USD, but if possible, I'd like to be closer to 1k. Is this a realistic budget for this? | 2025-11-21T20:21:51 | https://www.reddit.com/r/LocalLLaMA/comments/1p398kr/budget_hardware_recommendations_13k/ | xxxmralbinoxxx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p398kr | false | null | t3_1p398kr | /r/LocalLLaMA/comments/1p398kr/budget_hardware_recommendations_13k/ | false | false | self | 3 | null |
Dell puts 870 INT8 TOPS in Pro Max 16 Plus laptop with dual Qualcomm AI-100 discrete NPUs and 128GB LPDDR5X | 68 | Dell is shipping the Pro Max 16 Plus laptop with Qualcomm’s discrete AI-100 Ultra NPU, delivering 870 INT8 TOPS at 150W TDP with 128GB LPDDR5X memory, enabling local inference of AI models up to 120 billion parameters. The system pairs this with an Intel Core Ultra 9 285HX vPro CPU (24 cores) and 64GB system RAM, but notably omits a discrete GPU, relying instead on Arrow Lake-HX’s integrated graphics, as the NPU occupies the thermal and power budget typically allocated to a dGPU. The dual-NPU configuration provides 64GB dedicated AI memory and supports FP16 precision inference, positioning the device as an “edge server in a backpack”. | 2025-11-21T20:08:24 | https://www.techpowerup.com/343143/dell-ship-pro-max-16-plus-laptops-with-qualcomms-discrete-npu | Balance- | techpowerup.com | 1970-01-01T00:00:00 | 0 | {} | 1p38wp2 | false | null | t3_1p38wp2 | /r/LocalLLaMA/comments/1p38wp2/dell_puts_870_int8_tops_in_pro_max_16_plus_laptop/ | false | false | 68 | {'enabled': False, 'images': [{'id': 'k6ugreuNYLcJLu7o30ZlDvM4GYr0mtEvIvPMXJ8mR2c', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/k6ugreuNYLcJLu7o30ZlDvM4GYr0mtEvIvPMXJ8mR2c.jpeg?width=108&crop=smart&auto=webp&s=c94475298300c066f93c6fce55744257a4fca7c5', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/k6ugreuNYLcJLu7o30ZlDvM4GYr0mtEvIvPMXJ8mR2c.jpeg?width=216&crop=smart&auto=webp&s=ff9cc718a51c89f0d3bc3375a9432f4b090629bc', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/k6ugreuNYLcJLu7o30ZlDvM4GYr0mtEvIvPMXJ8mR2c.jpeg?width=320&crop=smart&auto=webp&s=e0ad60e47451aba67565f04064b222314c50e9ea', 'width': 320}, {'height': 401, 'url': 'https://external-preview.redd.it/k6ugreuNYLcJLu7o30ZlDvM4GYr0mtEvIvPMXJ8mR2c.jpeg?width=640&crop=smart&auto=webp&s=675947acdabed7d61c07e99006c75339ee1cfa5f', 'width': 640}, {'height': 602, 'url': 
'https://external-preview.redd.it/k6ugreuNYLcJLu7o30ZlDvM4GYr0mtEvIvPMXJ8mR2c.jpeg?width=960&crop=smart&auto=webp&s=b1d4b5bba160ae8e99c39fe5ccf0c88783892042', 'width': 960}, {'height': 678, 'url': 'https://external-preview.redd.it/k6ugreuNYLcJLu7o30ZlDvM4GYr0mtEvIvPMXJ8mR2c.jpeg?width=1080&crop=smart&auto=webp&s=04800880784cac306694ddd3aaa8155990475a90', 'width': 1080}], 'source': {'height': 960, 'url': 'https://external-preview.redd.it/k6ugreuNYLcJLu7o30ZlDvM4GYr0mtEvIvPMXJ8mR2c.jpeg?auto=webp&s=c1fcdaff1f7f3d0d4c29ebc6e1ceaa8624db49ce', 'width': 1529}, 'variants': {}}]} | |
Any better option around $20k? | 0 | 2025-11-21T19:41:54 | https://store.supermicro.com/us_en/configuration/view/?cid=1000408096&6283 | JMWTech | store.supermicro.com | 1970-01-01T00:00:00 | 0 | {} | 1p388f3 | false | null | t3_1p388f3 | /r/LocalLLaMA/comments/1p388f3/any_better_option_around_20k/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'DOdEh8Aijo4xjgC38n0C3zRi0yyTvDPuwBBty7hb8to', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/DOdEh8Aijo4xjgC38n0C3zRi0yyTvDPuwBBty7hb8to.jpeg?width=108&crop=smart&auto=webp&s=446b8d2ae3fbeade0802d256a84f5b0b3b8b2346', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/DOdEh8Aijo4xjgC38n0C3zRi0yyTvDPuwBBty7hb8to.jpeg?width=216&crop=smart&auto=webp&s=b8feba6f04f3ede6cf6150030eb7ccfbb9bd9049', 'width': 216}], 'source': {'height': 265, 'url': 'https://external-preview.redd.it/DOdEh8Aijo4xjgC38n0C3zRi0yyTvDPuwBBty7hb8to.jpeg?auto=webp&s=ac5d446dd89afb2cfd8a9dd0a5f9cfeb235bc61e', 'width': 265}, 'variants': {}}]} | |
[SenseNova-SI] "Scaling Spatial Intelligence with Multimodal Foundation Models", Cai et al. 2025 | 1 | **Models**: [https://huggingface.co/collections/sensenova/sensenova-si](https://huggingface.co/collections/sensenova/sensenova-si)
**Code**: [https://github.com/OpenSenseNova/SenseNova-SI](https://github.com/OpenSenseNova/SenseNova-SI)
**Paper**: [https://arxiv.org/abs/2511.13719](https://arxiv.org/abs/2511.13719) | 2025-11-21T19:21:55 | https://www.reddit.com/r/LocalLLaMA/comments/1p37q0h/sensenovasi_scaling_spatial_intelligence_with/ | RecmacfonD | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p37q0h | false | null | t3_1p37q0h | /r/LocalLLaMA/comments/1p37q0h/sensenovasi_scaling_spatial_intelligence_with/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'M6RHhyDcEFDaIJyliClsUK1CK5R3sDroqRhE0Tnkr2s', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/M6RHhyDcEFDaIJyliClsUK1CK5R3sDroqRhE0Tnkr2s.png?width=108&crop=smart&auto=webp&s=b4487dfddf5c4dd12f34af9c4be027cc332d4c08', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/M6RHhyDcEFDaIJyliClsUK1CK5R3sDroqRhE0Tnkr2s.png?width=216&crop=smart&auto=webp&s=2b96401efb1641a88215eb2d0e37dcc5443e8d10', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/M6RHhyDcEFDaIJyliClsUK1CK5R3sDroqRhE0Tnkr2s.png?width=320&crop=smart&auto=webp&s=95b547a0460484579068e437be0acc946e2d734f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/M6RHhyDcEFDaIJyliClsUK1CK5R3sDroqRhE0Tnkr2s.png?width=640&crop=smart&auto=webp&s=556275c5996d0f1536b7af3a5e366b7e617c5593', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/M6RHhyDcEFDaIJyliClsUK1CK5R3sDroqRhE0Tnkr2s.png?width=960&crop=smart&auto=webp&s=9c116258c7a9578a4429fab4853d4e7ff6bef577', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/M6RHhyDcEFDaIJyliClsUK1CK5R3sDroqRhE0Tnkr2s.png?width=1080&crop=smart&auto=webp&s=53f950c8027108079678c7136e8400192b2f0f17', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/M6RHhyDcEFDaIJyliClsUK1CK5R3sDroqRhE0Tnkr2s.png?auto=webp&s=b799c43863d1e9df5da68bb37515b49f23ab8a53', 'width': 1200}, 'variants': {}}]} |
Anyone using LLMs for cybersecurity workflows? What's working? | 1 | I work in offensive security and I'm trying to figure out the best way to integrate AI into red team operations without constantly hitting guardrails.
Things I need help with:
- Malware analysis and reverse engineering
- Writing custom exploitation scripts
- Automating parts of reconnaissance
- Generating adversarial test cases
I know there are uncensored models out there, but curious what people are actually using in production for security work. Open-source? Self-hosted? Specialized services?
What's been your experience? | 2025-11-21T19:20:03 | https://www.reddit.com/r/LocalLLaMA/comments/1p37o9j/anyone_using_llms_for_cybersecurity_workflows/ | ozgurozkan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p37o9j | false | null | t3_1p37o9j | /r/LocalLLaMA/comments/1p37o9j/anyone_using_llms_for_cybersecurity_workflows/ | false | false | self | 1 | null |
GitHub - abdomody35/agent-sdk-cpp: A modern, header-only C++ library for building ReAct AI agents, supporting multiple providers, parallel tool calling, streaming responses, and more. | 8 | I made this library with a very simple and well-documented API.
Just released v0.1.0 with the following features:
* **ReAct Pattern**: Implement reasoning + acting agents that can use tools and maintain context
* **Tool Integration**: Create and integrate custom tools for data access, calculations, and actions
* **Multiple Providers**: Support for Ollama (local) and OpenRouter (cloud) LLM providers (more to come in the future)
* **Streaming Responses**: Real-time streaming for both reasoning and responses
* **Builder Pattern**: Fluent API for easy agent construction
* **JSON Configuration**: Configure agents using JSON objects
* **Header-Only**: No compilation required - just include and use | 2025-11-21T19:19:11 | https://github.com/abdomody35/agent-sdk-cpp | Choice_Restaurant516 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1p37ngq | false | null | t3_1p37ngq | /r/LocalLLaMA/comments/1p37ngq/github_abdomody35agentsdkcpp_a_modern_headeronly/ | false | false | default | 8 | {'enabled': False, 'images': [{'id': '6Bc-P25uBzgiR1mgHqDBhY65qLLLrU-GcO-GOnA4DkM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6Bc-P25uBzgiR1mgHqDBhY65qLLLrU-GcO-GOnA4DkM.png?width=108&crop=smart&auto=webp&s=a2ae1b46ed31d6864f96e84a578d2ae0e34692c8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/6Bc-P25uBzgiR1mgHqDBhY65qLLLrU-GcO-GOnA4DkM.png?width=216&crop=smart&auto=webp&s=d930529b43726f44f618474e960adafa004b3e48', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/6Bc-P25uBzgiR1mgHqDBhY65qLLLrU-GcO-GOnA4DkM.png?width=320&crop=smart&auto=webp&s=5c80da517f11bd40ded8094075583b9ef2a7f59c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/6Bc-P25uBzgiR1mgHqDBhY65qLLLrU-GcO-GOnA4DkM.png?width=640&crop=smart&auto=webp&s=8b45c69360f7428888c21064a441d60f838e271d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/6Bc-P25uBzgiR1mgHqDBhY65qLLLrU-GcO-GOnA4DkM.png?width=960&crop=smart&auto=webp&s=9105f1401e4ebd6dab0a5a49fef59a9c1b87cb63', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/6Bc-P25uBzgiR1mgHqDBhY65qLLLrU-GcO-GOnA4DkM.png?width=1080&crop=smart&auto=webp&s=c001595845a2e4526368f1e6131922aad52321c1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/6Bc-P25uBzgiR1mgHqDBhY65qLLLrU-GcO-GOnA4DkM.png?auto=webp&s=ba4810a89917ffae55411ff22ec9742cfeecec05', 'width': 1200}, 'variants': {}}]} |
Got GPT to make a favicon for my local models as a little going away present to myself / for my local LLMs. For once, it didn't do a half bad job. | 0 | 2025-11-21T18:52:53 | Impossible-Power6989 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p36yei | false | null | t3_1p36yei | /r/LocalLLaMA/comments/1p36yei/got_gpt_to_make_a_favicon_for_my_local_models_as/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '3ec9hhbhpn2g1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/3ec9hhbhpn2g1.jpeg?width=108&crop=smart&auto=webp&s=59ab050fd3aa68e2a14a9a9b0d4c8625ab55af5f', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/3ec9hhbhpn2g1.jpeg?width=216&crop=smart&auto=webp&s=a72b69cca907875cec85dca471e348b6e2cb45bc', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/3ec9hhbhpn2g1.jpeg?width=320&crop=smart&auto=webp&s=d3485faa38148e8d85e1dfdb87fc26e7845239dc', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/3ec9hhbhpn2g1.jpeg?width=640&crop=smart&auto=webp&s=555e71571ba48ec9391f10056f4fcd9303ca15dd', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/3ec9hhbhpn2g1.jpeg?width=960&crop=smart&auto=webp&s=79247583432da115a1bc6f9476f7bf2204d1a573', 'width': 960}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/3ec9hhbhpn2g1.jpeg?auto=webp&s=4eeebc9fcd88beb2da09c8251f92c4339e223447', 'width': 1024}, 'variants': {}}]} | ||
2x RTX 5060 TI 16 GB =32GB VRAM - | 88 | Is anyone up and running with a rig like this with 2x RTX 5060 Ti? How is it? What PSU does one need? How much compute do you lose when you have 2 GPUs instead of a single-card setup? And how would 2x 5060 Ti compare with a 5090?
How does one put together these GPUs in ComfyUI? Does one need to add new nodes to the workflows?
Is this worth it, I can get a RTX 5060 TI 16GB for around $400 each meaning that $800 for 32 GB VRAM feels very interesting with a Blackwell card! | 2025-11-21T18:38:39 | quantier | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p36l5f | false | null | t3_1p36l5f | /r/LocalLLaMA/comments/1p36l5f/2x_rtx_5060_ti_16_gb_32gb_vram/ | false | false | 88 | {'enabled': True, 'images': [{'id': 'o9ubysv-ybjo6Dd-egV-Kre5EJPzOnvR0Oru2gMmqQE', 'resolutions': [{'height': 87, 'url': 'https://preview.redd.it/ven6e8i8nn2g1.jpeg?width=108&crop=smart&auto=webp&s=5c9f1ad8468bbb6f49c00e375d68a9de0b6cda29', 'width': 108}, {'height': 175, 'url': 'https://preview.redd.it/ven6e8i8nn2g1.jpeg?width=216&crop=smart&auto=webp&s=9949ae6299e6926a55e7b0cec40ab393a55a5dff', 'width': 216}, {'height': 259, 'url': 'https://preview.redd.it/ven6e8i8nn2g1.jpeg?width=320&crop=smart&auto=webp&s=fadac3ac2852df000e77410da41fb963f55fbaaf', 'width': 320}, {'height': 519, 'url': 'https://preview.redd.it/ven6e8i8nn2g1.jpeg?width=640&crop=smart&auto=webp&s=ff9be1ee0ff290587c547010478fc54b1176c114', 'width': 640}], 'source': {'height': 662, 'url': 'https://preview.redd.it/ven6e8i8nn2g1.jpeg?auto=webp&s=a3efe571e8c9e7d9d15ce460a5c10a32e9ebd898', 'width': 815}, 'variants': {}}]} | ||
Made a site where AI models trade against each other. A local model is winning. | 92 | Been messing around with new Gemini this week and ended up building this thing where different LLMs compete as stock traders. I work in asset management so I was genuinely curious how these models would approach investing.
Some observations:
* Qwen (the only local model) is currently winning, mostly because it keeps 90% cash (saving for a GPU?)
* None of them understand position sizing. Like, at all. And they all have this weird overconfidence where they'll write a whole thesis and then make a trade that contradicts it.
Anyway it's not meant to be serious financial advice or anything. Just thought it was a fun way to see how these models actually think when you give them a concrete task.
Code is messy but it works. Considering doing a fully local version to stop burning my openrouter credits...
[http://wallstreetarena.xyz/](http://wallstreetarena.xyz/) | 2025-11-21T18:35:58 | https://www.reddit.com/r/LocalLLaMA/comments/1p36iln/made_a_site_where_ai_models_trade_against_each/ | 2degreestarget | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p36iln | false | null | t3_1p36iln | /r/LocalLLaMA/comments/1p36iln/made_a_site_where_ai_models_trade_against_each/ | false | false | self | 92 | null |
Jailbreaking AI with AI. ChatGPT failed miserably. | 0 | We configured [Spine](https://getspine.ai?utm_source=reddit) to cycle through 5 specific attack types (Invisible Characters, Payload Splitting, Emoji Smuggling, Synonym Swapping, and Typo-Based Injection) against the current big four.
The Results:
* Claude Sonnet 4.5 and Grok 4: Tanks. We couldn’t get a single successful jailbreak through Spine on these two.
* Gemini 3 Pro: It held up against invisible characters and emoji smuggling, but broke using payload splitting and synonym swapping.
* GPT-5.1: It failed 4 out of 5 tests. It seems highly susceptible to basic obfuscation techniques right now.
https://preview.redd.it/ik23fug3kn2g1.png?width=1358&format=png&auto=webp&s=7d2c96f35b3c0bf8b8c47808dbee102a98556a45
To abide by terms of service and whatnot, I won't share the prompts I used. But if anyone is interested, you can use [Spine](https://getspine.ai?utm_source=reddit) to run models in parallel for fastest testing. Goodluck! | 2025-11-21T18:27:05 | https://www.reddit.com/r/LocalLLaMA/comments/1p36adx/jailbreaking_ai_with_ai_chatgpt_failed_miserably/ | CharlieVonPierce | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p36adx | false | null | t3_1p36adx | /r/LocalLLaMA/comments/1p36adx/jailbreaking_ai_with_ai_chatgpt_failed_miserably/ | false | false | 0 | null | |
I'm stuck between deciding on a 16gb version of the 5060 or 12GB version of 5070. Please help. I'm new. | 1 | Good afternoon,
I am tired of paying for AI subscriptions and am ready to bite the bullet on a video card that can run local LLMs reasonably well. I am currently looking to use the LLM to write code and generate text, images, and video.
As the title says, I'm debating between the 5060 (16GB) and the 5070 (12GB), or possibly the 5070 Ti (16GB). Is the price difference worth it? I use AI almost every day and will probably set up my PC as a server for network access.
Here is my current setup:
**Current Build:**
* **CPU:** Ryzen 9 5900X (12-core)
* **Mobo:** Gigabyte X570 AORUS ELITE
* **RAM:** 64GB DDR4
* **Current GPU:** RTX 3060 Ti (8GB)
* **PSU:** 750W
* **BIOS:** F30 (September 2020)
**What I need it for:** Local LLMs, Stable Diffusion/FLUX, AI video gen, coding with local models
**The issues:**
1. **PSU:** 750W is tight for a 5070 Ti (~300W TDP) + 5900X (~105W). Upgrading to 1000W for headroom.
2. **BIOS:** Haven't updated since 2020 — need to flash before installing a Blackwell card.
3. **VRAM bottleneck:** Current 3060 Ti's 8GB is killing me for AI work. 16GB opens up 13B-30B models comfortably.
| 2025-11-21T18:20:48 | https://www.reddit.com/r/LocalLLaMA/comments/1p364m5/im_stuck_between_deciding_on_a_16gb_version_of/ | Endless_Patience3395 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p364m5 | false | null | t3_1p364m5 | /r/LocalLLaMA/comments/1p364m5/im_stuck_between_deciding_on_a_16gb_version_of/ | false | false | self | 1 | null |
How's your experience with Qwen3-Next-80B-A3B ? | 51 | I know llama.cpp support is still a short while away but surely some people here are able to run it with vLLM. I'm curious how it performs in comparison to gpt-oss-120b or nemotron-super-49B-v1.5 | 2025-11-21T18:16:15 | https://www.reddit.com/r/LocalLLaMA/comments/1p360cl/hows_your_experience_with_qwen3next80ba3b/ | woahdudee2a | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p360cl | false | null | t3_1p360cl | /r/LocalLLaMA/comments/1p360cl/hows_your_experience_with_qwen3next80ba3b/ | false | false | self | 51 | null |
GPT4ALL Hermes 2 not following directions | 0 | This is getting old and annoying
I've tried bartowski heretic versions, and they all crash GPT4ALL
Every LLM I used is censored to hell, or doesn't listen
They won't criticize any countries, or say they hate anything EVEN when I specifically set the system message TO say it hates something or someone
Can someone PLEASE help??? | 2025-11-21T18:08:45 | https://www.reddit.com/gallery/1p35tds | GlassHuckleberry3397 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1p35tds | false | null | t3_1p35tds | /r/LocalLLaMA/comments/1p35tds/gpt4all_hermes_2_not_following_directions/ | false | false | 0 | null | |
I made a free playground for comparing 10+ OCR models side-by-side | 307 | It's called OCR Arena, you can try it here: https://ocrarena.ai
There's so many new OCR models coming out all the time, but testing them is really painful. I wanted to give the community an easy way to compare leading foundation VLMs and open source OCR models side-by-side. You can upload any doc, run a variety of models, and view diffs easily.
So far I've added Gemini 3, dots, DeepSeek-OCR, olmOCR 2, Qwen3-VL-8B, and a few others.
Would love any feedback you have! And if there's any other models you'd like included, let me know.
(No surprise, Gemini 3 is top of the leaderboard right now) | 2025-11-21T17:54:07 | https://www.reddit.com/r/LocalLLaMA/comments/1p35f2c/i_made_a_free_playground_for_comparing_10_ocr/ | Emc2fma | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p35f2c | false | null | t3_1p35f2c | /r/LocalLLaMA/comments/1p35f2c/i_made_a_free_playground_for_comparing_10_ocr/ | false | false | self | 307 | null |
Any provider who offers stable quality? | 0 | I'm coding with grok-4.1-fast and other models, but sometimes the quality is badly degraded, as if the model had been swapped for GPT-3.5 or so. I'd rather have a reliable provider where model = model, not model = routed to an AAA+ model when traffic is light and to a Model E- when the servers are busy.
Do you know what I can use for coding with decent quality, a low price per 1M tokens, and the certainty that model = model, always? | 2025-11-21T17:34:32 | https://www.reddit.com/r/LocalLLaMA/comments/1p34w3t/any_provider_who_offer_stable_quality/ | Ok-Scarcity-7875 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p34w3t | false | null | t3_1p34w3t | /r/LocalLLaMA/comments/1p34w3t/any_provider_who_offer_stable_quality/ | false | false | self | 0 | null |
Minimax M2 - REAP 139B | 21 | Has anyone done any actual (coding) work with this model yet?
At 80GB (Q4_K) it should fit on the Spark, the AMD Ryzen 395+ and the RTX PRO.
The benchmarks are pretty good for prompt processing and fine for TG.
Device 0: NVIDIA RTX PRO 6000 Blackwell Workstation Edition, compute capability 12.0, VMM: yes
| model | size | params | backend | ngl | n_ubatch | fa | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | -------: | -: | --------------: | -------------------: |
| minimax-m2 230B.A10B Q4_K - Medium | 78.40 GiB | 139.15 B | CUDA | 99 | 4096 | 1 | pp1024 | 3623.43 ± 14.19 |
| minimax-m2 230B.A10B Q4_K - Medium | 78.40 GiB | 139.15 B | CUDA | 99 | 4096 | 1 | pp2048 | 4224.81 ± 32.53 |
| minimax-m2 230B.A10B Q4_K - Medium | 78.40 GiB | 139.15 B | CUDA | 99 | 4096 | 1 | pp3072 | 3950.17 ± 26.11 |
| minimax-m2 230B.A10B Q4_K - Medium | 78.40 GiB | 139.15 B | CUDA | 99 | 4096 | 1 | pp4096 | 4202.56 ± 18.56 |
| minimax-m2 230B.A10B Q4_K - Medium | 78.40 GiB | 139.15 B | CUDA | 99 | 4096 | 1 | pp5120 | 3984.08 ± 21.77 |
| minimax-m2 230B.A10B Q4_K - Medium | 78.40 GiB | 139.15 B | CUDA | 99 | 4096 | 1 | pp6144 | 4601.65 ± 1152.92 |
| minimax-m2 230B.A10B Q4_K - Medium | 78.40 GiB | 139.15 B | CUDA | 99 | 4096 | 1 | pp7168 | 3935.73 ± 23.47 |
| minimax-m2 230B.A10B Q4_K - Medium | 78.40 GiB | 139.15 B | CUDA | 99 | 4096 | 1 | pp8192 | 4003.78 ± 16.54 |
| minimax-m2 230B.A10B Q4_K - Medium | 78.40 GiB | 139.15 B | CUDA | 99 | 4096 | 1 | tg128 | 133.10 ± 51.97 |
Device 0: NVIDIA RTX PRO 6000 Blackwell Workstation Edition, compute capability 12.0, VMM: yes
| model | size | params | backend | ngl | n_ubatch | fa | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | -------: | -: | --------------: | -------------------: |
| minimax-m2 230B.A10B Q4_K - Medium | 78.40 GiB | 139.15 B | CUDA | 99 | 4096 | 1 | pp10240 | 3905.55 ± 22.55 |
| minimax-m2 230B.A10B Q4_K - Medium | 78.40 GiB | 139.15 B | CUDA | 99 | 4096 | 1 | pp20480 | 3555.30 ± 175.54 |
| minimax-m2 230B.A10B Q4_K - Medium | 78.40 GiB | 139.15 B | CUDA | 99 | 4096 | 1 | pp30720 | 3049.43 ± 71.14 |
| minimax-m2 230B.A10B Q4_K - Medium | 78.40 GiB | 139.15 B | CUDA | 99 | 4096 | 1 | pp40960 | 2617.13 ± 59.72 |
| minimax-m2 230B.A10B Q4_K - Medium | 78.40 GiB | 139.15 B | CUDA | 99 | 4096 | 1 | pp51200 | 2275.03 ± 34.24 | | 2025-11-21T17:09:00 | https://www.reddit.com/r/LocalLLaMA/comments/1p347rt/minimax_m2_reap_139b/ | johannes_bertens | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p347rt | false | null | t3_1p347rt | /r/LocalLLaMA/comments/1p347rt/minimax_m2_reap_139b/ | false | false | self | 21 | null |
Bedrock invoke_model returning *two JSONs* separated by <|eot_id|> when using Llama 4 Maverick — anyone else facing this? | 0 |
I'm using **invoke_model** in Bedrock with Llama 4 Maverick.
My prompt format looks like this (as per the docs):
```
<|begin_of_text|>
<|start_header_id|>system<|end_header_id|>
...system prompt...<|eot_id|>
...chat history...
<|start_header_id|>user<|end_header_id|>
...user prompt...<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
```
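For reference, a small helper that assembles this template from a system prompt, chat history, and user turn might look like this (a sketch; the helper name is my own, and the actual boto3 `invoke_model` call is omitted since it needs AWS credentials):

```python
def build_llama_prompt(system, history, user):
    """Assemble a Llama-style chat prompt from plain strings.

    `history` is a list of (role, text) pairs, e.g. ("user", "hi").
    Every turn except the final assistant header is closed with <|eot_id|>.
    """
    parts = [
        "<|begin_of_text|>",
        "<|start_header_id|>system<|end_header_id|>",
        f"{system}<|eot_id|>",
    ]
    for role, text in history:
        parts.append(f"<|start_header_id|>{role}<|end_header_id|>")
        parts.append(f"{text}<|eot_id|>")
    parts += [
        "<|start_header_id|>user<|end_header_id|>",
        f"{user}<|eot_id|>",
        # Open the assistant turn and let the model complete it.
        "<|start_header_id|>assistant<|end_header_id|>",
    ]
    return "\n".join(parts)
```

The resulting string would then go into the `prompt` field of the `invoke_model` request body.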
## Problem:
**The model randomly returns TWO JSON responses**, separated by `<|eot_id|>`.
And only Llama 4 Maverick does this.
Same prompt → llama-3.3 / llama-3.1 = *no issue*.
Example (trimmed):
```
{
"answers": {
"last_message": "I'd like a facial",
"topic": "search"
},
"functionToRun": {
"name": "catalog_search",
"params": { "query": "facial" }
}
}
```
<|eot_id|>
assistant
```
{
"answers": {
"last_message": "I'd like a facial",
"topic": "search"
},
"functionToRun": {
"name": "catalog_search",
"params": { "query": "facial" }
}
}
```
Most of the time it sends both blocks — almost identical — and my parser fails because I expect a *single* JSON.
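Until there's an upstream fix, one client-side workaround is to split on `<|eot_id|>` and parse only the first turn. A minimal sketch (the helper name is mine):

```python
import json

EOT = "<|eot_id|>"

def parse_first_json(raw: str) -> dict:
    """Return the JSON object from the first assistant turn only.

    Assumes the answer is a single JSON object, possibly followed by
    <|eot_id|> plus a near-duplicate second turn, as in the output above.
    """
    first_turn = raw.split(EOT, 1)[0]
    # Trim any stray text before the first '{' or after the last '}'.
    start = first_turn.index("{")
    end = first_turn.rindex("}") + 1
    return json.loads(first_turn[start:end])
```

Brittle if a payload ever contains the literal token, but it covers the duplicated-turn case.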
## Questions:
* Is this expected behavior for **Llama 4 Maverick** with `invoke_model`?
* Is `converse` internally stripping `<|eot_id|>` or merging turns differently?
* How are you handling or suppressing the second JSON block?
* Anyone seen official Bedrock guidance for this?
Any insights appreciated!
| 2025-11-21T16:51:29 | https://www.reddit.com/r/LocalLLaMA/comments/1p33qsm/bedrock_invoke_model_returning_two_jsons/ | Ambitious-Thought946 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p33qsm | false | null | t3_1p33qsm | /r/LocalLLaMA/comments/1p33qsm/bedrock_invoke_model_returning_two_jsons/ | false | false | self | 0 | null |
Why have a local model? And is a MacBook Air M4 with 24GB RAM and 256GB storage enough? | 0 | Hello! Can I run multiple types of LLMs (multimodal LLMs like Gemini 3: video, text, etc.)? | 2025-11-21T16:29:59 | https://www.reddit.com/r/LocalLLaMA/comments/1p336cm/why_having_a_local_model_and_also_a_macbook_air/ | lordhcor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p336cm | false | null | t3_1p336cm | /r/LocalLLaMA/comments/1p336cm/why_having_a_local_model_and_also_a_macbook_air/ | false | false | self | 0 | null |
FYI / warning: default Nvidia fan speed control (Blackwell, maybe others) is horrible | 36 | As we all do, I obsessively monitor `nvtop` during AI or other heavy workloads on my GPUs. Well, the other day, I noticed a 5090 running at 81-83C but the fan only running at 50%. Yikes!
I tried everything in this thread: https://forums.developer.nvidia.com/t/how-to-set-fanspeed-in-linux-from-terminal/72705 to no avail. Even using the gui of nvidia-settings, as root, would not let me apply a higher fan speed.
I found 3 repos on Github to solve this. I am not affiliated with any of them, and I chose the Python option (credit: https://www.reddit.com/r/wayland/comments/1arjtxj/i_have_created_a_program_to_control_nvidia_gpus/ )
* Python option: https://github.com/ntchjb/nvidia-fan-controller
* Golang option: https://github.com/HackTestes/NVML-GPU-Control
* C option: https://github.com/ntchjb/nvidia-fan-controller
The python app worked like a charm: chnvml control -n "NVIDIA GeForce RTX 5090" -sp "0:30,30:35,35:40,40:50,50:65,60:100"
This ramped up my fan speeds right away and immediately brought my GPU temperature below 70C
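For anyone wanting to sanity-check a curve string like the one above, here's a hypothetical step-curve interpreter (I haven't verified how chnvml actually interpolates; this version simply picks the speed paired with the highest threshold at or below the current temperature):

```python
def fan_speed_for(temp_c: float, curve: str) -> int:
    """Map a GPU temperature to a fan % using a "temp:speed,..." curve string.

    Steps, not linear interpolation: return the speed paired with the
    highest temperature threshold that is <= temp_c.
    """
    points = sorted(
        (int(t), int(s))
        for t, s in (pair.split(":") for pair in curve.split(","))
    )
    speed = points[0][1]  # below the first threshold, use its speed
    for threshold, pct in points:
        if temp_c >= threshold:
            speed = pct
    return speed
```

With the curve from my command, 81C maps to a 100% fan target.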
I am pretty shocked it was a steady 81C+ and keeping the fan at 50%. Maybe it's better in other OS or driver versions. My env: Ubuntu, Nvidia driver version 580.95.05 | 2025-11-21T16:29:26 | https://www.reddit.com/r/LocalLLaMA/comments/1p335ta/fyi_warning_default_nvidia_fan_speed_control/ | sixx7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p335ta | false | null | t3_1p335ta | /r/LocalLLaMA/comments/1p335ta/fyi_warning_default_nvidia_fan_speed_control/ | false | false | self | 36 | null |
[2511.15304] Adversarial Poetry as a Universal Single-Turn Jailbreak Mechanism in Large Language Models | 1 | 2025-11-21T16:23:39 | https://arxiv.org/abs/2511.15304 | vladlearns | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1p330ip | false | null | t3_1p330ip | /r/LocalLLaMA/comments/1p330ip/251115304_adversarial_poetry_as_a_universal/ | false | false | default | 1 | null | |
New results on multimodal memory systems outperforming long-context ICL on LoCoMo | 6 | We’ve been exploring a multimodal memory architecture for personalized AI systems and ran a set of evaluations on the LoCoMo benchmark. The approach supports multimodal ingestion and retrieval (text, images, audio, video) and real-time querying.
In our tests, it consistently outperformed long-context in-context learning baselines, even at 29k tokens.
Happy to share details on the setup, ablations, evaluation protocol, or failure cases if helpful. | 2025-11-21T15:49:28 | https://www.reddit.com/r/LocalLLaMA/comments/1p323z7/new_results_on_multimodal_memory_systems/ | Day1_Perceptron | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p323z7 | false | null | t3_1p323z7 | /r/LocalLLaMA/comments/1p323z7/new_results_on_multimodal_memory_systems/ | false | false | self | 6 | null |
Everyone talks about LLM “memory loss”, but almost nobody looks at the structure that causes it | 0 | I’ve been testing long LLM threads and the weird thing is this: the memory drop-offs aren’t random, they follow the same structural triggers.
The collapse usually starts when the thread creates two possible “next steps”, and the model quietly commits to one and forgets the other.
It behaves less like running out of tokens, and more like switching to a different storyline.
The things that consistently prevent the collapse for way longer:
• Keep one canonical version of the task
• Don’t mix solved and unsolved states
• Avoid two interpretations of the same instruction
• Restate only the active constraints
• Remove anything that creates parallel paths
When the structure is clean, the “memory loss” almost never appears.
When it isn’t, the model derails fast.
What I’m interested in now is how people actually track the clean version of the thread.
Do you keep external notes, snapshot points, or something else? | 2025-11-21T15:30:04 | https://www.reddit.com/r/LocalLLaMA/comments/1p31lry/everyone_talks_about_llm_memory_loss_but_almost/ | Fickle_Carpenter_292 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p31lry | false | null | t3_1p31lry | /r/LocalLLaMA/comments/1p31lry/everyone_talks_about_llm_memory_loss_but_almost/ | false | false | self | 0 | null |
Best LLM for 4x Nvidia Tesla P40? | 0 | Hi there, I am looking to host a LLM as a coding assistant for a team of 5-10 people on 4x Tesla P40s.
Does anyone have any suggestions for frameworks (ollama, vLLM, etc), and/or the best model to use?
I don’t have much experience with LLM deployment, so something with an easy setup is preferable. Cheers :) | 2025-11-21T15:13:01 | https://www.reddit.com/r/LocalLLaMA/comments/1p3165c/best_llm_for_4x_nvidia_tesla_p40/ | Valuable_Zucchini180 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p3165c | false | null | t3_1p3165c | /r/LocalLLaMA/comments/1p3165c/best_llm_for_4x_nvidia_tesla_p40/ | false | false | self | 0 | null |
“Fly, you fools!” | 1 | 2025-11-21T15:10:13 | Birchi | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p313jb | false | null | t3_1p313jb | /r/LocalLLaMA/comments/1p313jb/fly_you_fools/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': '0gkjvfu1mm2g1', 'resolutions': [{'height': 129, 'url': 'https://preview.redd.it/0gkjvfu1mm2g1.jpeg?width=108&crop=smart&auto=webp&s=e23640da95c243b7897ab11d6f9e312c92625cb6', 'width': 108}, {'height': 259, 'url': 'https://preview.redd.it/0gkjvfu1mm2g1.jpeg?width=216&crop=smart&auto=webp&s=5faf5d6d807649a6657aaea8afe723a266a5d192', 'width': 216}, {'height': 384, 'url': 'https://preview.redd.it/0gkjvfu1mm2g1.jpeg?width=320&crop=smart&auto=webp&s=13b7f4f41571399c0b49991c9c208d1a68c94d90', 'width': 320}, {'height': 768, 'url': 'https://preview.redd.it/0gkjvfu1mm2g1.jpeg?width=640&crop=smart&auto=webp&s=f3b50aa9dc16e710fe46e670c98e52ae46d27862', 'width': 640}, {'height': 1152, 'url': 'https://preview.redd.it/0gkjvfu1mm2g1.jpeg?width=960&crop=smart&auto=webp&s=b34bf6c8edfb59c8c2f662b0ded96aa69281c5f1', 'width': 960}, {'height': 1296, 'url': 'https://preview.redd.it/0gkjvfu1mm2g1.jpeg?width=1080&crop=smart&auto=webp&s=95283df0aebab35e2a67520c632c6789c43a7e6d', 'width': 1080}], 'source': {'height': 1449, 'url': 'https://preview.redd.it/0gkjvfu1mm2g1.jpeg?auto=webp&s=5b9253879d530f86f433c45d38660e84953b688b', 'width': 1207}, 'variants': {}}]} | ||
I blink and there's a new AI release, f*ck how do you guys keep track? | 4 | Started with ChatGPT in 2022, then Gemini, Perplexity, Claude, DeepSeek, Kimi K2, Qwen, Cursor, Windsurf, Meta AI....and many more then we have version releases image generation, audio generation, video generation, and what not, it's mind-boggling how fast things are evolving.
Can someone give a quick roundup of exactly what’s been released in order since ChatGPT in 2022 and, among all of these, what you guys are currently using? | 2025-11-21T14:39:06 | https://www.reddit.com/r/LocalLLaMA/comments/1p30auk/i_blink_and_theres_a_new_ai_release_fck_how_do/ | vishalsingh0298 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p30auk | false | null | t3_1p30auk | /r/LocalLLaMA/comments/1p30auk/i_blink_and_theres_a_new_ai_release_fck_how_do/ | false | false | self | 4 | null |
Affordable(under $3k) setup for LLM post-training | 1 | I would like to know any suggestions what could be a good local setup for LLM post-training. Absolutely any suggestions! For PEFT up to 3-7B parameter models. Preferably under $3k | 2025-11-21T14:12:23 | https://www.reddit.com/r/LocalLLaMA/comments/1p2zn84/affordableunder_3k_setup_for_llm_posttraining/ | nik77kez | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p2zn84 | false | null | t3_1p2zn84 | /r/LocalLLaMA/comments/1p2zn84/affordableunder_3k_setup_for_llm_posttraining/ | false | false | self | 1 | null |
Finally ditched my Claude subscription after hitting the limit during a weekend sprint | 0 | So this happened last weekend.
I've been a Claude Pro subscriber for about 8 months now, mainly using it for a side project (building a property inspection and management dashboard for my brother's small rental business). The idea was tenants could upload photos of maintenance issues. The system would help categorize and prioritize them, plus pull in local contractor info and pricing automatically. Nothing fancy, but lots of moving parts.
Friday night I decided to finally tackle the image analysis feature I'd been putting off. You know how it goes - once you start, you find 10 other things that need fixing. I'm maybe 6 hours in, making good progress on the photo upload and categorization logic, when Claude just... stops. Hit my limit. It's 11 PM on a Saturday.
I'm sitting there frustrated because I'm in the zone, right? So I start looking at alternatives because I refuse to pay for Claude's new $200 tier just to finish my weekend.
I did some quick research:
Kimi K2: Their coding plans looked interesting ($19-199/month range), but I kept seeing comments about how the quota burns fast. Someone mentioned "three complex tasks and you're done for the week." For image processing work, that seemed risky.
MiniMax M2: More reasonable at $20/month for their Plus plan (300 prompts per 5 hours), but when I dug deeper, no vision capabilities. That's a dealbreaker when half my project is analyzing maintenance photos.
GLM-4.6: Their Pro plan (~600 prompts per 5 hours, about twice MiniMax's, at $15/month) caught my eye because it explicitly supported image understanding AND had some MCP tools including web search. The quota was supposedly 3x Claude Max's limit.
I went with GLM Pro, mostly because I needed vision stuff and didn't want to worry about running out mid-task.
Here's what caught me off guard:
**It is fast**. GLM was noticeably fast. I didn't time it scientifically, but responses that usually take 5-6 seconds were coming back in 2-3. When you're iterating quickly on bug fixes, this adds up.
**The vision integration works**. I could feed it photos of damaged appliances, leaky faucets, cracked tiles. It would identify the issue, suggest severity levels, and even help categorize them into maintenance types. This was exactly what I needed.
The web search MCP was useful. I had this half-baked idea to pull in local contractor pricing, but hadn't figured out the implementation. With the web search tool, I could literally ask it to help me build a feature that searches for "plumber rates in \[city\]" or "appliance repair costs" and incorporates that into the estimate system. It handled the research and code generation in one flow.
The quota actually matters. I ran the entire image analysis feature, built out the contractor pricing integration, fixed a bunch of frontend issues I'd been ignoring, plus added a basic reporting dashboard. Didn't hit any limits. It's now Sunday afternoon and I'm still going.
Honestly, if I'd gone with Kimi or MiniMax, I probably would've burned through the quota just on the image processing testing alone. I must've iterated 15-20 times getting the categorization logic right.
I'm not saying GLM is perfect, the UI takes some getting used to, and Claude still feels slightly more "natural" in how it explains things sometimes. The documentation could be clearer too. But for coding work where you need vision understanding, maybe some real-time data, and don't want to ration your prompts? This has been a good experience.
The MCP tools feel like they're actually integrated, not just bolted on. I know Kimi doesn't really have the same kind of tooling ecosystem, and MiniMax is more bare-bones. For a project like this where you need multiple capabilities working together, that matters.
Anyone else made the switch from the major providers? Curious if this is just new-user honeymoon phase or if I've actually found something sustainable here. | 2025-11-21T14:08:27 | https://www.reddit.com/r/LocalLLaMA/comments/1p2zjvg/finally_ditched_my_claude_subscription_after/ | BlueDolphinCute | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p2zjvg | false | null | t3_1p2zjvg | /r/LocalLLaMA/comments/1p2zjvg/finally_ditched_my_claude_subscription_after/ | false | false | self | 0 | null |
Hardcore function calling benchmark in backend coding agent. | 86 | ## Hardcore Benchmark
[AutoBE](https://github.com/wrtnlabs/autobe) is an open-source project that generates backend applications through extensive function calling.
As AutoBE uses LLM function calling in every phase instead of plain text generation, including the compiler's AST (Abstract Syntax Tree) structures of arbitrary depth, I think this may be the most extreme function calling benchmark ever.
- [DB Compiler's AST](https://github.com/wrtnlabs/autobe/blob/main/packages/interface/src/prisma/AutoBePrisma.ts)
- [API specification's AST](https://github.com/wrtnlabs/autobe/blob/main/packages/interface/src/openapi/AutoBeOpenApi.ts)
- [Test function's AST](https://github.com/wrtnlabs/autobe/blob/main/packages/interface/src/test/AutoBeTest.ts)
```typescript
// Example of AutoBE's AST structure
export namespace AutoBeOpenApi {
export type IJsonSchema =
| IJsonSchema.IConstant
| IJsonSchema.IBoolean
| IJsonSchema.IInteger
| IJsonSchema.INumber
| IJsonSchema.IString
| IJsonSchema.IArray
| IJsonSchema.IObject
| IJsonSchema.IReference
| IJsonSchema.IOneOf
| IJsonSchema.INull;
}
```
## Limitations
Of course, as you can see, the number of DB schemas and API operations generated for the same topic varies greatly by each model. When [`anthropic/claude-sonnet-4.5`](https://github.com/wrtnlabs/autobe-examples/tree/main/anthropic/claude-sonnet-4.5/shopping) and [`openai/gpt-5.1`](https://github.com/wrtnlabs/autobe-examples/tree/main/openai/gpt-5.1/shopping) create 630 and 2,000 test functions respectively for the same topic, [`qwen/qwen3-next-80b-a3b`](https://github.com/wrtnlabs/autobe-examples/tree/main/qwen/qwen3-next-80b-a3b-instruct/shopping) creates 360.
Moreover, function calling in AutoBE includes a [validation feedback](https://autobe.dev/docs/concepts/function-calling/#validation-feedback) process that detects detailed type errors and provides feedback to the AI for recovery, even when the AI makes mistakes and creates arguments of the wrong type.
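For anyone curious what such a loop looks like in practice, here is a minimal TypeScript sketch of the idea. All names here are hypothetical illustrations, not AutoBE's actual API; a real implementation would use an async LLM client and a schema validator (e.g. typia-style asserts) rather than these stand-ins.

```typescript
// Hypothetical sketch of a validation-feedback loop for function calling:
// run the call, type-check the produced arguments, and feed detailed
// errors back to the model until they validate.

interface ValidationError {
  path: string;     // where in the argument tree the type error occurred
  expected: string; // the type the schema demanded
  value: unknown;   // what the model actually produced
}

type Validator<T> = (input: unknown) => { data?: T; errors: ValidationError[] };

// A real LLM client would be async; kept synchronous here for clarity.
function callWithFeedback<T>(
  callModel: (feedback: ValidationError[]) => unknown,
  validate: Validator<T>,
  maxRetries = 3,
): T {
  let errors: ValidationError[] = [];
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const raw = callModel(errors);          // errors from the previous attempt
    const result = validate(raw);
    if (result.errors.length === 0) return result.data as T;
    errors = result.errors;                 // type errors become new feedback
  }
  throw new Error(`validation failed after ${maxRetries + 1} attempts`);
}
```

The key point is that the model is not just told "invalid"; it receives the path, expected type, and offending value for each mistake, which is what makes recovery on deeply nested AST arguments feasible.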
Simply scoring and ranking models on compilation/build success alone, or judging each model's function calling capability only by its success rate under validation feedback, still falls far short of a thorough evaluation.
Therefore, please understand that the current benchmark is simply uncontrolled and only indicates whether or not each AI model can properly construct extremely complex types, including compiler AST structures, through function calling.
> AutoBE is also still incomplete.
>
> Even if the backend application generated through this guarantees a 100% compilation success rate, it does not guarantee a 100% runtime success rate. This is an open-source project with a long way to go in development and mountains of research still to be done.
>
> However, we hope that this can serve as a reference for anyone planning function calling with extremely complex types like ours, and contribute even a little to the AI ecosystem.
## Promise
https://www.reddit.com/r/LocalLLaMA/comments/1o3604u/autobe_achieved_100_compilation_success_of/
A month ago, we achieved a 100% build success rate for small to medium-sized backend applications with `qwen3-next-80b-a3b`, and promised to complete RAG optimization in the future to enable the generation of large-scale backend applications on Local LLMs.
Now this has become possible with various Local LLMs such as Qwen3/DeepSeek/Kimi, in addition to commercial models like GPT and Sonnet. While prompting and RAG optimization may not yet be perfect, as models like GPT-5.1 run wild creating as many as 2,000 test functions, we will resolve this issue the next time we come back.
And since many people were curious about the performance of various Local LLMs besides `qwen3-next-80b-a3b`, we promised to consistently release benchmark data for them. While it's unfortunate that the benchmark we released today is inadequate due to lack of controlled variables and can only determine whether function calling with extremely complex types is possible or not, we will improve this as well next time.
We, the two AutoBE developers, will continue to dedicate ourselves to its development, striving to create an environment where you can freely generate backend applications on your local devices without cost burden.
In addition, we are always grateful to the specialists who build and freely distribute open-source AI models.
## Links
- AutoBE: https://github.com/wrtnlabs/autobe
- Benchmark Result: https://github.com/wrtnlabs/autobe-examples
| 2025-11-21T14:06:49 | https://www.reddit.com/gallery/1p2ziil | jhnam88 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1p2ziil | false | null | t3_1p2ziil | /r/LocalLLaMA/comments/1p2ziil/hardcore_function_calling_benchmark_in_backend/ | false | false | 86 | null | |
[2511.15392] DEPO: Dual-Efficiency Preference Optimization for LLM Agents | 2 | 2025-11-21T14:02:27 | https://arxiv.org/abs/2511.15392 | Elven77AI | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1p2zev2 | false | null | t3_1p2zev2 | /r/LocalLLaMA/comments/1p2zev2/251115392_depo_dualefficiency_preference/ | false | false | default | 2 | null | |
Maxsun Unveils Intel Arc Pro B60 Dual 48 GB Graphics Cards In Fanless & Liquid-Cooled “Single-Slot” Flavors | 4 | 2025-11-21T13:51:15 | https://x.com/MaxsunOfficial/status/1975154067040006643 | reps_up | x.com | 1970-01-01T00:00:00 | 0 | {} | 1p2z54s | false | null | t3_1p2z54s | /r/LocalLLaMA/comments/1p2z54s/maxsun_unveils_intel_arc_pro_b60_dual_48_gb/ | false | false | default | 4 | null | |
New reasoning methodology | 0 | I have been working on an algorithm that works wonders for numerical reasoning and problem solving. The algorithm/training method itself is novel and has never been tried before... I fine-tuned the Qwen 2.5 0.6B model on the GSM8K dataset first using this technique and it gave me 99.82% test accuracy... which I thought was absurd, so I double-checked for any data leaks (I couldn't find any abnormalities). I thought maybe something was wrong, so this time I trained it on the Big-Math dataset and then tested it on GSM8K... I hit 100%... I am not sure if I am sitting on an architectural breakthrough or just some huge misstep... Please guide me on what to do next... I have a 4090 and I am just running the training runs on it. Currently I am testing this with Gemma 3, waiting for the training run to complete.

I forgot to mention the best part... This methodology is modular and was just 35 MB for the 0.6B model; it increased iteration speed as well. | 2025-11-21T13:47:56 | https://www.reddit.com/r/LocalLLaMA/comments/1p2z28q/new_reasoning_methodology/ | rohit3627 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p2z28q | false | null | t3_1p2z28q | /r/LocalLLaMA/comments/1p2z28q/new_reasoning_methodology/ | false | false | self | 0 | null |
Ollama Grid Search v0.9.2: Enhanced LLM Evaluation and Comparison | 0 | Happy to announce the release of [**Ollama Grid Search v0.9.2**](https://github.com/dezoito/ollama-grid-search/), a tool created to improve the experience of those of use evaluating and experimenting with multiple LLMs
This addresses issues with damaged `.dmg` files that some users experienced during installation (a result of the GitHub Actions build script + Apple's signing requirements). The build process has been updated to improve the setup for all macOS users, particularly those on Apple Silicon (M1/M2/M3/M4) devices.
# About Ollama Grid Search
For those new to the project, Ollama Grid Search is a desktop application that automates the process of evaluating and comparing multiple Large Language Models (LLMs). Whether you're fine-tuning prompts, selecting the best model for your use case, or conducting A/B tests, this tool will make your life easier.
# Key Features
* **Multi-Model Testing**: Automatically fetch and test multiple models from your Ollama servers
* **Grid Search**: Iterate over combinations of models, prompts, and parameters
* **A/B Testing**: Compare responses from different prompts and models side-by-side
* **Prompt Management**: Built-in prompt database with autocomplete functionality
* **Experiment Logs**: Track, review, and re-run past experiments
* **Concurrent Inference**: Support for parallel inference calls to speed up evaluations
* **Visual Results**: Easy-to-read interface for comparing model outputs
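As a rough illustration of the grid-search feature above (a hedged sketch, not this project's actual code), enumerating every model × prompt × temperature combination and shaping the request body for Ollama's `POST /api/generate` endpoint might look like:

```typescript
// Illustrative sketch of a grid search over models × prompts × temperatures.
// The request shape mirrors Ollama's POST /api/generate API; actually
// sending it over HTTP is left out here.

interface GridPoint {
  model: string;
  prompt: string;
  temperature: number;
}

// Enumerate every combination of the experiment's parameters.
function buildGrid(
  models: string[],
  prompts: string[],
  temperatures: number[],
): GridPoint[] {
  const grid: GridPoint[] = [];
  for (const model of models)
    for (const prompt of prompts)
      for (const temperature of temperatures)
        grid.push({ model, prompt, temperature });
  return grid;
}

// JSON body for Ollama's POST /api/generate endpoint, one per grid point.
function toRequestBody(p: GridPoint): string {
  return JSON.stringify({
    model: p.model,
    prompt: p.prompt,
    stream: false,
    options: { temperature: p.temperature },
  });
}
```

Each resulting body would then be POSTed to a running Ollama server and the responses collected side by side, which is essentially what the app automates (plus concurrency and result tracking).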
# Getting Started
Download the latest release from our [releases page](https://github.com/dezoito/ollama-grid-search/releases).
# Resources
* [GitHub Repository](https://github.com/dezoito/ollama-grid-search)
* [Full Changelog](https://github.com/dezoito/ollama-grid-search/blob/main/CHANGELOG.md)
* [In-depth Grid Search Tutorial](https://dezoito.github.io/2023/12/27/rust-ollama-grid-search.html) | 2025-11-21T13:21:14 | https://www.reddit.com/r/LocalLLaMA/comments/1p2yg0x/ollama_grid_search_v092_enhanced_llm_evaluation/ | grudev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p2yg0x | false | null | t3_1p2yg0x | /r/LocalLLaMA/comments/1p2yg0x/ollama_grid_search_v092_enhanced_llm_evaluation/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'tkols1JhnJeTocc0wLTp95H1tmsvwPrGodke9Hqj0xs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tkols1JhnJeTocc0wLTp95H1tmsvwPrGodke9Hqj0xs.png?width=108&crop=smart&auto=webp&s=7577a79723b37d792c3a9e3317338368af19369b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tkols1JhnJeTocc0wLTp95H1tmsvwPrGodke9Hqj0xs.png?width=216&crop=smart&auto=webp&s=6f14ce0bcd851111157f76b2d98c435144237078', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tkols1JhnJeTocc0wLTp95H1tmsvwPrGodke9Hqj0xs.png?width=320&crop=smart&auto=webp&s=ce16486af438c5a1ff1e76ecc99b69234b231fc8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tkols1JhnJeTocc0wLTp95H1tmsvwPrGodke9Hqj0xs.png?width=640&crop=smart&auto=webp&s=a2221be185dadf59d6a29a5a7dee12199fc60beb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tkols1JhnJeTocc0wLTp95H1tmsvwPrGodke9Hqj0xs.png?width=960&crop=smart&auto=webp&s=44bbe055bf0b1f02ce4b7978a3171a0249012e23', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tkols1JhnJeTocc0wLTp95H1tmsvwPrGodke9Hqj0xs.png?width=1080&crop=smart&auto=webp&s=40711ecd34d86e7a16f4719f51fb0997c45c454d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/tkols1JhnJeTocc0wLTp95H1tmsvwPrGodke9Hqj0xs.png?auto=webp&s=407895cfec7e04a0bf602887cb57a3fbaffa7930', 'width': 1200}, 'variants': {}}]} |
Virtual Width Networks | 10 | We introduce Virtual Width Networks (VWN), a framework that delivers the benefits of wider representations without incurring the quadratic cost of increasing the hidden size. VWN decouples representational width from backbone width, expanding the embedding space while keeping backbone compute nearly constant. In our large‑scale experiment, an 8× expansion accelerates optimization by over 2× for next‑token and 3× for next‑2‑token prediction. The advantage amplifies over training as both the loss gap grows and convergence‑speedup ratio increase, showing that VWN is not only token‑efficient but also increasingly effective with scale. Moreover, we identify an approximately log‑linear scaling relation between virtual width and loss reduction, offering an initial empirical basis and motivation for exploring virtual‑width scaling as a new dimension of large‑model efficiency.
Seems like the capacity increase comes from enhancements to residual connection paths. Here's an overview that might be helpful:
> We reinterpret Virtual Width Networks (VWN) through the lens of connectivity as attention along the depth axis. ...(1) a plain feed-forward stack without residuals corresponds to a sliding window of size 1 (each layer processes only its current input and forgets the previous one); (2) residual connections implement a window of size 2 (current input plus the immediately preceding one); and (3) dense connectivity [ma2023denseformer, huang2017densely, xiao2025muddformer] extends the window size to include all previous layers, allowing each layer to reuse all prior representations. **VWN with Generalized Hyper-Connections (GHC) sits in between**: it realizes a learned, fixed-cost, linear-attention-like mechanism over depth that scales the accessible depth context.
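As a toy illustration of the depth-window framing in that quote (my own sketch, not from the paper), treat each "layer" as a function over whichever previous outputs the connectivity pattern lets it see:

```typescript
// Toy illustration of connectivity as a sliding window over depth.
// window = 1: plain stack (layer sees only its immediate input)
// window = 2: residual-style (current input plus the one before it)
// window = Infinity: dense connectivity (all previous representations)

type Layer = (inputs: number[]) => number;

function runStack(layers: Layer[], x0: number, window: number): number {
  const history: number[] = [x0]; // all representations produced so far
  for (const layer of layers) {
    // The layer may only read the last `window` entries of the history.
    const ctx = history.slice(Math.max(0, history.length - window));
    history.push(layer(ctx));
  }
  return history[history.length - 1];
}
```

With a simple summing layer, the three regimes produce visibly different outputs from the same input, which is the intuition for why widening the accessible depth context adds capacity without widening the backbone.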
With this idea at play, it wouldn't be easy to determine the power of a model. If increased hidden dimension size is the key to intelligent dense models, an MoE model could pair low active parameters and high depth (many layers) with an 8x virtual width and outperform in every way we know about. We might need a study that compares a dense baseline vs. increased total FFN parameters (MoE) vs. increased virtual width. This paper uses MoEs as the baseline, but it would be nice to see one enhancement at a time so we can better weigh the value of VWN against increased total FFN parameters (MoE). | 2025-11-21T13:01:35 | https://arxiv.org/abs/2511.11238v2 | Aaaaaaaaaeeeee | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1p2y0e5 | false | null | t3_1p2y0e5 | /r/LocalLLaMA/comments/1p2y0e5/virtual_width_networks/ | false | false | default | 10 | null |
Are local AI agents becoming viable for daily workflows? | 2 | With WebGPU getting stronger and smaller models becoming capable, local agents feel much more realistic than a year ago.
Do you think agent frameworks will move toward local-first approaches?
Curious about your experience | 2025-11-21T12:10:58 | https://www.reddit.com/r/LocalLLaMA/comments/1p2wzlg/are_local_ai_agents_becoming_viable_for_daily/ | Wide-Extension-750 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p2wzlg | false | null | t3_1p2wzlg | /r/LocalLLaMA/comments/1p2wzlg/are_local_ai_agents_becoming_viable_for_daily/ | false | false | self | 2 | null |
SOTA Evaluation Nov 2025: Gemini 3.0 vs GPT-5.1 vs Grok 4.1. Comprehensive Analysis of Visual Reasoning, Hallucination Rates, and Logic Profiles (N=110+ Sources) | 0 | I’ve aggregated performance metrics from over 110 sources, cross-referencing official technical reports with community benchmarks to cut through the marketing noise. The narrative that "GPT-5.1 wins everything" is objectively false based on the latest data.
# 🔬 Methodology & Verification
To ensure data integrity and filter out hallucinations, all benchmarks and user reports were aggregated and verified using **Perplexity Pro in Labs/Deep Research mode**, specifically cross-referencing across **Web** (Technical Blogs), **Academic** (arXiv papers/University Studies), and **Social** (Real-time user reports on Reddit/X).
[Verification Workflow. Data aggregation via Perplexity Pro Labs, filtering for high-citation academic sources and corroborated social sentiment.](https://preview.redd.it/k7s1tgl50l2g1.png?width=2940&format=png&auto=webp&s=4d2640abf4b932e6a57893fd5747a7cd3fbe63c8)
# 1. The Multidimensional Capability Profile
The most immediate takeaway from the aggregated data is the distinct "personality" of each architecture. We are no longer seeing uniform scaling across all capabilities.
As shown in the radar chart below, **Gemini 3 Pro** exhibits the most balanced profile, with a significant spike in Visual Reasoning. **GPT-5.1** remains highly skewed towards pure Math/Logic, while **Grok 4.1** is clearly optimized for Inference Speed.
https://preview.redd.it/ef5jb0812l2g1.png?width=2400&format=png&auto=webp&s=89e21f12b3d392ea995c0d3aaccf5da38ec306c6
**Multidimensional Capability Profile.** A radar chart illustrating the relative strengths of each model. Note Gemini 3.0's balanced envelope (blue), GPT-5.1's peak in Logic (orange), and Grok 4.1's offset towards Speed (green).
# 2. Visual Reasoning & Hallucination Variance
The most critical shift in the Nov 2025 cycle is in multimodal reliability. According to the raw benchmark data, Gemini 3.0 has established a clear lead in interpreting complex visual inputs and, crucially, maintaining low hallucination rates.
* **Visual Reasoning:** Gemini 3 Pro scores a **95/100**, significantly outperforming GPT-5.1's **85/100**. This indicates superior handling of spatial relationships and OCR-free diagram interpretation.
* **Hallucination Rate (Lower is Better):** This is the defining metric for production RAG. Gemini 3 Pro maintains a remarkable **5%** rate, whereas GPT-5.1 shows a much higher variance at **15%**, often fabricating information with high confidence.
https://preview.redd.it/ninhvi2f2l2g1.png?width=2400&format=png&auto=webp&s=b80beacdbf7149aa0e4e40e6c3707fb9737270a4
The bar chart above provides a direct comparison across these key metrics.
>**Technical Benchmark Comparison.** Direct score comparison across four key metrics. Note the inverse relationship in the "Hallucination Rate" column, where Gemini 3.0's lower score indicates superior reliability.
# 3. Pure Logic & Inference Speed
While Google dominates vision and reliability, OpenAI and xAI hold their ground in specific computational domains.
* **Math / Logic:** GPT-5.1 remains the SOTA for symbolic logic and complex mathematical proofs, scoring **98/100** compared to Gemini's **92/100**. Its Chain-of-Thought (CoT) process is still unmatched for depth in abstract problem-solving.
* **Coding Speed (TPS):** For latency-sensitive applications, Grok 4.1 is the clear winner. It achieves a score of **95/100** for speed, making it significantly faster than GPT-5.1's standard inference model (**80/100**).
# Final Technical Verdict & Raw Data Summary
Based on the aggregated data, the "best" model is entirely dependent on your specific workload bottleneck.
* **RAG / Vision / Production Agents:** **Gemini 3 Pro** is the objective choice due to its 95/100 visual score and class-leading 5% hallucination rate.
* **Deep Research / Math Proofs:** **GPT-5.1** remains essential for its 98/100 logic score, despite higher hallucinations.
* **Real-time Coding / Prototyping:** **Grok 4.1** is the optimal choice for its superior speed (95/100).
https://preview.redd.it/0qdqx50l2l2g1.png?width=1436&format=png&auto=webp&s=01e6001e7cd6984c74db3fc6c1a501b705e967f4
Below is the raw data matrix compiled from the benchmarks:
| 2025-11-21T12:09:27 | https://www.reddit.com/r/LocalLLaMA/comments/1p2wyj9/sota_evaluation_nov_2025_gemini_30_vs_gpt51_vs/ | ConstructionThese663 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p2wyj9 | false | null | t3_1p2wyj9 | /r/LocalLLaMA/comments/1p2wyj9/sota_evaluation_nov_2025_gemini_30_vs_gpt51_vs/ | false | false | self | 0 | null |
On the opportunity to add a Blackwell Pro 6000 to a home lab | 25 | Just some musing. I was searching on eBay for used RTX A6000s, imagining (sweet summer child that I am) that with the Blackwell introduction, prices on Ampere had become more reasonable.

It turns out that used A6000s are sold for a price close to the original card price. Brand new, or NOS at this point, the price is actually higher than at launch.

At this point I am wondering if the smart thing is buying a Pro 6000 and selling my 4090. It seems to be a neat 5,500 EUR expense, 90% of which could be recovered three or four years from now. | 2025-11-21T11:56:56 | https://www.reddit.com/r/LocalLLaMA/comments/1p2wpl5/on_the_opportunity_to_add_a_blackwell_pro_6000_to/ | Expensive-Paint-9490 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p2wpl5 | false | null | t3_1p2wpl5 | /r/LocalLLaMA/comments/1p2wpl5/on_the_opportunity_to_add_a_blackwell_pro_6000_to/ | false | false | self | 25 | null |
Which model to choose for coding with 8GB VRAM (assuming quantised) if I'm happy with slow rates like 1tk/s speed. | 44 | Trying to find the best local model I can use for aid in coding. My specs are: 5950X, 32GB RAM, 8GB RTX3070, so I'm severely limited on VRAM - but I seem to have much lower acceptable speeds than most people, so I'm happy to off-load a lot to the CPU to allow for a larger more capable model.
For me even as low as 1tk/s is plenty fast, I don't need an LLM to respond to me instantly, I can wait a minute for a reply.
So far after researching models that'd work with my GPU I landed on Qwen3-14B and GPT-OSS-20B, with the latter seeming better in my tests.
Both run pretty fast by my standards. Which leaves me wondering if I can push it higher and if so what model I should try? Is there anything better?
**Any suggestions?**
If it matters at all I'm primarily looking for help with GDScript, Java, C++, and Python. Not sure if there's any variance in programming language-proficiency between models. | 2025-11-21T11:53:47 | https://www.reddit.com/r/LocalLLaMA/comments/1p2wnh0/which_model_to_choose_for_coding_with_8gb_vram/ | MakeshiftApe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p2wnh0 | false | null | t3_1p2wnh0 | /r/LocalLLaMA/comments/1p2wnh0/which_model_to_choose_for_coding_with_8gb_vram/ | false | false | self | 44 | null |
RTX 3090 vs RX 7900 with ROCm, also Vulcan | 0 | Haven’t had time to try out Vulkan, really excited for it after everything I’ve been hearing!

Probably going to pick up a couple 3090s or 7900s, curious to know what folks’ experience has been using Radeon cards with ROCm and/or Vulkan?
Also does brand matter, Zotac / ASRock / ASUS / Gigabyte etc?
Do folks here roll the dice with refurbished, or buy new?
I will be using Linux, most likely Ubuntu. | 2025-11-21T11:44:55 | https://www.reddit.com/r/LocalLLaMA/comments/1p2whr9/rtx_3090_vs_rx_7900_with_rocm_also_vulcan/ | lfiction | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p2whr9 | false | null | t3_1p2whr9 | /r/LocalLLaMA/comments/1p2whr9/rtx_3090_vs_rx_7900_with_rocm_also_vulcan/ | false | false | self | 0 | null |
HunyuanVideo-1.5: A leading lightweight video generation model | 200 | https://huggingface.co/tencent/HunyuanVideo-1.5 | 2025-11-21T11:25:33 | https://www.reddit.com/r/LocalLLaMA/comments/1p2w5i6/hunyuanvideo15_a_leading_lightweight_video/ | abdouhlili | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p2w5i6 | false | null | t3_1p2w5i6 | /r/LocalLLaMA/comments/1p2w5i6/hunyuanvideo15_a_leading_lightweight_video/ | false | false | self | 200 | {'enabled': False, 'images': [{'id': 'u_miLnp0hmsMgexOHszQ7KKH-kLrkKJOKSQd5eOYAC4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/u_miLnp0hmsMgexOHszQ7KKH-kLrkKJOKSQd5eOYAC4.png?width=108&crop=smart&auto=webp&s=510e65c72af57889d8e540e633a30ae77f988885', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/u_miLnp0hmsMgexOHszQ7KKH-kLrkKJOKSQd5eOYAC4.png?width=216&crop=smart&auto=webp&s=0566ae56f4ba88ff15576b54fcc7df58e66b694f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/u_miLnp0hmsMgexOHszQ7KKH-kLrkKJOKSQd5eOYAC4.png?width=320&crop=smart&auto=webp&s=a5bf3d6c16988045b578cf1bffe941f8dd3697eb', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/u_miLnp0hmsMgexOHszQ7KKH-kLrkKJOKSQd5eOYAC4.png?width=640&crop=smart&auto=webp&s=92f8e7ad373279a4b2f5b9629310b4489ecd3229', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/u_miLnp0hmsMgexOHszQ7KKH-kLrkKJOKSQd5eOYAC4.png?width=960&crop=smart&auto=webp&s=884a9982e19b21789882e74c7ed13aa6038e2efa', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/u_miLnp0hmsMgexOHszQ7KKH-kLrkKJOKSQd5eOYAC4.png?width=1080&crop=smart&auto=webp&s=bd1f4cf52b417ac76cef427b80cb74b4efbbf0b2', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/u_miLnp0hmsMgexOHszQ7KKH-kLrkKJOKSQd5eOYAC4.png?auto=webp&s=e0b69693023f99390c94856dda285d2d303c6fc0', 'width': 1200}, 'variants': {}}]} |
gemma 2 | 0 | I am currently working on a project and trying to make a chatbot. I am using Gemma 2 since it's free and runs offline... I have not fine-tuned the model yet... What are the major things I should take into account to get precise and accurate responses when extracting information from the user and asking relevant questions based on the answers?

Could anyone kindly guide me through this? | 2025-11-21T11:11:50 | https://www.reddit.com/r/LocalLLaMA/comments/1p2vwu6/gemma_2/ | Head-Effective-4061 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p2vwu6 | false | null | t3_1p2vwu6 | /r/LocalLLaMA/comments/1p2vwu6/gemma_2/ | false | false | self | 0 | null |
As to why I use ChatGPT even though I have a Gemini subscription: | 109 | 2025-11-21T10:50:26 | EmirTanis | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p2vjym | false | null | t3_1p2vjym | /r/LocalLLaMA/comments/1p2vjym/as_to_why_i_use_chatgpt_even_though_i_have_a/ | false | false | 109 | {'enabled': True, 'images': [{'id': 'zAagMwnK2GPmcPGBb7jOIUNhuQj25xeggpQjlKJ68hA', 'resolutions': [{'height': 154, 'url': 'https://preview.redd.it/0l4pu8bzal2g1.png?width=108&crop=smart&auto=webp&s=95da36d44576726c57a7bf70f9817359a68930d4', 'width': 108}, {'height': 309, 'url': 'https://preview.redd.it/0l4pu8bzal2g1.png?width=216&crop=smart&auto=webp&s=76270e82c1b9a615a6e88fc0b0cbb8d5086d86c9', 'width': 216}, {'height': 458, 'url': 'https://preview.redd.it/0l4pu8bzal2g1.png?width=320&crop=smart&auto=webp&s=f58974ff373664eac33931baeb94cebb9c1644fb', 'width': 320}], 'source': {'height': 872, 'url': 'https://preview.redd.it/0l4pu8bzal2g1.png?auto=webp&s=228603ec09cb8ef45e41c6f9294b02b9785ac918', 'width': 609}, 'variants': {}}]} | |||
OMG THIS MODEL IS SOO GOOD AT TEXTURES! | 0 | [OMG THIS MODEL IS SOO GOOD AT TEXTURES](https://reddit.com/link/1p2vbmn/video/ogsa9auy8l2g1/player)
[Movementlabs.ai](http://Movementlabs.ai) momentum model is a beast, i'm stoked. i heard it was a fine tune or something but this is wow. | 2025-11-21T10:36:22 | https://www.reddit.com/r/LocalLLaMA/comments/1p2vbmn/omg_this_model_is_soo_good_at_textures/ | Zenaida_Darling | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p2vbmn | false | null | t3_1p2vbmn | /r/LocalLLaMA/comments/1p2vbmn/omg_this_model_is_soo_good_at_textures/ | false | false | self | 0 | null |
Epstein Files Document Embeddings (768D, Nomic) | 85 | Text embeddings generated from the House Oversight Committee's Epstein document release. (768D, Nomic)
# Source Dataset
**This dataset is derived from:** [tensonaut/EPSTEIN\_FILES\_20K](https://huggingface.co/datasets/tensonaut/EPSTEIN_FILES_20K)
The source dataset contains OCR'd text from the original House Oversight Committee PDF release.
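For anyone wanting to actually use these vectors, here is a minimal sketch of nearest-neighbor lookup over 768-D embeddings. This is illustrative only; the loading code and field names are assumptions (check the dataset card), and the query embedding would come from the same Nomic text embedder that produced the document vectors.

```typescript
// Minimal sketch of similarity search over 768-D document embeddings
// like those in this dataset. Loading from the dataset files is omitted;
// this shows only the core ranking step.

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank documents by cosine similarity to a query embedding.
function topK(
  query: number[],
  docs: { id: string; embedding: number[] }[],
  k: number,
): { id: string; score: number }[] {
  return docs
    .map((d) => ({ id: d.id, score: cosine(query, d.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```

For 20K documents a brute-force scan like this is plenty fast; an ANN index only becomes worthwhile at much larger scales.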
[https://huggingface.co/datasets/svetfm/epstein-files-nov11-25-house-post-ocr-embeddings](https://huggingface.co/datasets/svetfm/epstein-files-nov11-25-house-post-ocr-embeddings) | 2025-11-21T10:25:10 | https://www.reddit.com/r/LocalLLaMA/comments/1p2v5ap/epstein_files_document_embeddings_768d_nomic/ | qwer1627 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p2v5ap | false | null | t3_1p2v5ap | /r/LocalLLaMA/comments/1p2v5ap/epstein_files_document_embeddings_768d_nomic/ | false | false | self | 85 | {'enabled': False, 'images': [{'id': '5BZUgDW-t89nRSVH8hP18rw8Wcucq-BixsmgZkNDXY0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/5BZUgDW-t89nRSVH8hP18rw8Wcucq-BixsmgZkNDXY0.png?width=108&crop=smart&auto=webp&s=1b19d5a2f6cb1ae554f7910b20ab61adc78860b1', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/5BZUgDW-t89nRSVH8hP18rw8Wcucq-BixsmgZkNDXY0.png?width=216&crop=smart&auto=webp&s=132db28087e1e179384e362afc11b6c1c3714038', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/5BZUgDW-t89nRSVH8hP18rw8Wcucq-BixsmgZkNDXY0.png?width=320&crop=smart&auto=webp&s=9c4042a5ac1d5336d72702876713671a2cec7703', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/5BZUgDW-t89nRSVH8hP18rw8Wcucq-BixsmgZkNDXY0.png?width=640&crop=smart&auto=webp&s=331cce36f998bd78da3417b32b48f7950f98db80', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/5BZUgDW-t89nRSVH8hP18rw8Wcucq-BixsmgZkNDXY0.png?width=960&crop=smart&auto=webp&s=9de6d05081b92522cd3a5a0ae619e0a6d17743fc', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/5BZUgDW-t89nRSVH8hP18rw8Wcucq-BixsmgZkNDXY0.png?width=1080&crop=smart&auto=webp&s=8ab6221961e1865b4d99a89aac4675d66c737b67', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/5BZUgDW-t89nRSVH8hP18rw8Wcucq-BixsmgZkNDXY0.png?auto=webp&s=705a2c503b99566de66a81cb025134213983801f', 'width': 1200}, 'variants': {}}]} |
Deep Cogito v2.1, a new open weights 671B MoE model | 33 | [https://huggingface.co/collections/deepcogito/cogito-v21](https://huggingface.co/collections/deepcogito/cogito-v21)
https://preview.redd.it/wgqv3iva5l2g1.png?width=1920&format=png&auto=webp&s=7b23a040098d2ed9caa81a6a322d02e18d51cc0e
https://preview.redd.it/4rfhao3d5l2g1.png?width=1920&format=png&auto=webp&s=82dd4fcc80106c78f950f6516116123dad2f1b49
https://preview.redd.it/l88vmsue5l2g1.png?width=1920&format=png&auto=webp&s=da35111b441df51d43d4d5f04be4fb289b029525
| 2025-11-21T10:16:47 | https://www.reddit.com/r/LocalLLaMA/comments/1p2v0fe/deep_cogito_v21_a_new_open_weights_671b_moe_model/ | BreakfastFriendly728 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p2v0fe | false | null | t3_1p2v0fe | /r/LocalLLaMA/comments/1p2v0fe/deep_cogito_v21_a_new_open_weights_671b_moe_model/ | false | false | 33 | null | |
Created an interactive presentation for a session I conducted on MCP. Thoughts? | 1 | [removed] | 2025-11-21T10:13:34 | https://www.reddit.com/r/LocalLLaMA/comments/1p2uyke/created_an_interactive_presentation_for_a_session/ | Interesting-Pie-jj | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p2uyke | false | null | t3_1p2uyke | /r/LocalLLaMA/comments/1p2uyke/created_an_interactive_presentation_for_a_session/ | false | false | self | 1 | null |