
Oracle Uncertainty Propagation - Summary

🎯 What Changed

The oracle system was redesigned from binary rejection to continuous uncertainty propagation, replacing individual oracle heuristics with collective scoring.

📊 Key Differences

Old Approach: Binary Rejection

# Individual oracle heuristics
arkit_agrees = rotation_error < 2.0  # Hard threshold (degrees)

# Binary voting
if (arkit_agrees + lidar_agrees) / 2 >= 0.7:
    confidence = 1.0  # Accept
    use_for_training = True
else:
    confidence = 0.0  # Reject
    use_for_training = False  # Discarded

Problems:

  • ❌ Hard thresholds (arbitrary cutoffs)
  • ❌ Binary decisions (all-or-nothing)
  • ❌ No uncertainty propagation
  • ❌ Individual heuristics (each oracle has its own threshold)
  • ❌ Information loss (uncertainty discarded)

New Approach: Continuous Uncertainty Propagation

# Collective scoring with uncertainty models
arkit_uncertainty = compute_uncertainty(rotation_error, arkit_std)
lidar_uncertainty = compute_uncertainty(depth_error, lidar_std)

# Bayesian fusion (inverse variance weighting)
fused_uncertainty = fuse_uncertainties(
    [arkit_uncertainty, lidar_uncertainty],
    weights=[arkit_reliability, lidar_reliability]
)

# Continuous confidence (inverse of normalized uncertainty)
confidence = 1.0 / (1.0 + normalized_uncertainty)  # [0.0-1.0]

# All pixels used, weighted by confidence
use_for_training = True  # Always
loss_weight = confidence  # Continuous weighting

Benefits:

  • ✅ Continuous scores (0.0-1.0, not just 0 or 1)
  • ✅ Uncertainty propagation (covariance estimates)
  • ✅ Collective scoring (all oracles together)
  • ✅ No arbitrary thresholds
  • ✅ Information preserved (uncertainty propagated)

🔄 Confidence Masks Explained

What Are Confidence Masks?

Confidence masks are continuous scores (0.0-1.0) that indicate how much to trust each DA3 prediction based on oracle agreement. They propagate uncertainty rather than making binary decisions.

Structure

confidence_mask = {
    # Continuous confidence scores
    'collective_confidence': (N, H, W) float,  # [0.0-1.0]
    'pose_confidence': (N,) float,  # Frame-level [0.0-1.0]
    'depth_confidence': (N, H, W) float,  # Pixel-level [0.0-1.0]

    # Uncertainty estimates
    'collective_uncertainty': (N, H, W) float,  # Normalized; confidence = 1 / (1 + u)
    'pose_uncertainty': (N, 6) float,  # 6D pose uncertainty
    'depth_uncertainty': (N, H, W) float,  # Depth std in meters

    # Covariance matrices
    'pose_covariance': (N, 6, 6) float,  # Full pose covariance
    'depth_covariance': (N, H, W) float,  # Depth variance
}

How They're Computed

  1. Oracle Uncertainty Models - Each oracle has base uncertainty (std)
  2. Error-Scaled Uncertainty - Uncertainty increases with error magnitude
  3. Bayesian Fusion - Combine oracles using inverse variance weighting
  4. Confidence from Uncertainty - confidence = 1.0 / (1.0 + normalized_uncertainty) (sketched below)
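
A minimal sketch of these four steps, following the function names used earlier in this document; the error scaling, reliability handling, and normalization constant are illustrative assumptions, not the module's actual API:

import numpy as np

def compute_uncertainty(error, base_std, scale=1.0):
    # Steps 1-2: start from the oracle's base std and grow it
    # proportionally with the observed error magnitude
    return base_std * (1.0 + scale * np.abs(error))

def fuse_uncertainties(uncertainties, reliabilities, eps=1e-8):
    # Step 3: Bayesian fusion via inverse-variance weighting
    u = np.stack([np.asarray(x, dtype=float) for x in uncertainties])
    r = np.asarray(reliabilities, dtype=float).reshape(-1, *([1] * (u.ndim - 1)))
    w = r / (u**2 + eps)                       # reliable, low-std oracles dominate
    return (w * u).sum(axis=0) / w.sum(axis=0)

def confidence_from_uncertainty(fused_unc, ref_std=1.0):
    # Step 4: continuous confidence in (0, 1]
    return 1.0 / (1.0 + fused_unc / ref_std)   # ref_std normalizes the scale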

🎚️ Collective Scoring

Why Collective Scoring?

Instead of individual oracle heuristics, we use collective scoring:

  1. All oracles combined - No single oracle decides
  2. Weighted by reliability - More reliable oracles have more influence
  3. Propagates uncertainty - Uncertainty flows through the system
  4. Continuous scores - No hard cutoffs

Formula

# For each pixel/frame:
# 1. Collect oracle uncertainties (stds) and reliabilities
uncertainties = [arkit_unc, ba_unc, lidar_unc, ...]
reliabilities = [0.8, 0.95, 0.98, ...]
eps = 1e-8  # guards against division by zero

# 2. Inverse variance weighting
weights = [r / (u**2 + eps) for r, u in zip(reliabilities, uncertainties)]
total_weight = sum(weights)

# 3. Fused uncertainty (inverse-variance weighted mean)
fused_unc = sum(w * u for w, u in zip(weights, uncertainties)) / total_weight

# 4. Collective confidence
confidence = 1.0 / (1.0 + normalized_uncertainty)
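
For intuition, a worked two-oracle example with illustrative numbers (treating the fused value as already normalized):

# ARKit: std 0.5, reliability 0.8; LiDAR: std 0.1, reliability 0.98
weights = [0.8 / 0.5**2, 0.98 / 0.1**2]              # [3.2, 98.0] - LiDAR dominates
total_weight = sum(weights)                          # 101.2
fused_unc = (3.2 * 0.5 + 98.0 * 0.1) / total_weight  # ≈ 0.113
confidence = 1.0 / (1.0 + fused_unc)                 # ≈ 0.90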

📈 Uncertainty Propagation

What Gets Propagated

  • Pose uncertainty - 6D (3 rotation + 3 translation)
  • Depth uncertainty - Per-pixel depth std
  • Covariance matrices - Full uncertainty estimates
  • Confidence scores - Continuous [0.0-1.0]
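
For example, downstream code can read per-axis pose uncertainty out of the covariance in the mask structure above (a hypothetical sketch; results uses the key names shown earlier):

import numpy as np

pose_cov = results['pose_covariance']                        # (N, 6, 6)
pose_std = np.sqrt(np.diagonal(pose_cov, axis1=1, axis2=2))  # (N, 6) per-axis std
rot_std, trans_std = pose_std[:, :3], pose_std[:, 3:]        # 3 rotation + 3 translation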

How It's Used

  1. Training - Weighted by confidence (see the sketch after this list)

    loss = confidence * prediction_error
    
  2. Inference - Uncertainty estimates available

    prediction_with_uncertainty = {
        'value': da3_prediction,
        'uncertainty': propagated_uncertainty,
        'confidence': confidence_score,
    }
    
  3. Downstream tasks - Covariance available

    # Can use for:
    # - Uncertainty-aware filtering
    # - Probabilistic SLAM
    # - Confidence-based visualization
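
Tying (1) together, a confidence-weighted loss can be written directly in PyTorch. This is a minimal sketch with a hypothetical helper name, not the actual oracle_losses implementation; normalizing by the confidence sum keeps the loss scale stable as confidence varies:

import torch

def confidence_weighted_l1(pred_depth, target_depth, confidence, eps=1e-8):
    # Every pixel contributes, scaled by its continuous confidence
    per_pixel = torch.abs(pred_depth - target_depth)    # (N, H, W)
    weighted = confidence * per_pixel                   # continuous weighting
    return weighted.sum() / (confidence.sum() + eps)    # normalize by total weight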
    

🚀 Usage

Basic Usage

from ylff.utils.oracle_uncertainty import OracleUncertaintyPropagator

# Initialize
propagator = OracleUncertaintyPropagator()

# Propagate uncertainty
results = propagator.propagate_uncertainty(
    da3_poses=da3_poses,
    da3_depth=da3_depth,
    intrinsics=intrinsics,
    arkit_poses=arkit_poses,
    ba_poses=ba_poses,
    lidar_depth=lidar_depth,
)

# Get continuous confidence
confidence = results['collective_confidence']  # (N, H, W) [0.0-1.0]
uncertainty = results['collective_uncertainty']  # (N, H, W)

Training with Uncertainty

from ylff.utils.oracle_losses import oracle_uncertainty_ensemble_loss

# Use continuous confidence (not binary rejection)
loss_dict = oracle_uncertainty_ensemble_loss(
    da3_output={'poses': pred_poses, 'depth': pred_depth},
    oracle_targets={'poses': target_poses, 'depth': target_depth},
    uncertainty_results={
        'pose_confidence': pose_conf,  # (N,) [0.0-1.0]
        'depth_confidence': depth_conf,  # (N, H, W) [0.0-1.0]
    },
    use_uncertainty_weighting=True,  # Weight by confidence
)

# All pixels used, weighted by confidence
total_loss = loss_dict['total_loss']

💡 Key Insights

1. Continuous vs Binary

  • Binary: Hard cutoff, all-or-nothing, information loss
  • Continuous: Smooth scores, uncertainty preserved, better training

2. Collective vs Individual

  • Individual: Each oracle has its own threshold, hard to combine
  • Collective: All oracles together, weighted fusion, consistent

3. Uncertainty Propagation

  • No propagation: Uncertainty lost, can't use downstream
  • With propagation: Uncertainty flows, covariance available, enables uncertainty-aware training

📊 Example

# Old: binary rejection
if confidence < 0.7:
    weight = 0.0  # Pixel discarded entirely
else:
    weight = 1.0  # Pixel used at full weight

# New: continuous uncertainty
confidence = 0.65  # Below the old threshold, but still useful
weight = confidence  # Pixel used with weight 0.65

# Even low-confidence pixels contribute (just less)
confidence = 0.3  # Low confidence
weight = confidence  # Pixel used with weight 0.3

✨ Summary

The new system:

  1. Propagates uncertainty - Continuous confidence, not binary rejection
  2. Uses collective scoring - All oracles together, not individual heuristics
  3. Provides covariance - Full uncertainty estimates for downstream use
  4. Enables uncertainty-aware training - Loss weighted by propagated uncertainty

This is more principled, preserves information, and enables better training! 🚀