Oracle Uncertainty Propagation - Summary
What Changed
The oracle system was redesigned from binary accept/reject decisions to continuous uncertainty propagation, using collective scoring across all oracles instead of per-oracle heuristics.
Key Differences
Old Approach: Binary Rejection
# Individual oracle heuristics, one hard threshold per oracle
if rotation_error < 2.0:  # hard threshold (degrees)
    arkit_agrees = True
else:
    arkit_agrees = False

# Binary voting (True/False counted as 1/0)
if (arkit_agrees + lidar_agrees) / 2 >= 0.7:
    confidence = 1.0            # Accept
    use_for_training = True
else:
    confidence = 0.0            # Reject
    use_for_training = False    # Discarded entirely
Problems:
- Hard thresholds (arbitrary cutoffs)
- Binary decisions (all-or-nothing)
- No uncertainty propagation
- Individual heuristics (each oracle has its own threshold)
- Information loss (uncertainty is discarded)
New Approach: Continuous Uncertainty Propagation
# Collective scoring with per-oracle uncertainty models
arkit_uncertainty = compute_uncertainty(rotation_error, arkit_std)
lidar_uncertainty = compute_uncertainty(depth_error, lidar_std)

# Bayesian fusion (inverse-variance weighting)
fused_uncertainty = fuse_uncertainties(
    [arkit_uncertainty, lidar_uncertainty],
    weights=[arkit_reliability, lidar_reliability],
)

# Continuous confidence: decreases smoothly with normalized uncertainty
confidence = 1.0 / (1.0 + normalized_uncertainty)  # in (0.0, 1.0]

# All pixels are used, weighted by confidence
use_for_training = True     # Always
loss_weight = confidence    # Continuous weighting
Benefits:
- Continuous scores (0.0-1.0, not just 0 or 1)
- Uncertainty propagation (covariance estimates)
- Collective scoring (all oracles considered together)
- No arbitrary thresholds
- Information preserved (uncertainty is propagated)
Confidence Masks Explained
What Are Confidence Masks?
Confidence masks are continuous scores (0.0-1.0) that indicate how much to trust each DA3 prediction based on oracle agreement. They propagate uncertainty rather than making binary decisions.
Structure
confidence_mask = {
    # Continuous confidence scores
    'collective_confidence': (N, H, W) float,   # [0.0-1.0]
    'pose_confidence':       (N,)      float,   # Frame-level [0.0-1.0]
    'depth_confidence':      (N, H, W) float,   # Pixel-level [0.0-1.0]

    # Uncertainty estimates
    'collective_uncertainty': (N, H, W) float,  # Fused uncertainty u (confidence = 1 / (1 + u))
    'pose_uncertainty':       (N, 6)    float,  # 6D pose uncertainty
    'depth_uncertainty':      (N, H, W) float,  # Depth std in meters

    # Covariance matrices
    'pose_covariance':  (N, 6, 6) float,        # Full pose covariance
    'depth_covariance': (N, H, W) float,        # Per-pixel depth variance
}
How They're Computed
- Oracle Uncertainty Models - Each oracle has a base uncertainty (std)
- Error-Scaled Uncertainty - Uncertainty grows with the error magnitude
- Bayesian Fusion - Oracles are combined with inverse-variance weighting
- Confidence from Uncertainty - confidence = 1.0 / (1.0 + normalized_uncertainty) (see the sketch after this list)
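A minimal end-to-end sketch of these four steps, assuming a simple error-scaled noise model; compute_uncertainty, error_scale, and reference_std are illustrative assumptions here, not the module's actual API:

import numpy as np

def compute_uncertainty(error, base_std, error_scale=0.5):
    # Error-scaled model: the oracle's base noise (std) grows with the
    # observed error magnitude. `error_scale` is an assumed tuning constant.
    return base_std * (1.0 + error_scale * abs(error))

# Steps 1-2: per-oracle uncertainties (numbers are illustrative)
arkit_unc = compute_uncertainty(error=1.2, base_std=0.8)    # pose oracle
lidar_unc = compute_uncertainty(error=0.05, base_std=0.02)  # depth oracle

# Step 3: Bayesian fusion via inverse-variance weighting
u = np.array([arkit_unc, lidar_unc])
w = 1.0 / (u**2 + 1e-8)
fused_unc = float(np.sum(w * u) / np.sum(w))

# Step 4: continuous confidence from a normalized uncertainty
reference_std = 1.0                        # assumed normalization scale
normalized_unc = fused_unc / reference_std
confidence = 1.0 / (1.0 + normalized_unc)  # in (0, 1]

With these illustrative numbers the low-noise LiDAR oracle dominates the fusion, so the fused uncertainty stays small and confidence lands near 1.0.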
Collective Scoring
Why Collective Scoring?
Instead of individual oracle heuristics, we use collective scoring:
- All oracles combined - No single oracle decides
- Weighted by reliability - More reliable oracles have more influence
- Propagates uncertainty - Uncertainty flows through the system
- Continuous scores - No hard cutoffs
Formula
# For each pixel/frame:
# 1. Collect oracle uncertainties and prior reliabilities
uncertainties = [arkit_unc, ba_unc, lidar_unc, ...]
reliabilities = [0.8, 0.95, 0.98, ...]

# 2. Inverse-variance weighting (reliable, low-noise oracles dominate)
weights = [r / (u**2 + eps) for r, u in zip(reliabilities, uncertainties)]
total_weight = sum(weights)

# 3. Fused uncertainty (inverse-variance-weighted mean)
fused_unc = sum(w * u for w, u in zip(weights, uncertainties)) / total_weight

# 4. Collective confidence
confidence = 1.0 / (1.0 + normalized_uncertainty)
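The same formula as a self-contained, runnable function; this fuse_uncertainties is a sketch mirroring the pseudocode above and assumes uncertainties already on a normalized scale, not the module's actual implementation:

import numpy as np

def fuse_uncertainties(uncertainties, reliabilities, eps=1e-8):
    # Reliability-scaled inverse-variance weighting, steps 1-4 above.
    u = np.asarray(uncertainties, dtype=np.float64)
    r = np.asarray(reliabilities, dtype=np.float64)
    w = r / (u**2 + eps)
    fused_unc = np.sum(w * u) / np.sum(w)
    confidence = 1.0 / (1.0 + fused_unc)  # assumes u is pre-normalized
    return fused_unc, confidence

fused, conf = fuse_uncertainties([0.5, 0.2, 0.1], [0.8, 0.95, 0.98])
print(f"fused={fused:.3f}, confidence={conf:.3f}")  # low-noise oracles dominate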
Uncertainty Propagation
What Gets Propagated
- Pose uncertainty - 6D (3 rotation + 3 translation)
- Depth uncertainty - Per-pixel depth standard deviation
- Covariance matrices - Full uncertainty estimates (a frame-level reduction is sketched below)
- Confidence scores - Continuous [0.0-1.0]
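For the pose branch, one plausible way to collapse the full (6, 6) covariance into the frame-level confidence listed above; the trace-based reduction and the scale constant are assumptions for illustration:

import numpy as np

def pose_confidence_from_covariance(pose_cov, scale=1.0):
    # Summarize the (6, 6) pose covariance by the RMS of its diagonal
    # variances, then map to (0, 1]. `scale` normalizes units (assumed).
    pose_std = np.sqrt(np.trace(pose_cov) / 6.0)
    return 1.0 / (1.0 + pose_std / scale)

cov = np.diag([0.01, 0.01, 0.02, 0.001, 0.001, 0.002])  # illustrative covariance
print(pose_confidence_from_covariance(cov))  # ~0.92: low variance, high confidence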
How It's Used
- Training - Weighted by confidence (a normalized variant is sketched after this list):
  loss = confidence * prediction_error
- Inference - Uncertainty estimates available alongside predictions:
  prediction_with_uncertainty = {
      'value': da3_prediction,
      'uncertainty': propagated_uncertainty,
      'confidence': confidence_score,
  }
- Downstream tasks - Covariance available:
  # Can use for:
  # - Uncertainty-aware filtering
  # - Probabilistic SLAM
  # - Confidence-based visualization
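As a concrete example of the training case, a minimal confidence-weighted loss; normalizing by the summed confidence is an added assumption that keeps the loss scale independent of how many pixels happen to be trusted:

import numpy as np

def confidence_weighted_loss(prediction_error, confidence, eps=1e-8):
    # Every pixel contributes, scaled by its confidence; dividing by the
    # total confidence keeps the magnitude comparable across batches.
    return np.sum(confidence * prediction_error) / (np.sum(confidence) + eps)

err = np.array([0.1, 0.5, 0.2])    # per-pixel errors (illustrative)
conf = np.array([0.9, 0.3, 0.65])  # continuous confidences
print(confidence_weighted_loss(err, conf))  # low-confidence pixels count less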
Usage
Basic Usage
from ylff.utils.oracle_uncertainty import OracleUncertaintyPropagator
# Initialize
propagator = OracleUncertaintyPropagator()
# Propagate uncertainty
results = propagator.propagate_uncertainty(
    da3_poses=da3_poses,
    da3_depth=da3_depth,
    intrinsics=intrinsics,
    arkit_poses=arkit_poses,
    ba_poses=ba_poses,
    lidar_depth=lidar_depth,
)
# Get continuous confidence
confidence = results['collective_confidence'] # (N, H, W) [0.0-1.0]
uncertainty = results['collective_uncertainty'] # (N, H, W)
Training with Uncertainty
from ylff.utils.oracle_losses import oracle_uncertainty_ensemble_loss
# Use continuous confidence (not binary rejection)
loss_dict = oracle_uncertainty_ensemble_loss(
    da3_output={'poses': pred_poses, 'depth': pred_depth},
    oracle_targets={'poses': target_poses, 'depth': target_depth},
    uncertainty_results={
        'pose_confidence': pose_conf,    # (N,)      [0.0-1.0]
        'depth_confidence': depth_conf,  # (N, H, W) [0.0-1.0]
    },
    use_uncertainty_weighting=True,  # Weight the loss by confidence
)
# All pixels used, weighted by confidence
total_loss = loss_dict['total_loss']
Key Insights
1. Continuous vs Binary
- Binary: Hard cutoff, all-or-nothing, information loss
- Continuous: Smooth scores, uncertainty preserved, better training
2. Collective vs Individual
- Individual: Each oracle has its own threshold, hard to combine
- Collective: All oracles together, weighted fusion, consistent
3. Uncertainty Propagation
- No propagation: Uncertainty lost, can't use downstream
- With propagation: Uncertainty flows through the system, covariance is available, and training can be uncertainty-aware (a toy downstream use is sketched below)
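For instance, a hypothetical downstream consumer: confidence-based masking for visualization. The 0.5 threshold here is purely a display choice in this sketch, not a training cutoff:

import numpy as np

# Hypothetical downstream use of the propagated confidence.
depth = np.random.rand(1, 4, 4)       # stand-in for (N, H, W) predicted depth
confidence = np.random.rand(1, 4, 4)  # stand-in collective confidence
display_depth = np.where(confidence > 0.5, depth, np.nan)  # hide noisy pixels
# Training, by contrast, still uses every pixel with continuous weights.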
Example
# Old: binary rejection (reject_pixel/use_pixel are illustrative helpers)
if confidence < 0.7:
    reject_pixel()            # Discarded entirely
else:
    use_pixel(weight=1.0)     # Used at full weight

# New: continuous uncertainty
confidence = 0.65                # Below the old threshold, but still useful
use_pixel(weight=confidence)     # Used with weight=0.65

# Even low-confidence pixels contribute (just less)
confidence = 0.3                 # Low confidence
use_pixel(weight=confidence)     # Used with weight=0.3
Summary
The new system:
- Propagates uncertainty - Continuous confidence, not binary rejection
- Uses collective scoring - All oracles together, not individual heuristics
- Provides covariance - Full uncertainty estimates for downstream use
- Enables uncertainty-aware training - Loss weighted by propagated uncertainty
This is more principled, preserves information, and enables better training!