# Oracle Uncertainty Propagation: Continuous Confidence and Covariance
## 🎯 Overview
Instead of binary rejection (above/below threshold), the system now propagates continuous uncertainty from all oracle sources using collective scoring (Bayesian fusion). This provides:
- ✅ Continuous confidence scores - Not just pass/fail
- ✅ Uncertainty propagation - Covariance estimates for all predictions
- ✅ Collective scoring - All oracles combined, not individual heuristics
- ✅ Uncertainty-aware training - Loss weighted by propagated uncertainty
## Key Difference: Binary vs Continuous
### Old Approach (Binary Rejection)
```python
# Individual oracle heuristics with hard thresholds
arkit_agrees = rotation_error < 2.0  # degrees (ARKit threshold)
lidar_agrees = depth_error < 0.1     # meters (LiDAR threshold)

# Binary voting
if (arkit_agrees + lidar_agrees) / 2 >= 0.7:
    confidence = 1.0  # Accept
else:
    confidence = 0.0  # Reject
```
Problems:
- ❌ Hard thresholds (arbitrary cutoffs)
- ❌ Binary decisions (no gradation)
- ❌ No uncertainty propagation
- ❌ Individual heuristics (not collective)
### New Approach (Uncertainty Propagation)
```python
# Collective scoring with uncertainty
arkit_uncertainty = compute_uncertainty(rotation_error, arkit_std)
lidar_uncertainty = compute_uncertainty(depth_error, lidar_std)

# Bayesian fusion (inverse-variance weighting)
fused_uncertainty = fuse_uncertainties(
    [arkit_uncertainty, lidar_uncertainty],
    weights=[arkit_reliability, lidar_reliability],
)

# Continuous confidence (inverse of normalized uncertainty)
confidence = 1.0 / (1.0 + normalized_uncertainty)
```
Benefits:
- ✅ Continuous scores (0.0-1.0)
- ✅ Uncertainty propagation (covariance estimates)
- ✅ Collective scoring (all oracles together)
- ✅ No arbitrary thresholds
## Confidence Masks Explained
### What Are Confidence Masks?
Confidence masks are per-pixel (or per-frame) scores that indicate how much to trust each DA3 prediction based on oracle agreement.
### Current Implementation
```python
confidence_mask = {
    'collective_confidence': (N, H, W) float,   # [0.0-1.0] - Combined confidence
    'collective_uncertainty': (N, H, W) float,  # Uncertainty (inverse of confidence)
    'pose_confidence': (N,) float,              # Frame-level pose confidence
    'depth_confidence': (N, H, W) float,        # Pixel-level depth confidence
    'pose_uncertainty': (N, 6) float,           # 6D pose uncertainty (3 rot + 3 trans)
    'depth_uncertainty': (N, H, W) float,       # Depth uncertainty (std) in meters
    'pose_covariance': (N, 6, 6) float,         # Full pose covariance matrices
    'depth_covariance': (N, H, W) float,        # Depth variance (uncertainty^2)
}
```
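As a shape-level illustration, here is a minimal sketch (assuming NumPy; the values are random placeholders, not real propagator output) of a subset of this schema for N=2 frames at a toy 4x4 resolution:

```python
import numpy as np

N, H, W = 2, 4, 4  # frames, height, width (toy sizes)
rng = np.random.default_rng(0)

depth_unc = rng.uniform(0.01, 0.5, size=(N, H, W))  # std in meters
confidence_mask = {
    'collective_confidence': rng.uniform(0.0, 1.0, size=(N, H, W)),
    'pose_confidence': rng.uniform(0.0, 1.0, size=(N,)),
    'pose_uncertainty': rng.uniform(0.0, 0.1, size=(N, 6)),
    'pose_covariance': np.stack([np.eye(6) * 1e-3] * N),
    'depth_uncertainty': depth_unc,
    'depth_covariance': depth_unc ** 2,  # variance = std^2
}

# The covariance fields are consistent with the uncertainty fields.
assert confidence_mask['pose_covariance'].shape == (N, 6, 6)
assert np.allclose(confidence_mask['depth_covariance'], depth_unc ** 2)
```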
### How Confidence is Computed
#### 1. Oracle Uncertainty Models
Each oracle has a base uncertainty (standard deviation):
```python
arkit_pose_uncertainty = (0.017, 0.05)  # ~1° rotation (rad), 5 cm translation (m)
ba_pose_uncertainty = (0.009, 0.02)     # ~0.5° rotation, 2 cm translation
lidar_depth_uncertainty = 0.02          # 2 cm depth uncertainty (m)
```
#### 2. Error-Scaled Uncertainty
Uncertainty increases with error magnitude:
```python
# Base uncertainty
base_uncertainty = oracle_std

# Scale by error magnitude
error_scale = actual_error / oracle_std
scaled_uncertainty = base_uncertainty * (1.0 + error_scale)
```
#### 3. Bayesian Fusion
Combine multiple oracles using inverse variance weighting:
```python
# Weight by inverse variance (more certain = higher weight)
weight_i = reliability_i / (uncertainty_i ** 2 + epsilon)

# Fused uncertainty (reliability-weighted mean of the uncertainties)
fused_uncertainty = sum(weight_i * uncertainty_i) / sum(weight_i)
```
#### 4. Confidence from Uncertainty
Convert uncertainty to confidence:
```python
# Normalize uncertainty
normalized_uncertainty = uncertainty / typical_uncertainty

# Confidence: inverse of normalized uncertainty
confidence = 1.0 / (1.0 + normalized_uncertainty)
```
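Putting the four steps together: a self-contained sketch in plain Python (the helper names `scaled_uncertainty`, `fuse`, and `confidence_from` are illustrative, not the project's API; the stds and reliabilities are the example values from this document):

```python
EPS = 1e-9

def scaled_uncertainty(actual_error, oracle_std):
    """Step 2: base std grows with the observed error magnitude."""
    return oracle_std * (1.0 + actual_error / oracle_std)

def fuse(uncertainties, reliabilities):
    """Step 3: inverse-variance weighted mean of uncertainties."""
    weights = [r / (u ** 2 + EPS) for r, u in zip(reliabilities, uncertainties)]
    return sum(w * u for w, u in zip(weights, uncertainties)) / sum(weights)

def confidence_from(uncertainty, typical_uncertainty):
    """Step 4: map normalized uncertainty to a (0, 1] confidence."""
    return 1.0 / (1.0 + uncertainty / typical_uncertainty)

# Example: ARKit rotation error 0.034 rad (~2°) vs. base std 0.017 rad,
# BA rotation error 0.009 rad vs. base std 0.009 rad.
arkit_u = scaled_uncertainty(0.034, 0.017)  # 0.051
ba_u = scaled_uncertainty(0.009, 0.009)     # 0.018
fused = fuse([arkit_u, ba_u], [0.8, 0.95])
conf = confidence_from(fused, typical_uncertainty=0.017)

# The fused value lies between the inputs, pulled toward the
# more certain (lower-std) BA estimate.
assert ba_u < fused < arkit_u
assert 0.0 < conf < 1.0
```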
## 🏗️ Collective Scoring
### Why Collective Scoring?
Instead of individual oracle heuristics, we use collective scoring that:
- Combines all oracles - No single oracle decides
- Weighted by reliability - More reliable oracles have more influence
- Propagates uncertainty - Uncertainty flows through the system
- Continuous scores - No hard cutoffs
### Oracle Reliability Weights
```python
oracle_reliability = {
    'ba_pose': 0.95,                # Very high (most robust)
    'lidar_depth': 0.98,            # Highest (direct measurement)
    'geometric_consistency': 0.85,  # High (enforces geometry)
    'arkit_pose': 0.80,             # High (when tracking is good)
    'imu': 0.70,                    # Medium (indirect)
}
```
### Fusion Formula
```python
# For each pixel/frame:
# 1. Collect oracle uncertainties
uncertainties = [arkit_unc, ba_unc, lidar_unc, ...]
reliabilities = [0.8, 0.95, 0.98, ...]

# 2. Inverse-variance weighting
weights = [r / (u ** 2 + eps) for r, u in zip(reliabilities, uncertainties)]
total_weight = sum(weights)

# 3. Fused uncertainty
fused_unc = sum(w * u for w, u in zip(weights, uncertainties)) / total_weight

# 4. Collective confidence
normalized_uncertainty = fused_unc / typical_uncertainty
confidence = 1.0 / (1.0 + normalized_uncertainty)
```
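The same fusion vectorizes naturally over pixels. A NumPy sketch (the two uncertainty maps and the reliability values are illustrative), fusing hypothetical LiDAR and geometric-consistency maps:

```python
import numpy as np

EPS = 1e-9
H, W = 4, 4

# Hypothetical per-pixel uncertainty maps (std, meters) from two oracles.
lidar_unc = np.full((H, W), 0.02)
geom_unc = np.full((H, W), 0.08)
geom_unc[0, 0] = 0.01  # one pixel where geometry is very certain

uncs = np.stack([lidar_unc, geom_unc])        # (2, H, W)
rels = np.array([0.98, 0.85])[:, None, None]  # broadcast reliabilities

weights = rels / (uncs ** 2 + EPS)            # inverse-variance weights
fused = (weights * uncs).sum(axis=0) / weights.sum(axis=0)  # (H, W)

# Fused uncertainty follows whichever oracle is more certain per pixel,
# and always stays between the per-pixel min and max of the inputs.
assert fused[0, 0] < fused[1, 1]
assert (fused <= uncs.max(axis=0)).all() and (fused >= uncs.min(axis=0)).all()
```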
## Uncertainty Propagation
### Pose Uncertainty
6D Pose Uncertainty:
- 3 rotation components (roll, pitch, yaw)
- 3 translation components (x, y, z)
```python
pose_uncertainty = (N, 6)    # [rot_x, rot_y, rot_z, trans_x, trans_y, trans_z]
pose_covariance = (N, 6, 6)  # Full covariance matrix
```
Propagation:
- Error magnitude → scaled uncertainty
- Multiple oracles → fused uncertainty
- Uncertainty → confidence score
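For the covariance side, a minimal sketch (assuming NumPy, and assuming independence between the six pose components, so the covariance is diagonal; a fuller implementation could carry cross terms from the fusion step):

```python
import numpy as np

N = 3
# Per-frame 6D pose std: [rot_x, rot_y, rot_z, trans_x, trans_y, trans_z],
# using the example base stds from this document.
pose_uncertainty = np.tile([0.017, 0.017, 0.017, 0.05, 0.05, 0.05], (N, 1))

# Diagonal covariance: variance = std^2 on the diagonal, zeros elsewhere.
pose_covariance = np.stack([np.diag(var) for var in pose_uncertainty ** 2])

assert pose_covariance.shape == (N, 6, 6)
assert np.isclose(pose_covariance[0, 0, 0], 0.017 ** 2)
assert pose_covariance[0, 0, 1] == 0.0  # no cross terms assumed
```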
### Depth Uncertainty
Per-Pixel Depth Uncertainty:
- Continuous uncertainty in meters (std)
- Covariance (variance = uncertainty^2)
```python
depth_uncertainty = (N, H, W)  # Depth std in meters
depth_covariance = (N, H, W)   # Depth variance
```
Propagation:
- LiDAR errors → depth uncertainty
- Geometric consistency → depth uncertainty
- Fused → collective depth uncertainty
### Combined Uncertainty
Collective Confidence:
- Combines pose + depth + IMU uncertainties
- Square root of the product of confidences (a geometric-mean-style combination under an independence assumption)
- Continuous [0.0-1.0] confidence scores

```python
collective_confidence = sqrt(pose_conf * depth_conf * imu_conf)
```
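A small NumPy sketch of this combination, broadcasting the frame-level pose and IMU scores over the per-pixel depth scores (toy values):

```python
import numpy as np

N, H, W = 2, 4, 4
pose_conf = np.array([0.9, 0.6])      # (N,) frame-level
depth_conf = np.full((N, H, W), 0.8)  # (N, H, W) pixel-level
imu_conf = np.array([0.7, 0.7])       # (N,) frame-level

# Broadcast frame-level scores over pixels, then combine.
collective = np.sqrt(
    pose_conf[:, None, None] * depth_conf * imu_conf[:, None, None]
)

assert collective.shape == (N, H, W)
# Frame 0 has higher pose confidence, so higher collective confidence.
assert collective[0].mean() > collective[1].mean()
```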
## Usage
### Basic Usage
```python
from ylff.utils.oracle_uncertainty import OracleUncertaintyPropagator

# Initialize
propagator = OracleUncertaintyPropagator(
    arkit_pose_uncertainty=(0.017, 0.05),  # (rot_rad, trans_m)
    ba_pose_uncertainty=(0.009, 0.02),
    lidar_depth_uncertainty=0.02,          # meters
)

# Propagate uncertainty
results = propagator.propagate_uncertainty(
    da3_poses=da3_poses,      # (N, 3, 4) w2c
    da3_depth=da3_depth,      # (N, H, W)
    intrinsics=intrinsics,    # (N, 3, 3)
    arkit_poses=arkit_poses,  # (N, 4, 4) c2w
    ba_poses=ba_poses,        # (N, 3, 4) w2c
    lidar_depth=lidar_depth,  # (N, H, W)
)

# Get confidence masks
confidence = results['collective_confidence']    # (N, H, W) [0.0-1.0]
uncertainty = results['collective_uncertainty']  # (N, H, W)
pose_covariance = results['pose_covariance']     # (N, 6, 6)
depth_covariance = results['depth_covariance']   # (N, H, W)
```
### Training with Uncertainty
```python
# Use confidence for weighted loss (not binary rejection)
loss = uncertainty_weighted_loss(
    predictions=da3_predictions,
    targets=oracle_targets,
    confidence=confidence,    # Continuous [0.0-1.0]
    uncertainty=uncertainty,  # For covariance-aware training
)

# Or use covariance directly
loss = covariance_aware_loss(
    predictions=da3_predictions,
    targets=oracle_targets,
    covariance=depth_covariance,  # (N, H, W)
)
```
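For reference, a hypothetical NumPy re-implementation of a confidence-weighted loss in the spirit of `uncertainty_weighted_loss` above (the project's own function may have a different signature and behavior; this sketch assumes a simple confidence-scaled L1):

```python
import numpy as np

def uncertainty_weighted_loss(predictions, targets, confidence, eps=1e-9):
    """Confidence-weighted L1: low-confidence pixels contribute less.

    Hypothetical re-implementation for illustration only.
    """
    per_pixel = np.abs(predictions - targets)
    return (confidence * per_pixel).sum() / (confidence.sum() + eps)

preds = np.array([[1.0, 2.0], [3.0, 4.0]])
targets = np.array([[1.1, 2.0], [3.0, 5.0]])
conf = np.array([[1.0, 1.0], [1.0, 0.1]])  # distrust the worst pixel

loss = uncertainty_weighted_loss(preds, targets, conf)
unweighted = np.abs(preds - targets).mean()

# Down-weighting the high-error, low-confidence pixel lowers the loss,
# instead of either keeping the pixel fully or rejecting it outright.
assert loss < unweighted
```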
## 💡 Key Insights
### 1. Continuous vs Binary
Binary rejection:
- ❌ Hard cutoff (arbitrary threshold)
- ❌ No gradation (all-or-nothing)
- ❌ Loses information (uncertainty discarded)
Continuous uncertainty:
- ✅ Smooth confidence scores
- ✅ Propagates uncertainty
- ✅ Preserves information
### 2. Collective vs Individual
Individual heuristics:
- ❌ Each oracle has its own threshold
- ❌ Hard to combine
- ❌ Inconsistent decisions
Collective scoring:
- ✅ All oracles combined
- ✅ Weighted by reliability
- ✅ Consistent fusion
### 3. Uncertainty Propagation
No propagation:
- ❌ Uncertainty lost
- ❌ Can't use for downstream tasks
- ❌ No covariance estimates
With propagation:
- ✅ Uncertainty flows through system
- ✅ Covariance estimates available
- ✅ Can use for uncertainty-aware training
## Example Output
```python
results = {
    'collective_confidence': array([[0.95, 0.87, 0.92, ...],    # High confidence
                                    [0.72, 0.65, 0.78, ...],    # Medium confidence
                                    [0.45, 0.38, 0.52, ...]]),  # Low confidence
    'collective_uncertainty': array([[0.05, 0.15, 0.09, ...],    # Low uncertainty
                                     [0.39, 0.54, 0.28, ...],    # Medium uncertainty
                                     [1.22, 1.63, 0.92, ...]]),  # High uncertainty
    'pose_confidence': array([0.92, 0.85, 0.78, ...]),  # Frame-level
    'depth_confidence': array([...]),                   # Pixel-level
    'pose_covariance': array([...]),                    # (N, 6, 6) full covariance
    'depth_covariance': array([...]),                   # (N, H, W) variance
}
```
## Summary
The new system:
- Propagates uncertainty - Continuous confidence scores, not binary rejection
- Uses collective scoring - All oracles combined, not individual heuristics
- Provides covariance - Full uncertainty estimates for downstream use
- Enables uncertainty-aware training - Loss weighted by propagated uncertainty
This is more principled, preserves information, and enables better training!