# SASP: Saliency-Augmented Static Policy (Phase II)

## About this Model

This model is a supplementary artifact for the CTU FIT master's thesis "Information-Driven Visual Navigation for UAVs using Deep Reinforcement Learning" (Vojtěch Bešťák, 2026).

It was trained with [information-driven-uav-navigation](https://gitlab.ciirc.cvut.cz/bestavoj/information-driven-uav-navigation), a research framework for visual UAV navigation using RL/IL over aerial orthophoto and land-use grid environments.

| Key | Value |
| --- | --- |
| Algorithm | PPO |
| Environment | `aerial_grid` |
| W&B group | `exp2.2-aerial-ppo-ablation-sasp_3` |
| Run ID | `y8w84vhm` |
| Reward weights | dist=1.096, view=19.445, time=-0.100, success=311.2 |
| Max steps / episode | 500 |
| Frame stack | 4 |
| Seed | 4 |
| Python (training) | 3.11.14 |
| stable-baselines3 | 2.2.1 |
| PyTorch | 2.10.0 |
| Gymnasium | 0.29.1 |
| Arena size | 1000 m |
| Success radius | 4 cells |
| Saliency channel | yes |
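
For orientation, the sketch below shows one generic way the reward weights in the table could enter a shaped per-step reward (distance progress, view/information gain, a per-step time penalty, and a terminal success bonus). The term definitions here are illustrative assumptions; the authoritative reward is implemented in the training repository.

```python
# Illustrative sketch only -- the real reward terms live in the training repo.
# Weights taken from the table above.
W_DIST, W_VIEW, W_TIME, W_SUCCESS = 1.096, 19.445, -0.100, 311.2

def shaped_reward(dist_prev: float, dist_now: float,
                  view_gain: float, reached_goal: bool) -> float:
    """Hypothetical shaped reward: progress + information gain + time cost + bonus."""
    r = W_DIST * (dist_prev - dist_now)  # positive when the UAV moves closer
    r += W_VIEW * view_gain              # e.g. newly observed saliency mass this step
    r += W_TIME                          # negative constant: per-step time penalty
    if reached_goal:
        r += W_SUCCESS                   # terminal success bonus
    return r
```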

## Quantitative Performance

Results are averaged over 500 deterministic validation episodes in continuous environments with real ČÚZK orthophotos.

| Metric | Value |
| --- | --- |
| Success Rate | 99.44% |
| Optimality Score | 0.635 |
| Average Localization Error | 51.74 m |
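
For readers unfamiliar with these metrics, the snippet below shows one common way such numbers are aggregated from per-episode evaluation logs. The field names and the optimality definition (shortest-path / flown-path ratio) are assumptions for illustration, not the thesis' exact formulas.

```python
import numpy as np

# Hypothetical per-episode records from an evaluation run.
episodes = [
    {"success": True, "shortest_m": 620.0, "flown_m": 955.0, "loc_err_m": 38.2},
    {"success": True, "shortest_m": 410.0, "flown_m": 700.0, "loc_err_m": 65.1},
]

success_rate = np.mean([e["success"] for e in episodes])
optimality = np.mean([e["shortest_m"] / e["flown_m"] for e in episodes])  # assumed definition
loc_error = np.mean([e["loc_err_m"] for e in episodes])
print(f"SR={success_rate:.2%}  optimality={optimality:.3f}  loc_err={loc_error:.2f} m")
```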

## Observation and Action Spaces

| Key | Shape | dtype | Notes |
| --- | --- | --- | --- |
| `camera` | (4, 84, 84, 3) | uint8 | Aerial orthophoto RGB crop, normalised to [0, 1] inside the extractor |
| `visited_mask` | (4, 84, 84) | uint8 | Cells visited in the current episode crop |
| `saliency` | (4, 84, 84) | uint8 | SelaVPR saliency crop |
| `goal_info` | (24,) | float32 | Egocentric telemetry and navigation state |

Action space: `Box([0, -1], [1, 1], (2,), float32)` with `[velocity ∈ [0, 1], angular_rate ∈ [-1, 1]]`.
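
For reference, the spaces above translate into Gymnasium definitions roughly as follows (a sketch: the uint8 bounds of 0–255 are assumed, and the environment in the repo constructs these internally):

```python
import numpy as np
from gymnasium import spaces

observation_space = spaces.Dict({
    "camera":       spaces.Box(0, 255, shape=(4, 84, 84, 3), dtype=np.uint8),
    "visited_mask": spaces.Box(0, 255, shape=(4, 84, 84), dtype=np.uint8),
    "saliency":     spaces.Box(0, 255, shape=(4, 84, 84), dtype=np.uint8),
    "goal_info":    spaces.Box(-np.inf, np.inf, shape=(24,), dtype=np.float32),
})

# velocity in [0, 1], angular_rate in [-1, 1]
action_space = spaces.Box(
    low=np.array([0.0, -1.0], dtype=np.float32),
    high=np.array([1.0, 1.0], dtype=np.float32),
    dtype=np.float32,
)
```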

## Standalone Inference (no `information-driven-uav-navigation` package required)

Install the minimal dependencies (versions match the training environment):

```bash
pip install "stable-baselines3==2.2.1" "torch==2.10.0" "gymnasium==0.29.1" huggingface-hub
```

Then load the extractor and the model:

```python
import importlib.util, sys, numpy as np
from huggingface_hub import hf_hub_download
from stable_baselines3 import PPO

REPO_ID = "bestak/uav-navigation-sasp"

# 1. Download and load the feature extractor (pure PyTorch, no repo import needed)
fe_path = hf_hub_download(REPO_ID, "feature_extractor.py")
spec = importlib.util.spec_from_file_location("_fe", fe_path)
mod  = importlib.util.module_from_spec(spec)
sys.modules["_fe"] = mod
spec.loader.exec_module(mod)

# 1b. Stub out drone_navigation so cloudpickle can resolve ALL saved class references
#     (lr_schedule, policy_kwargs, etc.) without the package being installed.
import types as _types
for _name in ["drone_navigation", "drone_navigation.models",
              "drone_navigation.models.feature_extractor_aerial",
              "drone_navigation.models.feature_extractor_landuse"]:
    sys.modules.setdefault(_name, _types.ModuleType(_name))
sys.modules["drone_navigation.models.feature_extractor_aerial"].AerialFeaturesExtractor = mod.AerialFeaturesExtractor

# 2. Load the model -- inject the extractor class so cloudpickle can resolve it
model = PPO.load(
    hf_hub_download(REPO_ID, "best_model.zip"),
    custom_objects={
        "features_extractor_class": mod.AerialFeaturesExtractor,
    },
    device="cpu",
)

# 3. Run a single forward pass with a dummy observation
obs = {
    "camera":       np.zeros((4, 84, 84, 3), dtype=np.uint8),
    "visited_mask": np.zeros((4, 84, 84),    dtype=np.uint8),
    "goal_info":    np.zeros(24, dtype=np.float32),
    "saliency":     np.zeros((4, 84, 84),    dtype=np.uint8),
}
action, _ = model.predict(obs, deterministic=True)
print("Action:", action)

Note: `custom_objects` overrides the cloudpickled class reference, which is why the `information-driven-uav-navigation` package is not required for loading. See `inference.py` in this repo for the full example, including environment rollouts.
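
As a quick sanity check (a hypothetical follow-up to the script above), you can confirm that the injected extractor class is the one actually attached to the loaded policy:

```python
# Continues the standalone script above.
print(type(model.policy.features_extractor).__name__)  # expected: AerialFeaturesExtractor
print(sorted(model.observation_space.spaces))          # camera, goal_info, saliency, visited_mask
```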

## Full Inference with the `information-driven-uav-navigation` Package

```bash
pip install git+https://gitlab.ciirc.cvut.cz/bestavoj/information-driven-uav-navigation.git
# also requires map data -- see the repo README for data preparation
```

```python
from huggingface_hub import hf_hub_download
from stable_baselines3 import PPO
from drone_navigation.config.experiment_config import ExperimentConfig
from drone_navigation.envs.factory import create_env

REPO_ID = "bestak/uav-navigation-sasp"

# When drone_navigation is installed, the extractor class resolves automatically
model = PPO.load(hf_hub_download(REPO_ID, "best_model.zip"), device="cpu")

cfg = ExperimentConfig.from_json(hf_hub_download(REPO_ID, "config.json"))
cfg.n_envs = 1
env = create_env(cfg)

obs, _ = env.reset()
for _ in range(cfg.max_steps):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        print("Episode done. Target reached:", info.get("is_target_reached"))
        break

env.close()
```

See `inference.py` in this repo for a more complete example with multi-episode evaluation.
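
For orientation, a condensed multi-episode version of the loop above might look like this sketch (the episode count is arbitrary, and `env`/`model` come from the previous snippet):

```python
# Sketch: success rate over several episodes, reusing env and model from above.
n_episodes, successes = 20, 0
for _ in range(n_episodes):
    obs, _ = env.reset()
    terminated = truncated = False
    while not (terminated or truncated):
        action, _ = model.predict(obs, deterministic=True)
        obs, reward, terminated, truncated, info = env.step(action)
    successes += bool(info.get("is_target_reached"))
print(f"Success rate: {successes / n_episodes:.1%}")
```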

## Training

```bash
git clone https://gitlab.ciirc.cvut.cz/bestavoj/information-driven-uav-navigation.git
cd information-driven-uav-navigation
uv sync
drone-train-rl --env_type aerial_grid ...
```
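
Equivalently, a bare-bones programmatic run can be sketched with stable-baselines3 directly. This is a minimal outline under assumptions: the hyperparameters below are placeholders, not the thesis settings, and `drone-train-rl` additionally wires up the full experiment configuration, callbacks, and W&B logging.

```python
from stable_baselines3 import PPO
from drone_navigation.config.experiment_config import ExperimentConfig
from drone_navigation.envs.factory import create_env
from drone_navigation.models.feature_extractor_aerial import AerialFeaturesExtractor

# Sketch only: placeholder hyperparameters, not the thesis settings.
cfg = ExperimentConfig.from_json("config.json")
env = create_env(cfg)
model = PPO(
    "MultiInputPolicy",
    env,
    policy_kwargs={"features_extractor_class": AerialFeaturesExtractor},
    verbose=1,
)
model.learn(total_timesteps=1_000_000)
model.save("sasp_ppo")
```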

## Citation

If you use this model, please cite the original repository:

```bibtex
@misc{information-driven-uav-navigation,
  author = {Bestak, Vojtech},
  title  = {Information-Driven Visual Navigation for UAVs using Deep Reinforcement Learning},
  year   = {2026},
  url    = {https://gitlab.ciirc.cvut.cz/bestavoj/information-driven-uav-navigation}
}
```