Method K=12 K=15 K=18
random search (N=36) 0.460 0.498 0.533
greedy classification (N=36) 0.602 0.624 0.641
greedy selection [30] (N=36) 0.618 0.637 0.647
VAS (N=36) 0.736 0.744 0.767
random search (N=64) 0.389 0.405 0.442
greedy classification (N=64) 0.606 0.612 0.618
greedy selection [30] (N=64) 0.612 0.618 0.626
VAS (N=64) 0.724 0.738 0.749
Table 19: ESR comparisons for the ship target class on the DOTA dataset.
Method K=12 K=15 K=18
random search (N=36) 0.491 0.564 0.590
greedy classification (N=36) 0.602 0.629 0.657
greedy selection [30] (N=36) 0.609 0.638 0.665
VAS (N=36) 0.757 0.764 0.776
random search (N=64) 0.334 0.379 0.417
greedy classification (N=64) 0.524 0.541 0.559
greedy selection [30] (N=64) 0.531 0.552 0.576
VAS (N=64) 0.700 0.712 0.733
The results are presented in Tables 18 and 19, and are broadly consistent with our observations on the xView dataset, with VAS outperforming all baselines by ∼16–25%, and with the greatest improvement typically coming on more difficult tasks (small K compared to N).
G. Search Performance Comparison with
Another Policy Learning Algorithm (PPO)
We conduct experiments with another policy learning algorithm, PPO [23]. The idea behind PPO is to constrain each policy update with a new objective function, the clipped surrogate objective, which restricts the policy change to a small range [1−ϵ, 1+ϵ]. Here, ϵ is a hyperparameter that defines this clip range. In all our experiments with PPO, we use the clip range ϵ = 0.2, as provided in the original paper [23]. We keep all other hyperparameters, including the policy architecture, fixed. We call
the resulting policy VAS-PPO. In Tables 20 and 21, we present the results of VAS-PPO and compare its performance with VAS. Our experimental findings suggest that PPO does not yield any additional benefit, despite the added complexity introduced by the clipped surrogate objective.
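For reference, the clipped surrogate objective can be sketched as follows. This is an illustrative NumPy formulation of the standard PPO objective, not the training code used in our experiments; the function name and interface are our own.

```python
import numpy as np

def ppo_clipped_objective(new_logp, old_logp, advantages, eps=0.2):
    """Clipped surrogate objective of PPO.

    The probability ratio r = exp(new_logp - old_logp) is clipped to
    [1 - eps, 1 + eps]; taking the minimum of the clipped and unclipped
    terms removes the incentive to move the policy far from the old one
    in a single update.
    """
    ratio = np.exp(new_logp - old_logp)        # r_t(theta)
    unclipped = ratio * advantages             # r_t * A_t
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantages
    return np.mean(np.minimum(unclipped, clipped))
```

With ϵ = 0.2 as used above, a ratio of 2 with a positive advantage is clipped down to 1.2, so the gradient through the ratio vanishes once the policy has moved past the clip boundary.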
Table 20: ANT comparisons with different policy learning algorithms for the small car target class on xView.
Method C=25 C=50 C=75
VAS-PPO (N=30) 4.15 6.82 9.16
VAS (N=30) 4.61 7.49 9.88
VAS-PPO (N=48) 4.03 6.87 9.02
VAS (N=48) 4.56 7.45 9.63
Table 21: ANT comparisons with different policy learning algorithms for the large vehicle target class on DOTA.
Method C=25 C=50 C=75
VAS-PPO (N=36) 4.01 6.24 7.56
VAS (N=36) 4.63 6.79 8.07
VAS-PPO (N=64) 4.89 7.93 10.12
VAS (N=64) 5.33 8.47 10.51
H. Sensitivity Analysis of VAS
We further analyze the behavior of VAS when we intervene on the outcomes o of past search queries in the following ways: (i) regardless of the "true" outcome, we set the query outcome to "unsuccessful" at every stage of the search process and observe the change in the exploration behavior of VAS, as depicted in Figures 10, 11, and 12; (ii) similarly, we force the query outcome to "successful" at each stage and observe how this impacts the exploration behavior of VAS, also depicted in Figures 10, 11, and 12.
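The interventions above can be sketched as a wrapper around the search loop. The `policy` and `env` interfaces here, and the ±1 encoding of outcomes in the grid state, are illustrative assumptions for exposition, not our exact implementation:

```python
import numpy as np

def run_search_with_intervention(policy, env, budget, forced_outcome=None):
    """Roll out a search policy, optionally overriding query outcomes.

    forced_outcome: None to use the true outcomes, 0 to force every
    query to be "unsuccessful", 1 to force every query to be
    "successful". The (possibly overridden) value is what gets written
    into the grid state vector that the policy observes.
    """
    state = np.zeros(env.num_grids)  # grid state is initialized to all zeros
    visited = []
    for _ in range(budget):
        cell = policy(state)                  # pick the next grid cell to query
        true_outcome = env.query(cell)        # actual label of that cell
        outcome = true_outcome if forced_outcome is None else forced_outcome
        state[cell] = 1 if outcome else -1    # record the intervened feedback
        visited.append(cell)
    return visited
```

Because only the recorded feedback is intervened on, the imagery and remaining budget seen by the policy are identical across scenarios, which is why early steps coincide before the trajectories diverge.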
Early VAS steps are similar between the strictly positive and strictly negative feedback scenarios. This is due to the similarity of the grid prediction network's input in the early stages of VAS: the imagery and search budget are constant between the two scenarios, and the grid state vectors are mostly the same (as both are initialized to all zeros). From step 7 onward we see VAS diverge. A pattern that emerges is that when VAS receives strictly negative feedback, it begins to explore randomly. After every unsuccessful query, VAS learns that similar areas are unlikely to contain objects of interest, and so it rarely visits similar areas. This is clearest in Figure 12, where at step 11 it explores an area that is completely water; it then visits a distinctive area that is mostly water but with some land (and no harbor infrastructure). In the strictly positive feedback scenario, we see VAS aggressively exploit areas that are similar to ones it has already seen, as those areas have been flagged as containing objects of interest. Consider the bottom row of each of Figures 10, 11, and 12. In Figure 10, after a burn-in phase, we see VAS looking at roadsides starting in step 9. In Figure 11, VAS seeks to capture roads; by step 15, it has assigned an elevated probability to nearly the entire circular road in the upper left of the image. In Figure 12, VAS seeks out areas that look like harbors. Together, these examples demonstrate a key feature of reinforcement learning: the ability to both explore and exploit. They also show that VAS is sensitive to query results and uses the grid state to guide its search. In Figures 13, 14, and 15, we provide similar visualizations of VAS under a Manhattan-distance-based query cost.
I. Efficacy of TTA on Search Tasks Involving
a Large Number of Grids
We conduct experiments with the number of grids N set to 900. We train VAS using small car as the target class while evaluating with building as the target class. We report the results in Table 22. We observe a significant improvement (up to