The output of the search policy ψ is a probability distribution over A, with ψ_j(x, o, B; θ) the probability that grid cell j ∈ A is selected by the policy ψ.
In general, ψ will output a positive probability over all possible grid cells j ∈ A. However, in our setting there is no benefit to querying any grid cell j ∈ A that has previously been queried, i.e., for which o(j) ≠ 0. Consequently, both at training time (when the next decision is generated) and at decision time, we restrict consideration only to j ∈ A with o(j) = 0, that is, cells which have yet to be queried, and simply renormalize the output probabilities of ψ. Formally, we define ψ′_j(x, o, B; θ) = 0 for j with o(j) ≠ 0, and define

ψ′_j(x, o, B; θ) = ψ_j(x, o, B; θ) / ∑_{k ∈ A : o(k) = 0} ψ_k(x, o, B; θ).
Grid cells j are then sampled from ψ′_j at each search step during training. At decision time, on the other hand, we choose the grid cell j with the maximum value of ψ′_j.
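The masking-and-renormalization step above can be sketched in a few lines of pure Python. This is an illustrative sketch, not the paper's implementation; the function names (`renormalize`, `select_cell`) and the list-based representation of ψ and o are ours.

```python
import random

def renormalize(psi, o):
    """Zero out probabilities of already-queried cells (o[j] != 0)
    and renormalize over the remaining cells, as in the definition of psi'."""
    masked = [p if o[j] == 0 else 0.0 for j, p in enumerate(psi)]
    total = sum(masked)
    return [p / total for p in masked]

def select_cell(psi, o, training):
    """Sample from psi' during training; take the argmax at decision time."""
    psi_prime = renormalize(psi, o)
    if training:
        return random.choices(range(len(psi_prime)), weights=psi_prime, k=1)[0]
    return max(range(len(psi_prime)), key=lambda j: psi_prime[j])

# Example: 4 grid cells, cell 1 already queried.
psi = [0.1, 0.4, 0.3, 0.2]
o = [0, 1, 0, 0]
print(renormalize(psi, o))  # cell 1 gets probability 0, the rest renormalize to sum 1
```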
This approach allows us to train the policy network ψ without concern about the feasibility of particular grid cell choices at decision time. In addition, to ensure that the policy is robust to search budget uncertainty, we use randomly generated budgets C at training time for different task instances. In the case of query costs c(j, k) = 1 for all grid cells j, k, each episode has a fixed length C. In general, episodes have no fixed length, and end whenever we exhaust the total cost budget C. An overview of our proposed VAS framework is depicted in Figure 2.
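To make the episode-termination rule concrete, here is a minimal sketch of one search episode. This is our own illustration, not the paper's training loop: the policy is replaced by uniform random selection over unqueried cells, and the −1/+1 encoding of query outcomes in o is an assumed convention.

```python
import random

def run_episode(query_cost, query, num_cells, budget):
    """Run one search episode: keep querying until the remaining budget
    cannot cover another query. With unit costs this gives a fixed episode
    length of `budget` steps; with general costs the length varies."""
    o = [0] * num_cells          # 0 = unqueried
    remaining = budget
    steps = 0
    while remaining > 0 and any(v == 0 for v in o):
        j = random.choice([k for k in range(num_cells) if o[k] == 0])
        cost = query_cost(j)
        if cost > remaining:     # budget exhausted: episode ends
            break
        o[j] = 1 if query(j) else -1   # record the query outcome
        remaining -= cost
        steps += 1
    return steps

# With unit query costs, the episode length equals the budget C.
print(run_episode(lambda j: 1, lambda j: False, num_cells=16, budget=5))  # 5
```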
Next, we detail the proposed policy network architecture, and subsequently describe an adaptation of our approach for when instances at decision time follow a different distribution from those in the training data D, that is, the test-time adaptation (TTA) setting.
4.1. Policy Network Architecture
As shown in Figure 3, the policy network ψ(x, o, B; θ) is composed of two components: 1) the image feature extraction component f(x; ϕ), which maps the aerial image x to a low-dimensional latent feature representation z, and 2) the grid selection component g(z, o, B; ζ), which combines the latent image representation z with the outcomes of past search queries o and the remaining budget B to produce a probability distribution over grid cells to search in the next time step. Thus, the joint parameters of ψ are θ = (ϕ, ζ).
We use a frozen ResNet-34 [12], pretrained on ImageNet [16], as the feature extraction component f, followed by a 1×1 convolution layer. We combine this with the budget B and past query information o as follows. We apply a tiling operation to convert o into a representation with the same dimensions as the extracted features z = f(x), allowing us to effectively combine the latent image features with the auxiliary state features while preserving grid-specific spatial and query-related information. Similarly, we apply tiling to the scalar budget B to transform it to match the size of z and the tiled version of o. Finally, we concatenate the features (z, o, B) along the channel dimension and pass them through the grid prediction network g. This network consists of a 1×1 convolution to reduce dimensionality, flattening, a small MLP with ReLU activations, and a final softmax output that represents the probability distribution over grid cells. This
yields the full policy network, trained end to end via REINFORCE: ψ(x, o, B; θ) = g(f(x; ϕ), o, B; ζ).
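The tiling-and-concatenation step can be illustrated at the shape level in pure Python (a real implementation would use tensor broadcasting in a deep learning framework). The helper names and the nested-list representation of the C × H × W feature map are ours.

```python
def tile_scalar(value, h, w):
    """Tile a scalar (e.g., the remaining budget B) into an h x w channel."""
    return [[value] * w for _ in range(h)]

def build_policy_input(z, o, budget):
    """Concatenate image features z (C x H x W), the query-outcome grid o
    (H x W), and the tiled budget along the channel dimension, yielding a
    (C + 2) x H x W input for the grid-prediction network g."""
    h, w = len(z[0]), len(z[0][0])
    return z + [o] + [tile_scalar(budget, h, w)]

# Example: 3 feature channels over a 2 x 2 grid of cells.
z = [[[0.0] * 2 for _ in range(2)] for _ in range(3)]
o = [[0, 1], [0, 0]]
x = build_policy_input(z, o, budget=4)
print(len(x))  # 5 channels: 3 feature + 1 outcome + 1 budget
```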
4.2. Test-Time Adaptation
A central issue in our model, as in traditional active search, is that tasks faced at decision time may in some respects be novel, unlike tasks faced previously (e.g., those represented in the dataset D). We view this issue through the lens of test-time adaptation (TTA), in which predictions are made on data that comes from a different distribution than the training data. While a myriad of TTA techniques have been developed, they have focused almost exclusively on supervised learning problems, rather than active search settings of the kind we study. Nevertheless, two common techniques can be either directly applied, or adapted, to our setting: 1) Test-Time Training (TTT) [27] and 2) FixMatch [26].
TTT makes use of a self-supervised objective at both training and prediction time by adding a self-supervised head r as a component of the policy model. The associated self-supervised loss (which is added during training) is a quadratic reconstruction loss ∥x − r(z; η)∥², where z = f(x; ϕ) is the latent embedding of the input aerial image x and η are the parameters of r. At decision time, a new task image x is used to update the policy parameters using just the reconstruction loss before we begin the search. Adaptation of TTT to our VAS domain is therefore direct.
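The self-supervised signal itself is simple to state in code. The sketch below computes the quadratic reconstruction loss on flattened vectors; reading "quadratic" as the squared norm is our interpretation, and in practice this value would drive gradient steps on f and r (via autograd) before the search begins, which this sketch does not show.

```python
def reconstruction_loss(x, x_hat):
    """Quadratic reconstruction loss ||x - r(z)||^2 between the input
    image x and its reconstruction x_hat, both flattened to vectors."""
    return sum((a - b) ** 2 for a, b in zip(x, x_hat))

# Computed on the new task image alone -- no labels required.
x = [1.0, 0.0, 0.5]
x_hat = [0.8, 0.1, 0.5]
loss = reconstruction_loss(x, x_hat)
print(round(loss, 3))  # 0.05
```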
The original variant of FixMatch uses pseudo-labels at decision time, which are predictions on weakly augmented variants of the input image x (keeping only those which are highly confident), to update model parameters. In our domain, however, we can leverage the fact that we obtain actual labels whenever we query regions of the image. We make use of this additional information as follows. Whenever a query j is successful (i.e., y(j) = 1), we construct a label vector as the one-hot vector with a 1 in the location of the successful grid cell j. However, if y(j) = 0, we associate each queried grid cell with a 0 and assign a uniform probability distribution over all unqueried grid cells. We then update model parameters using a cross-entropy loss.
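The label construction above can be sketched directly. This is an illustrative sketch with our own function names; the actual update would backpropagate the cross-entropy through the policy network.

```python
import math

def build_target(num_cells, queried, success_j=None):
    """Construct the supervision target: a one-hot vector at a successful
    cell; otherwise 0 on queried cells and a uniform distribution over
    the unqueried ones."""
    if success_j is not None:
        return [1.0 if j == success_j else 0.0 for j in range(num_cells)]
    unqueried = [j for j in range(num_cells) if j not in queried]
    u = 1.0 / len(unqueried)
    return [0.0 if j in queried else u for j in range(num_cells)]

def cross_entropy(target, probs, eps=1e-12):
    """Cross-entropy between the constructed target and the policy output."""
    return -sum(t * math.log(p + eps) for t, p in zip(target, probs))

# Successful query at cell 2 of 4 -> one-hot target.
print(build_target(4, queried={2}, success_j=2))  # [0.0, 0.0, 1.0, 0.0]
# Failed query at cell 0 -> 0 there, uniform (1/3) over cells 1-3.
print(build_target(4, queried={0}))
```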