Even as we adapted them, TTT and FixMatch do not fully take advantage of the rich information obtained at decision time in the VAS context as we proceed through each input task: we not only observe the input image x, but also observe query results over time during the search. We therefore propose two new variants of TTA which are specific to the VAS setting: (a) Online TTA and (b) Stepwise TTA. In Online TTA, we update the parameters of the policy network after each task is completed during decision time, which yields both the input x and the observations o of the search results; these correspond only partially to y, since we have only observed the contents of the previously queried grid cells. Nevertheless, we can simply use this partial information o as part of the REINFORCE policy gradient update step to update the policy parameters θ. In Stepwise TTA, we update the policy network parameters even during the execution of a particular task, at decision time, once every m < C steps. The main difference between the Online and Stepwise variants of our TTA approaches is consequently the frequency of updates. Note that both of these TTA approaches can readily be composed with conventional TTA methods such as TTT and FixMatch.
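The update schedule behind the two variants can be sketched with a toy softmax policy over grid cells. This is an illustrative sketch, not the paper's architecture: the softmax parameterization, function names, learning rate, and 0/1 query reward are all assumptions. Online TTA corresponds to setting m equal to the task length, i.e., one update after each completed task.

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def reinforce_step(theta, actions, rewards, lr=0.1):
    """One REINFORCE update on a softmax policy over grid cells:
    theta <- theta + lr * sum_t o_t * grad log pi(a_t | theta),
    where o_t are the (partial) query observations collected so far."""
    p = softmax(theta)
    grad = np.zeros_like(theta)
    for a, r in zip(actions, rewards):
        g = -p.copy()          # d log softmax(theta)_a / d theta = e_a - p
        g[a] += 1.0
        grad += r * g
    return theta + lr * grad

def stepwise_tta(theta, query, budget, m=2):
    """Stepwise TTA: within a single task, update theta once every m
    queries using only the observations gathered since the last update."""
    actions, rewards, explored = [], [], set()
    last = 0
    for t in range(budget):
        p = softmax(theta)
        if explored:
            p[list(explored)] = 0.0       # never re-query a grid cell
        a = int(np.argmax(p))
        actions.append(a)
        rewards.append(query(a))          # observation o for cell a
        explored.add(a)
        if (t + 1) % m == 0:              # every m < C steps
            theta = reinforce_step(theta, actions[last:], rewards[last:])
            last = t + 1
    return theta, actions
```

With m equal to the full budget, the single update at the end of the loop recovers the Online TTA schedule.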
5. Experiments |
Evaluation Metric We evaluate the proposed approaches in terms of the average number of target objects discovered, which we abbreviate as ANT.
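Concretely, ANT is the mean count of target objects found per search task, averaged over the evaluation tasks (a minimal sketch; the function name is illustrative):

```python
def ant(discovered_counts):
    """ANT: average number of target objects discovered, where
    discovered_counts[k] is the number of targets found in task k."""
    return sum(discovered_counts) / len(discovered_counts)
```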
Baselines We compare the proposed VAS policy learning framework with the following baselines:
1. random search, where each grid cell is chosen uniformly at random among those that have not yet been explored;
2. greedy classification, in which we train a classifier ψgc to predict whether a particular grid cell contains a target object, and search the cells most likely to contain the target until the search budget is exhausted;
3. greedy selection, based on the approach by Uzkent and Ermon [30], which trains a policy ψgs that yields a probability of zooming into each grid cell j; we select cells according to ψgs until the budget C is saturated;
4. active learning, in which we randomly select the first grid cell to query and then choose C−1 cells using a state-of-the-art active learning approach by Yoo et al. [37]; and
5. conventional active search, an active search method by Jiang et al. [15], using a low-dimensional feature representation for each image grid cell from the same feature extraction network as in our approach.
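The greedy classification baseline reduces to a budget-constrained greedy loop over classifier scores. The sketch below is an assumption about its mechanics, not the paper's implementation: the function name and inputs are illustrative, and the rule of skipping cells the remaining budget cannot afford (rather than stopping outright) is our choice for the non-uniform cost case.

```python
def greedy_classification_search(scores, costs, budget):
    """Greedy-classification baseline: query grid cells in decreasing
    order of classifier score until the budget C is exhausted.
    scores[j] -- predicted probability that cell j contains a target
    costs[j]  -- query cost of cell j (all 1 under uniform costs)"""
    spent, queried = 0.0, []
    for j in sorted(range(len(scores)), key=lambda j: -scores[j]):
        if spent + costs[j] > budget:
            continue  # skip cells the remaining budget cannot afford
        spent += costs[j]
        queried.append(j)
    return queried
```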
Query Costs We consider two ways of generating query costs: (i) c(i, j) = 1 for all i, j, in which case C is simply the number of queries, and (ii) c(i, j) based on the Manhattan distance between i and j. Most of the results we present reflect the second setting; the results for uniform query costs are qualitatively similar and provided in the Supplement.
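The distance-based cost in setting (ii) can be computed directly from grid-cell indices. A minimal sketch, assuming cells are indexed row-major on a rectangular grid (the row-major convention and the `grid_cols` parameter are our assumptions):

```python
def manhattan_cost(i, j, grid_cols):
    """Query cost c(i, j): Manhattan (L1) distance between grid cells
    i and j, indexed row-major on a grid with grid_cols columns."""
    ri, ci = divmod(i, grid_cols)   # (row, column) of cell i
    rj, cj = divmod(j, grid_cols)   # (row, column) of cell j
    return abs(ri - rj) + abs(ci - cj)
```

For example, on a 6×6 grid (N = 36), moving from cell 0 to its diagonal neighbor cell 7 costs 2, while staying in place costs 0.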
Datasets We evaluate the proposed approach using two datasets: xView [17] and DOTA [34]. xView is a satellite imagery dataset consisting of large satellite images representing 60 categories, with approximately 3000 pixels in each dimension. We use 67% and 33% of the large satellite images to train and test the policy network, respectively. DOTA is also a satellite imagery dataset. We re-scale the original ∼3000×3000px images to 1200×1200px. Unless otherwise specified, we use N = 36 non-overlapping grid cells, each of size 200×200 pixels.
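The grid decomposition described above can be sketched in a few lines (an illustrative sketch, assuming the image dimensions are exact multiples of the cell size, as with the 1200×1200px DOTA images and 200×200 cells):

```python
import numpy as np

def split_into_grids(image, cell=200):
    """Split an H x W x C image into non-overlapping cell x cell grid
    cells, returned in row-major order as a list of arrays."""
    h, w = image.shape[:2]
    return [image[r:r + cell, s:s + cell]
            for r in range(0, h, cell)
            for s in range(0, w, cell)]
```

A 1200×1200 image then yields N = 36 cells of 200×200 pixels each.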
5.1. Results on the xView Dataset |
We begin by evaluating the proposed approaches on the xView dataset, varying search budgets C ∈ {25, 50, 75} and the number of grid cells N ∈ {30, 48, 99}. We consider two target classes: small car and building. As the dataset contains variable-size images, we take random crops of 2500×3000 pixels for N = 30, 2400×3200 pixels for N = 48, and 2700×3300 pixels for N = 99, thereby ensuring equal grid cell sizes.
Table 1: ANT comparisons for the small car target class on xView. |
Method C=25 C=50 C=75 |
random search (N=30) 3.41 3.95 4.52 |
greedy classification (N=30) 3.91 4.60 4.76 |
greedy selection [30] (N=30) 3.90 4.63 4.78 |
active learning [37] (N=30) 3.92 4.58 4.73 |
conventional active search [15] (N=30) 3.61 4.17 4.70 |
VAS (N=30) 4.61 7.49 9.88 |
random search (N=48) 3.20 3.66 4.11 |
greedy classification (N=48) 3.87 4.29 4.52 |
greedy selection [30] (N=48) 3.89 4.42 4.53 |
active learning [37] (N=48) 3.87 4.28 4.51 |
conventional active search [15] (N=48) 3.26 3.74 4.32 |
VAS (N=48) 4.56 7.45 9.63 |
random search (N=99) 1.10 2.15 2.96 |
greedy classification (N=99) 1.72 2.79 3.36 |
greedy selection [30] (N=99) 1.78 2.83 3.41 |
active learning [37] (N=99) 1.69 2.78 3.33 |
conventional active search [15] (N=99) 1.42 2.31 3.10 |
VAS (N=99) 2.72 4.42 5.78 |
Table 2: ANT comparisons for the building target class on xView. |
Method C=25 C=50 C=75 |
random search (N=30) 3.97 4.94 5.39 |
greedy classification (N=30) 4.69 5.27 5.80 |
greedy selection [30] (N=30) 4.84 5.33 5.82 |
active learning [37] (N=30) 4.67 5.24 5.80 |
conventional active search [15] (N=30) 4.15 5.20 5.51 |
VAS (N=30) 5.65 9.31 12.20 |
random search (N=48) 3.47 3.96 4.26 |
greedy classification (N=48) 3.90 4.43 4.61 |
greedy selection [30] (N=48) 3.95 4.51 4.67 |