empirically show that active learning is inappropriate for |
solving the active search problem. |
However, current active search approaches typically lack
a pre-search training phase, and are therefore effective only in
relatively low dimensions and for relatively simple model
classes such as k-nearest-neighbors. In VAS, in contrast,
our goal is to learn how to search, that is, to learn how to
best use information obtained from previous search queries
in choosing the next query. We experimentally demonstrate
the advantage of VAS over conventional active search below.
Contributions We propose a deep reinforcement learning
approach to solve the VAS problem. Our key contribution
is a novel policy architecture which makes use of a natural
representation of search state, in addition to the task image
input, which the policy uses to dynamically adapt to the
task at hand at decision time, without additional training.
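To make the idea concrete, here is a minimal sketch of a policy that scores each grid cell from its image features concatenated with a per-cell search state. This is an illustrative assumption, not the paper's actual architecture: the state encoding `[was_queried, observed_label]`, the linear scoring head, and all shapes are invented for exposition. The point is only that past query outcomes reshape the next choice at decision time, with no extra training:

```python
import numpy as np

# Hedged sketch (NOT the paper's architecture): score each grid cell
# from image features plus a per-cell search state, then pick the next
# query from a softmax over unqueried cells.
def query_distribution(cell_feats, search_state, W, b):
    # cell_feats: (N, F) per-cell image features
    # search_state: (N, 2), assumed encoding [was_queried, observed_label]
    z = np.concatenate([cell_feats, search_state], axis=1) @ W + b  # (N,) scores
    z[search_state[:, 0] > 0] = -np.inf  # never re-query a cell
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
N, F, S = 4, 8, 2
feats = rng.normal(size=(N, F))
state = np.zeros((N, S))
state[1] = [1, 1]  # cell 1 was queried and contained a target
p = query_distribution(feats, state, rng.normal(size=(F + S,)), 0.0)
assert p[1] == 0 and abs(p.sum() - 1) < 1e-9
```

Because the search state is an input rather than baked into the weights, the same trained scorer produces different query distributions as outcomes accumulate.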
Additionally, we consider a variant of VAS in which the
nature of input tasks at decision time is sufficiently different
to warrant test-time adaptation, and propose several adaptation
approaches that take advantage of the VAS problem
structure. We extensively evaluate our proposed approaches
to VAS on two satellite imagery datasets in comparison with
several baselines, including a state-of-the-art approach for the
related problem of identifying regions of an image to zoom
into [30]. Our results show that our approach significantly
outperforms all baselines.
In summary, we make the following contributions: |
• We propose visual active search (VAS), a novel visual
reasoning model that represents an important class of
geospatial exploration problems, such as identifying
poaching activities, illegal trafficking, etc.
• We propose a deep reinforcement learning approach
for VAS that learns how to search for target objects in
a broad geospatial area based on aerial imagery.
• We propose two new test-time adaptation (TTA)
variants of VAS: (a) Online TTA and (b) Stepwise
TTA, as well as an improvement of FixMatch, a state-of-the-art
TTA method [26].
• We perform extensive experiments on two publicly |
available satellite imagery datasets, xView and DOTA, |
in a variety of settings, and demonstrate that our proposed
approaches significantly outperform all baselines.
2. Related Work |
Foveated Processing of Large Images Numerous pa- |
pers [33, 31, 30, 36, 32, 22, 29, 20, 19, 35] have explored the |
use of low-resolution imagery to guide the selection of im- |
age regions to process at high resolution, including a num- |
ber using reinforcement learning to this end. Our setting is |
quite different, as we aim to choose a sequence of regions
to query, where each query yields the true label, rather than
a higher-resolution image region, and these labels are
important both for guiding further search and as an end goal.
Reinforcement Learning for Visual Navigation Rein- |
forcement learning has also been extensively used for vi- |
sual navigation tasks, such as point and object localiza- |
tion [5, 18, 21, 7]. While similar at the high level, these |
tasks involve learning to decide on a sequence of visual |
navigation steps based on a local view of the environment |
and a kinematic model of motion, and commonly do not in- |
volve search budget constraints. In our case, in contrast, the |
full environment is observed initially (perhaps at low reso- |
lution), and we sequentially decide which regions to query, |
and are not limited to a particular kinematic model. |
Active Search and Related Problem Settings Garnett et
al. [11] first introduced Active Search (AS). Unlike Active
Learning [24], AS aims to discover members of valuable
and rare classes rather than learning an accurate model.
Garnett et al. [11] demonstrated that for any l > m, an l-step
lookahead policy can be arbitrarily superior to an m-step
one, showing that a nonmyopic active search approach can
be significantly better than a myopic one-step lookahead.
Jiang et al. [15, 14] proposed approaches for efficient
nonmyopic active search, while Jiang et al. [13] introduced
consideration of search cost into the problem.
We note two crucial differences between our setting and |
the previous works on active search. First, we are the first |
to consider the problem in the context of vision, where the |
problem is high-dimensional, while prior techniques rely on
a relatively low-dimensional feature space. Second, we use
reinforcement learning as a means to learn a search policy, |
in contrast to prior work on active search which aims to |
design efficient search algorithms. |
3. Model |
At the center of our task is an aerial image x which is
partitioned into N grid cells, x = (x(1), x(2), ..., x(N)). We
can also view x as the disjoint union of these N grid cells,
each of which is a sub-image. A subset (possibly empty) of
these grid cells contains an instance of the target object. We
formalize this by associating each grid cell j with a binary
label y(j) ∈ {0, 1}, where y(j) = 1 iff grid cell j contains
the target object. Let y = (y(1), y(2), ..., y(N)).
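The setup above can be sketched in a few lines. The image size, grid shape, and label vector here are toy assumptions chosen only to show the partition into N sub-images with binary labels:

```python
import numpy as np

# Partition an aerial image x into N = rows * cols grid cells
# x(1), ..., x(N), each a sub-image (sizes here are illustrative).
def partition(x, rows, cols):
    h, w = x.shape[0] // rows, x.shape[1] // cols
    return [x[i*h:(i+1)*h, j*w:(j+1)*w]
            for i in range(rows) for j in range(cols)]

x = np.zeros((600, 600))           # toy single-channel aerial image
cells = partition(x, 3, 3)         # N = 9 grid cells
y = [0, 1, 0, 0, 0, 1, 0, 0, 0]    # y(j) = 1 iff cell j contains a target
assert len(cells) == 9 and cells[0].shape == (200, 200)
```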
We do not know y a priori, but can sequentially query to
identify grid cells that contain the target object. Whenever
we query a grid cell j, we obtain the associated label
y(j) (i.e., whether it contains the target object) and accrue
utility if the queried cell actually contains a target object.
Our ultimate goal is to find as many target objects as possible
through a sequence of such queries given a total query
budget constraint C.
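The search loop this describes can be sketched as follows. The uniform-random `pick` below is a placeholder for the learned policy, and the interface is an assumption for exposition: each query reveals the true label y(j) and adds to utility iff the cell holds a target, until the budget C is spent:

```python
import random

# Sketch of the sequential search loop: query unqueried cells one at a
# time, observe the true label, accrue utility per hit, stop at budget C.
def run_search(y, C, pick=None):
    unqueried = set(range(len(y)))
    observed, utility = {}, 0
    choose = pick or (lambda obs, rem: random.choice(sorted(rem)))
    for _ in range(C):
        j = choose(observed, unqueried)  # placeholder for the learned policy
        unqueried.discard(j)
        observed[j] = y[j]               # query reveals the true label y(j)
        utility += y[j]                  # utility accrues iff cell j has a target
    return utility, observed

random.seed(0)
u, obs = run_search([0, 1, 0, 0, 1, 0], C=3)
assert len(obs) == 3 and 0 <= u <= 2
```

A learned `pick` would condition on `observed` (the search state so far) rather than choosing uniformly, which is precisely what the reinforcement learning policy is trained to do.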
Formally, let c(j, k) be the cost of querying grid cell k if