we start in grid cell j. For the very first query, we can define a dummy initial grid cell d, so that the cost function c(d, k) captures the initial query cost. Let $q_t$ denote the query performed in step t. Our ultimate goal is to solve the following optimization problem:

$$\max_{\{q_t\}} \; U(x; \{q_t\}) \equiv \sum_t y(q_t) \quad \text{s.t.} \quad \sum_{t \ge 0} c(q_{t-1}, q_t) \le C, \qquad (1)$$

where $c(q_{-1}, q_0) = c(d, q_0)$.
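Concretely, a candidate query sequence can be scored against problem (1) by accumulating labels and query costs and checking the budget. The helper below is our own illustration (the function name, uniform cost function, and toy labels are not from the paper):

```python
import numpy as np

def sequence_utility_and_cost(queries, y, cost, d, C):
    """Evaluate a query sequence {q_t} for problem (1): return the total
    utility sum_t y(q_t), the accumulated cost, and budget feasibility.
    `d` is the dummy initial cell, so cost(d, q_0) is the initial query cost."""
    utility, total_cost, prev = 0, 0.0, d
    for q in queries:
        total_cost += cost(prev, q)
        if total_cost > C:          # budget constraint of (1) violated
            return utility, total_cost, False
        utility += y[q]             # accumulate y(q_t)
        prev = q
    return utility, total_cost, True

# Toy example: 4 cells, unit cost between any pair of cells.
y = np.array([0, 1, 1, 0])
cost = lambda j, k: 1.0
u, total, ok = sequence_utility_and_cost([1, 2, 3], y, cost, d=-1, C=3)
# u = 2 (cells 1 and 2 contain targets), total = 3.0, ok = True
```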
In order to succeed, we need to use the labels from previously queried cells to decide which cell to query next. This is a conventional setup in active search, where an important means of accomplishing such learning is to introduce a model f that predicts a mapping between (in our instance) a grid cell and the associated label (whether it contains a target object) [10, 15, 13]. However, in many domains of
Figure 2: An overview of the VAS framework (feature extraction, grid prediction, and a policy network; at each step the policy queries/searches the top cell, the grid state o is updated, and the search budget B is reduced).
interest, such as most visual domains of the kind we consider, the query budget C and the number of grid cells N are very small compared to the dimension of the input x, far too small to learn a meaningful prediction f. Instead, we suppose that we have a dataset of tasks (aerial images) for which we have labeled whether each of the grid cells contains the target object. Let this dataset be denoted by $D = \{(x_i, y_i)\}$, with each $x_i = (x_i^{(1)}, x_i^{(2)}, \ldots, x_i^{(N)})$ the task image and $y_i = (y_i^{(1)}, y_i^{(2)}, \ldots, y_i^{(N)})$ its corresponding grid cell labels. Then, at decision (or inference) time, we observe the task aerial image x, including its partition into the grid cells, and choose queries $\{q_t\}$ sequentially to maximize $U(x; \{q_t\})$.
We consider two variations of the model above. In the first, each instance $(x_i, y_i)$ in the training data D, as well as $(x, y)$ at decision time (where y is unobserved before querying), is generated i.i.d. from the same distribution. In the second variation, while instances $(x, y)$ are still i.i.d. at decision time, their distribution can differ from that of the training data D. The latter variation falls within the broader category of test-time adaptation (TTA) settings, but with the special structure pertinent to our model above.
4. Solution Approach
Visual active search over the area defined by x and its constituent grid cells is a dynamic decision problem. As such, we model it as a budget-constrained episodic Markov decision process (MDP), where the search budget C is defined for each instance x at decision time. In this MDP, the actions are simply choices over which grid cell to query next; we denote the set of grid cells by A = {1, ..., N}. Since in our model there is never any value in querying a grid cell more than once, we restrict the actions available at each step to grid cells that have not yet been queried (in principle, this restriction can also be learned). Policy network inputs include: 1) the overall input x, which is crucial in providing the broad perspective on each search problem; 2) the outcomes of past search queries o (we detail our representation of these presently); and 3) the remaining budget B ≤ C. A state transition simply updates the remaining budget and adds the outcome of the latest search query to the state. Finally, the immediate reward for querying a grid cell j is $R(x, o, j) = y^{(j)}$.
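The MDP described above can be sketched as a minimal environment. This is our own illustration, not the paper's implementation; we assume unit query cost for simplicity:

```python
import numpy as np

class VASEnv:
    """Minimal sketch of the budget-constrained search MDP: the state is
    (x, o, B), actions are unqueried grid cells, and the reward for
    querying cell j is y(j). Illustrative only; unit query cost assumed."""

    def __init__(self, x, y, C):
        self.x, self.y, self.N = x, np.asarray(y), len(y)
        self.o = np.zeros(self.N)   # query outcomes; 0 = not yet queried
        self.B = float(C)           # remaining budget B <= C

    def valid_actions(self):
        # restrict actions to grid cells not yet queried
        return [j for j in range(self.N) if self.o[j] == 0]

    def step(self, j):
        reward = int(self.y[j])                   # R(x, o, j) = y(j)
        self.o[j] = 1.0 if reward == 1 else -1.0  # record query outcome
        self.B -= 1.0                             # update remaining budget
        done = self.B <= 0
        return (self.x, self.o.copy(), self.B), reward, done

env = VASEnv(x=None, y=[1, 0, 1], C=2)
_, r1, d1 = env.step(0)   # query cell 0: contains a target, episode continues
_, r2, d2 = env.step(2)   # query cell 2: budget now exhausted
```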
Figure 3: Our VAS policy network architecture, showing the grid probabilities at three different steps (the input image x is tiled, a frozen ResNet-34 extracts features, 1x1 convolutions produce per-cell grid predictions, and an MLP combines these with the grid state [o(1), o(2), ..., o(N)] and the remaining query budget B to yield grid probabilities).

We represent outcomes of the search query history o as follows. Each element of o corresponds to a grid cell j, so that $o = (o^{(1)}, \ldots, o^{(N)})$, with $o^{(j)} = 0$ if j has not been previously queried. If grid cell j has been previously queried,

$$o^{(j)} \leftarrow \begin{cases} 1, & \text{if } y^{(j)} = 1, \\ -1, & \text{if } y^{(j)} = 0. \end{cases} \qquad (2)$$
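The update in Eq. (2) amounts to the following small helper (the function name is ours, for illustration):

```python
import numpy as np

def update_outcome(o, j, y_j):
    """Record the outcome of querying grid cell j as in Eq. (2):
    o(j) becomes 1 if y(j) = 1 and -1 if y(j) = 0."""
    o = o.copy()
    o[j] = 1.0 if y_j == 1 else -1.0
    return o

o = np.zeros(4)              # no cells queried yet: o(j) = 0 for all j
o = update_outcome(o, 1, 1)  # cell 1 contained the target object
o = update_outcome(o, 3, 0)  # cell 3 did not
# o is now [0, 1, 0, -1]
```

Using the three values {0, 1, -1} lets the policy network distinguish unqueried cells from queried cells with either outcome in a single fixed-length vector.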
Armed with this MDP problem representation, we next describe our proposed deep reinforcement learning approach for learning a search policy that makes use of a dataset D of past search tasks. Specifically, we use the REINFORCE policy gradient algorithm [28] to directly learn a search policy ψ(x, o, B; θ), where θ are the policy parameters that we learn. We maximize the objective J(θ) using the following gradient estimate:
$$\nabla J(\theta) = \sum_{i=1}^{M} \sum_{t=1}^{T_i} \mathbb{1}_{\left[\sum_{t \ge 0} c(q_{t-1}, q_t) \le C\right]} \, \nabla \log \psi_\theta(a^i_t \mid x_i, o^i_t, B^i_t) \, R^i_t \qquad (3)$$

where M is the number of example search tasks seen during training and $R_t$ is the discounted cumulative reward defined as $R_t = \sum_{k=t}^{T} \gamma^{k-t} R_k$ with a discount factor γ ∈ [0, 1].
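As a concrete illustration of Eq. (3), the sketch below runs one REINFORCE episode with a toy softmax policy over cells. The per-cell logits θ stand in for the full policy network ψ(x, o, B; θ), and we assume unit query costs so the budget admits C queries; none of these simplifications come from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def reinforce_episode(theta, y, C, gamma=1.0, lr=0.1):
    """One episode and REINFORCE update for Eq. (3).
    theta: per-cell logits standing in for psi(x, o, B; theta)."""
    N = len(y)
    queried = np.zeros(N, dtype=bool)
    grads, rewards = [], []
    for _ in range(int(C)):
        logits = np.where(queried, -np.inf, theta)  # mask queried cells
        probs = softmax(logits)
        a = rng.choice(N, p=probs)                  # sample a_t ~ psi
        g = -probs                                  # grad log pi(a) wrt logits
        g[a] += 1.0                                 # = onehot(a) - probs
        grads.append(g)
        rewards.append(int(y[a]))                   # R_t = y(a_t)
        queried[a] = True
    # discounted returns R_t = sum_{k>=t} gamma^(k-t) R_k
    G, returns = 0.0, []
    for r in reversed(rewards):
        G = r + gamma * G
        returns.append(G)
    returns.reverse()
    for g, G_t in zip(grads, returns):
        theta = theta + lr * g * G_t                # ascend grad log pi * R_t
    return theta, sum(rewards)

theta = np.zeros(4)
theta, found = reinforce_episode(theta, np.array([1, 0, 1, 0]), C=2)
```

Repeating such episodes over the training tasks in D raises the probability of querying cells that yielded targets, which is the essence of the policy-gradient update.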