Method C=25 C=50 C=75
active learning [37] (N=48) 3.88 4.43 4.60
conventional active search [15] (N=48) 3.70 4.11 4.38
VAS (N=48) 5.61 9.26 12.15
random search (N=99) 1.55 2.99 4.18
greedy classification (N=99) 2.17 3.96 4.84
greedy selection [30] (N=99) 2.29 4.21 5.22
active learning [37] (N=99) 2.17 3.95 4.82
conventional active search [15] (N=99) 1.68 3.10 4.33
VAS (N=99) 4.29 6.91 8.98
The results are presented in Table 1 for the small car
class and in Table 2 for the building class. We see substantial
improvements in performance of the proposed VAS
approach compared to all baselines, ranging from 15–260%
improvement relative to the most competitive state-of-the-art
approach, greedy selection. There are two general consistent
trends. First, as the number of grids N increases
relative to the budget C (corresponding to sets of rows in either
table), the performance of all methods declines, as the task
becomes more challenging. However, the decline in performance
is typically much greater for the baselines than for
VAS. Second, overall performance improves as C increases
(columns in both tables), and the relative advantage of VAS
grows, as it is better able to exploit the larger
budget than the baselines.
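As a concrete illustration of how such relative gains are computed, the following sketch derives the percentage improvement of VAS over greedy selection from the N=99 rows of Table 1 shown above. The helper name `rel_improvement` is our own, not from the paper.

```python
# Relative ANT improvement of VAS over the strongest baseline
# (greedy selection), using the N=99 rows of Table 1.
greedy = {25: 2.29, 50: 4.21, 75: 5.22}  # greedy selection [30], N=99
vas    = {25: 4.29, 50: 6.91, 75: 8.98}  # VAS, N=99

def rel_improvement(baseline, ours):
    """Percent improvement of `ours` over `baseline` for each budget C."""
    return {c: 100.0 * (ours[c] / baseline[c] - 1.0) for c in baseline}

gains = rel_improvement(greedy, vas)
for c in sorted(gains):
    print(f"C={c}: VAS improves on greedy selection by {gains[c]:.1f}%")
```

For this problem size the gains range from roughly 64% to 87%; the wider 15–260% span reported in the text covers all rows of Tables 1 and 2.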
Figure 4: Comparison of policies learned using VAS (left) and the greedy
selection baseline method (right).
In Figure 5 we visually illustrate the VAS search strategy in
comparison with the greedy selection baseline (the best-performing
baseline). Plus signs correspond to successful
queries, the remaining markers to unsuccessful queries, and arrows
represent query order. This shows that VAS quickly learns to
take advantage of the visual similarities between grids (after
the first several failed queries, the rest are successful),
whereas our most competitive baseline, greedy selection,
fails to take advantage of such information. During the initial
search phase, the VAS policy explores different types of
grids before exploiting grids it believes to contain target
objects.
Finally, we perform an ablation study to understand the
added value of including the remaining budget B as an input
to the VAS policy network. Recall that the combined feature
representation, depicted in Figure 3, has size (2N+1)×14×14:
input and auxiliary state features, each of size N×14×14,
plus a single 14×14 channel encoding the remaining search
budget. For the ablation, we eliminate only the channel
containing the number of queries left, resulting in a
2N×14×14 feature map. The resulting policy network is then
trained just as the original VAS architecture.
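The channel layout of this ablation can be sketched as follows. This is a minimal illustration with random placeholder features; the array names and the budget value B=17 are our own assumptions, and only the shapes follow the text.

```python
import numpy as np

# Build the (2N+1) x 14 x 14 combined feature map: prediction-module
# features, auxiliary search-state features, and one budget channel.
# Then drop the budget channel to obtain the 2N x 14 x 14 ablated input.
N = 30                                   # number of grid cells
input_feats = np.random.rand(N, 14, 14)  # input (prediction) features
state_feats = np.random.rand(N, 14, 14)  # auxiliary state features
B = 17                                   # remaining query budget (placeholder)
budget_channel = np.full((1, 14, 14), B, dtype=float)

combined = np.concatenate([input_feats, state_feats, budget_channel], axis=0)
ablated = combined[:2 * N]               # budget channel removed

assert combined.shape == (2 * N + 1, 14, 14)
assert ablated.shape == (2 * N, 14, 14)
```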
Table 3: Comparative ANT performance of VAS without remaining
search budget and VAS, using small car as the target class.
Method C=25 C=50 C=75
VAS w/o remaining search budget (N=30) 4.47 7.38 9.62
VAS (N=30) 4.61 7.49 9.88
VAS w/o remaining search budget (N=48) 4.34 7.31 9.49
VAS (N=48) 4.56 7.45 9.63
VAS w/o remaining search budget (N=99) 2.63 4.29 5.69
VAS (N=99) 2.72 4.42 5.78
We compare the performance of the policy without remaining
search budget (referred to as VAS without remaining
search budget) with VAS in Table 3. Across all problem
sizes and search budgets, we observe a relatively small but
consistent improvement (∼1–3%) from using the remaining
search budget B as an explicit input to the policy network.
5.2. Results on the DOTA Dataset |
Next, we repeat our experiments on the DOTA dataset.
We use large vehicle and ship as our target classes. In both
cases, we report results with non-overlapping pixel
grids of size 200×200 and 150×150 (N=36 and N=64,
respectively). We again use C∈{25,50,75}.
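As a sanity check on the grid counts, both settings are consistent with tiling a 1200×1200 image: 200×200 cells give 6×6 = 36 grids and 150×150 cells give 8×8 = 64. The fixed 1200×1200 size is an assumption for illustration only (DOTA image sizes vary). A minimal sketch of such a non-overlapping partition:

```python
import numpy as np

def to_grids(image, cell):
    """Split an H x W x C image into non-overlapping cell x cell patches."""
    h, w, c = image.shape
    assert h % cell == 0 and w % cell == 0, "image must tile evenly"
    return (image
            .reshape(h // cell, cell, w // cell, cell, c)
            .transpose(0, 2, 1, 3, 4)   # group the two spatial block axes
            .reshape(-1, cell, cell, c))

image = np.zeros((1200, 1200, 3), dtype=np.uint8)
print(to_grids(image, 200).shape[0])  # 36 grid cells
print(to_grids(image, 150).shape[0])  # 64 grid cells
```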
Table 4: ANT comparisons for the large vehicle target class on DOTA.
Method C=25 C=50 C=75
random search (N=36) 1.79 3.50 5.10
greedy classification (N=36) 2.64 4.07 5.88
greedy selection [30] (N=36) 2.82 4.21 5.97
active learning [37] (N=36) 2.63 4.06 5.84
conventional active search [15] (N=36) 1.92 3.63 5.34
VAS (N=36) 4.63 6.79 8.07
random search (N=64) 1.48 2.96 3.91
greedy classification (N=64) 2.59 3.77 5.48
greedy selection [30] (N=64) 2.72 4.10 5.77
active learning [37] (N=64) 2.57 3.74 5.47
conventional active search [15] (N=64) 1.64 3.15 4.23
VAS (N=64) 5.33 8.47 10.51
Table 5: ANT comparisons for the ship target class on the DOTA dataset.
Method C=25 C=50 C=75
random search (N=36) 1.73 3.07 4.26
greedy classification (N=36) 2.04 3.65 4.92
greedy selection [30] (N=36) 2.33 3.84 5.01
active learning [37] (N=36) 2.01 3.64 4.91
conventional active search [15] (N=36) 1.86 3.25 4.40
VAS (N=36) 3.31 5.34 6.74
random search (N=64) 1.26 2.33 3.14
greedy classification (N=64) 1.89 3.06 3.75
greedy selection [30] (N=64) 2.07 3.32 4.02
active learning [37] (N=64) 1.87 3.05 3.72
conventional active search [15] (N=64) 1.41 2.48 3.38
VAS (N=64) 3.58 6.38 7.83
The results are presented in Tables 4 and 5, and are