Algorithm 1 The VAS algorithm.
Require: A search task instance (x_i, y_i); budget constraint C; search policy ψ(x_i, o, B) with parameters θ.
1: Initialize o_0 = [0 ... 0]; B_0 = C; step t = 0
2: while B_t > 0 do
3:   ỹ = ψ(x_i, o_t, B_t)
4:   j ← Sample_{j ∈ {Unexplored Grids}}[ỹ]
5:   Query grid cell with index j and observe true label y(j).
6:   Obtain reward R_t = y(j).
7:   Update o_t to o_{t+1} with o(j) = 2y(j) − 1.
8:   Update B_t to B_{t+1} with B_{t+1} = B_t − c(k, j) (assuming we queried the k'th grid cell at step t − 1).
9:   Collect the transition tuple at step t, i.e., τ_t = (state = (x_i, o_t, B_t), action = j, reward = R_t, next state = (x_i, o_{t+1}, B_{t+1})).
10:  t ← t + 1
11: end while
12: Update the search policy parameters θ using the REINFORCE objective as in Eq. 3, based on the transition tuples (τ_t) collected throughout the episode.
13: Return the updated search policy parameters θ.
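The episode loop above can be sketched as follows. This is an illustrative, framework-free rendition under simplifying assumptions: a placeholder uniform policy stands in for ψ, and the per-query cost c(k, j) is fixed to 1; the function and variable names are hypothetical.

```python
# Sketch of one VAS episode (Algorithm 1), assuming uniform query cost c = 1
# and a placeholder policy; names are illustrative, not the paper's code.
import numpy as np

def run_episode(policy, x, y_true, budget):
    """Roll out one search episode and collect REINFORCE transitions."""
    n = len(y_true)
    o = np.zeros(n)            # o_0 = [0 ... 0]: +1 if target found, -1 if not
    B = budget                 # B_0 = C
    transitions = []
    rng = np.random.default_rng(0)
    while B > 0:
        y_hat = policy(x, o, B)                    # per-grid scores from psi
        unexplored = np.flatnonzero(o == 0)
        p = y_hat[unexplored] / y_hat[unexplored].sum()
        j = rng.choice(unexplored, p=p)            # sample j among unexplored grids
        reward = y_true[j]                         # R_t = y(j)
        o_next = o.copy()
        o_next[j] = 2 * y_true[j] - 1              # o(j) = 2 y(j) - 1
        B_next = B - 1                             # B_{t+1} = B_t - c(k, j), c = 1
        transitions.append(((x, o, B), j, reward, (x, o_next, B_next)))
        o, B = o_next, B_next
    return transitions

# Toy example: 6 grid cells, targets at cells 1 and 4, uniform policy.
uniform = lambda x, o, B: np.ones(len(o))
trans = run_episode(uniform, x=None, y_true=np.array([0, 1, 0, 0, 1, 0]), budget=3)
```

The collected transition tuples would then be fed to the REINFORCE update in step 12.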
We use a batch size of 16, 200 training epochs, and the Adam optimizer to train the policy network for all reported results.
We add a self-supervised head r to the VAS policy architecture for TTT. The architecture of the self-supervised head is detailed in Table 13: a series of 4 up-convolution layers with intermediate ReLU activations, followed by a tanh activation layer, applied to the semantic features extracted using ResNet-34. For FixMatch, our VAS architecture remains unchanged, and we apply only spatially invariant augmentations (e.g., auto-contrast, brightness, color, and contrast), ignoring all translation augmentations (TranslateX, TranslateY, ShearX, etc.), to obtain the augmented version of the input image. We update the model parameters after every query step using a cross-entropy loss between a pseudo-target and a predicted vector, as described below. We define the pseudo-target vector as follows. Whenever a query j is successful (y_j = 1), we construct the label vector as the one-hot vector with a 1 in the j-th grid cell. However, if y_j = 0, we associate each queried grid cell with a 0 and assign a uniform probability distribution over all unqueried grid cells. The prediction vector is the "logit" representation obtained from the VAS policy. We use the Adam optimizer with a learning rate of 10^{-4} for both TTT and FixMatch.
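The pseudo-target construction described above can be sketched as follows; the helper names are hypothetical, and the standalone cross-entropy is illustrative (in practice the loss is computed over the VAS policy logits).

```python
# Sketch of the pseudo-target described above (hypothetical helper names).
import numpy as np

def pseudo_target(n_grids, queried, j, y_j):
    """Build the pseudo-target vector after querying grid cell j."""
    t = np.zeros(n_grids)
    if y_j == 1:
        t[j] = 1.0                            # one-hot at the successful cell
    else:
        unqueried = [g for g in range(n_grids) if g not in queried]
        t[unqueried] = 1.0 / len(unqueried)   # uniform over unqueried cells
    return t

def cross_entropy(logits, target):
    """Cross-entropy between softmax(logits) and the pseudo-target."""
    z = logits - logits.max()
    log_p = z - np.log(np.exp(z).sum())
    return -(target * log_p).sum()

t = pseudo_target(n_grids=5, queried={0, 2}, j=2, y_j=0)
# t puts 0 on the queried cells {0, 2} and 1/3 on each of the 3 unqueried cells
```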
Table 12: VAS Policy Architecture
Layers           | Configuration        | o/p Feature Map Size
Input            | RGB Image            | 3 × 2500 × 3000
Feat. Extraction | ResNet-34            | 512 × 14 × 14
Conv1            | c: N, k: 1×1         | N × 14 × 14
Tile1            | Grid State (o)       | N × 14 × 14
Tile2            | Query Left (B)       | 1 × 14 × 14
Channel Concat   | Conv1, Tile1, Tile2  | (2N+1) × 14 × 14
Conv2            | c: 3, k: 1×1         | 3 × 14 × 14
Flatten          | Conv2                | 588
FC1 + ReLU       | 588 → 2N             | 2N
FC2              | 2N → N               | N
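As a sanity check on the shapes in Table 12, the forward pass can be traced with random placeholder weights, treating the 1×1 convolutions as per-pixel linear maps; N = 30 and all weights and tiled inputs here are assumptions for illustration.

```python
# Shape walk through the Table 12 policy head, assuming N = 30 grid cells.
import numpy as np

N = 30
rng = np.random.default_rng(0)
feat = rng.standard_normal((512, 14, 14))          # ResNet-34 features
W1 = rng.standard_normal((N, 512)) * 0.01
conv1 = np.einsum('oc,chw->ohw', W1, feat)         # Conv1: N x 14 x 14
tile1 = rng.standard_normal((N, 14, 14))           # grid state o, tiled
tile2 = np.full((1, 14, 14), 25.0)                 # queries left B, tiled
cat = np.concatenate([conv1, tile1, tile2])        # (2N+1) x 14 x 14
W2 = rng.standard_normal((3, 2 * N + 1)) * 0.01
conv2 = np.einsum('oc,chw->ohw', W2, cat)          # Conv2: 3 x 14 x 14
flat = conv2.reshape(-1)                           # 588 = 3 * 14 * 14
W3 = rng.standard_normal((2 * N, 588)) * 0.01
h = np.maximum(W3 @ flat, 0.0)                     # FC1 + ReLU: 2N
W4 = rng.standard_normal((N, 2 * N)) * 0.01
logits = W4 @ h                                    # FC2: N per-grid scores
```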
Table 13: Self-supervised Head Architecture
Layers                | Configuration
Input: Latent Feature | 36 × 14 × 14
1st Up-conv Layer     | in-channels: 36; out-channels: 36; k: 3×3; stride: 2; padding: 0
Activation Layer      | ReLU
2nd Up-conv Layer     | in-channels: 36; out-channels: 24; k: 3×3; stride: 2; padding: 1
Activation Layer      | ReLU
3rd Up-conv Layer     | in-channels: 24; out-channels: 12; k: 2×2; stride: 4; padding: 1
Activation Layer      | ReLU
4th Up-conv Layer     | in-channels: 12; out-channels: 3; k: 2×2; stride: 2; padding: 0
Normalization Layer   | tanh
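Assuming the standard transposed-convolution output-size formula (dilation 1, no output padding), the spatial sizes through the stack in Table 13 can be checked as follows:

```python
# Spatial sizes through the Table 13 up-convolution stack, using the
# standard transposed-convolution formula: out = (in - 1)*stride - 2*pad + k.
def up_conv_size(s_in, kernel, stride, pad):
    return (s_in - 1) * stride - 2 * pad + kernel

size = 14                                  # 14 x 14 latent feature map
sizes = []
for kernel, stride, pad in [(3, 2, 0), (3, 2, 1), (2, 4, 1), (2, 2, 0)]:
    size = up_conv_size(size, kernel, stride, pad)
    sizes.append(size)
# sizes -> [29, 57, 224, 448]: by this arithmetic, the head upsamples the
# 14 x 14 features to a 3 x 448 x 448 tanh-normalized output.
```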
D. Search Performance Comparison with Different Feature Extractor Modules
In this section, we compare the performance of VAS with different feature extraction modules. We use state-of-the-art feature extraction modules, such as ViT [6] and DINO [4], for comparison. The Vision Transformer (ViT) [6] is a transformer encoder model (BERT-like) pretrained on a large collection of images, namely ImageNet-21k (a collection of 14 million images), at a resolution of 224 × 224 pixels with a patch resolution of 16 × 16. Note that we use the off-the-shelf pretrained ViT model provided by Hugging Face (google/vit-base-patch16-224-in21k). We call the resulting policy VAS-ViT. Similar to
ViT, DINO [4] is also based on a transformer encoder model. Images are presented to the DINO model as a sequence of fixed-size patches (resolution 8 × 8), which are linearly embedded. For our experiments, we use DINO pretrained on ImageNet-1k at a resolution of 224 × 224 pixels, using the pretrained model provided by Hugging Face (facebook/dino-vits8). We call the resulting policy VAS-DINO. In Tables 14 and 15, we report the performance of VAS-ViT and VAS-DINO and compare them with VAS.
Table 14: ANT comparisons with different feature extraction modules for the small car target class on xView.
Method C=25 C=50 C=75 |
VAS-DINO (N=30) 4.56 7.41 9.83 |
VAS-ViT (N=30) 4.64 7.47 9.86 |
VAS (N=30) 4.61 7.49 9.88 |
VAS-DINO (N=48) 4.52 7.41 9.59 |
VAS-ViT (N=48) 4.56 7.44 9.68 |
VAS (N=48) 4.56 7.45 9.63 |
Table 15: ANT comparisons with different feature extraction modules