IEEE Geoscience and Remote Sensing Magazine, June 2016
user, by the labels provided, will disclose the shifted areas where the next iterations will focus. However, depending on the degree of transformation between the domains, one can use more sophisticated strategies that take into account measures of the deformation between the domains. In this respect, the following problems have driven DA-related research in AL:
◗ When it is expected that new classes will appear in the target domain, AL can be used to highlight the areas of the feature space where these classes could be. In [19], using the reasoning of sample selection bias, the feature space in the target domain is screened by clustering, and dense clusters with no labeled samples are presented to the user, who can then provide labels if new classes are present. In [72], the detection of new classes is cast as a change-detection problem, where the uncertainty of the changes is assessed with an information-theoretic criterion. Image time series are analyzed in [21].
◗ When significant differences between the source and target domains are expected (i.e., when the sample selection bias assumption does not hold), the presence of labeled source samples, although beneficial at the beginning of the process, can be harmful for the classification of the target domain [27], [28] (see the examples in the "Transfer Learning and Domain Adaptation" section and Figure 3). If the class distributions in the target domain overlap with those in the source domain, relying on the labels from the source will lead to classifier errors. Accordingly, the approaches in [27], [28], and [67] consider reweighting the samples in the training set enriched by AL. When samples from the source domain become less relevant, or even misleading, for the correct classification of the target domain, they are downweighted in the adapted classifier or removed entirely. The classifier thus specializes to the target domain through the inclusion of target samples and gradually forgets the initial source domain.
◗ When the areas to be processed become very large, specific solutions must be designed to avoid too many iterations of the AL process. In this respect, solutions based on the selection of clusters [73], compressed sensing [74], or geographically distributed search strategies [75]–[78] have been considered.
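As an illustration of the first strategy, screening the target feature space by clustering and flagging dense regions without labels can be sketched as follows. This is a minimal numpy sketch under simplifying assumptions (plain k-means, and a boolean mask marking which target samples already carry labels), not the exact procedure of [19]; the function name and parameters are hypothetical.

```python
import numpy as np

def flag_candidate_new_class_clusters(X_target, labeled_mask, k=10, n_iter=20, seed=0):
    """Cluster the target feature space and flag clusters that contain no
    labeled samples; such clusters are candidate locations of new classes."""
    rng = np.random.default_rng(seed)
    # Initialize centers from randomly chosen target samples.
    centers = X_target[rng.choice(len(X_target), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each sample to its nearest center (squared Euclidean distance).
        d = ((X_target[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(1)
        # Recompute centers; keep the old center if a cluster empties.
        for j in range(k):
            if (assign == j).any():
                centers[j] = X_target[assign == j].mean(0)
    # A nonempty cluster with no labeled member is a candidate for user inspection.
    candidates = [j for j in range(k)
                  if (assign == j).any() and not labeled_mask[assign == j].any()]
    return assign, candidates
```

The flagged clusters would then be shown to the user, who labels a few of their samples if a new class is indeed present.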
In the following, we focus on one example related to the second point above, i.e., the reweighting of source samples. This example is adapted from [67]. We study the feasibility of migrating a model optimized for land-cover mapping in one geographical area to another, spatially disjoint region. To do so, we consider the well-known Kennedy Space Center (KSC) hyperspectral image acquired by the Airborne Visible/Infrared Imaging Spectrometer [Figure 9(a) and (b)] and try to adapt the model learned therein to be accurate in a spatially disjoint section of the same flight line [Figure 9(c) and (d)]. We consider only the ten classes present in all images. The starting model is learned using a training set composed of 50 labeled pixels per class and is then enriched with new samples that are either added randomly or selected with the breaking-ties AL strategy [79]. The classifier is an SVM, either standard (when not otherwise stated) or adaptive via the TrAdaBoost model, a DA method based on reweighting the SVM sample weights after the inclusion of the new labeled points from the target domain [66].
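The breaking-ties criterion itself is simple to state: among the unlabeled candidates, query the samples whose two largest class posteriors are closest, i.e., the samples the classifier is least sure about. A minimal sketch (the function name is ours; `proba` stands for any classifier's class-probability output, e.g., a probabilistic SVM):

```python
import numpy as np

def breaking_ties(proba, n_queries=5):
    """Breaking-ties active-learning heuristic.

    proba: (n_samples, n_classes) array of class posteriors for the
           unlabeled candidates; returns the indices of the n_queries
           samples with the smallest margin between the two top classes."""
    ordered = np.sort(proba, axis=1)
    margin = ordered[:, -1] - ordered[:, -2]  # best minus second-best posterior
    return np.argsort(margin)[:n_queries]     # smallest margins first
```

The selected samples are labeled by the user, added to the training set, and the classifier is retrained before the next iteration.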
When using the source SVM without adaptation, we reach an OA lower than 65%, while an SVM trained directly on the target labeled samples (which are available for testing) would provide an accuracy of 90% (Figure 10). Here, the shift is clearly visible and translates into a loss in accuracy of 25%. Using random sampling in the target domain, we get a constant increase in performance [shown in Figure 10(a) by the green line with * markers]; however, after 300 queries, we are still 5% away from the classifier learned using only 500 samples from the target domain. Moreover, the learning rate is slow, and the gain is almost linear in the number of queries. We then assess different DA strategies.
First, TrAdaBoost is applied to the set enriched by the random samples [shown in Figure 10(a) by the brown line with # markers]. By forgetting the source domain, i.e., by downweighting the source samples that contradict the new samples from the target domain, we already see a significant improvement that fills half of the gap between the best case and random sampling. However, when using AL (shown by the blue line with diamond markers), and even more when using it in conjunction with the TrAdaBoost model (shown by the black line with circle markers), the learning rate is much higher in the first iterations, which means the first queries are much more
Figure 9. KSC data used in the AL DA experiment: (a) the source image, (b) the source GT, (c) the target image, and (d) the target GT (not available).
effective than in the random sampling experiments, and the model converges to the result obtained with 500 random target queries (shown by the solid blue line) with only 250 active queries, corresponding to a total of 750 samples in the model, since we still have the 500 initial samples from the source. Figure 10(b) shows the percentage of the support vectors from each domain with nonzero weights. In the target domain, this share increases constantly [shown in Figure 10(b) by the solid blue line], while for the source samples it stabilizes at 40% of the original training samples (shown by the dashed red line). This means that the importance of the source in the model is strongly reduced in the first iterations and then remains constant, while each new sample from the target becomes immediately important and receives a strong weight from the boosted SVM classifier.
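The reweighting mechanism described above can be sketched in a few lines. This is an illustrative single boosting step under assumed inputs (boolean masks for domain membership and classification errors, and a hypothetical downweighting factor `beta`), not the complete TrAdaBoost algorithm of [66], which also adapts the factor from the weighted training error at each iteration:

```python
import numpy as np

def reweight_source(weights, is_source, errors, beta=0.5):
    """One TrAdaBoost-style reweighting step.

    weights  : current sample weights, shape (n,)
    is_source: boolean mask, True for source-domain samples
    errors   : boolean mask, True where the current model is wrong
    beta     : downweighting factor in (0, 1)
    """
    w = weights.copy()
    w[is_source & errors] *= beta   # forget contradictory source samples
    w[~is_source & errors] /= beta  # emphasize hard target samples (AdaBoost-like)
    return w / w.sum()              # renormalize to a distribution
```

Iterating this step, source samples that keep contradicting the target labels see their weights shrink geometrically, which is exactly the gradual forgetting of the source domain observed in Figure 10(b).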
GUIDELINES FOR CHOOSING THE ADAPTATION STRATEGY
In this section, we will first provide guidelines for the selection of the most appropriate adaptation strategy and then discuss the issue of the validation of the adapted models.
HOW TO CHOOSE |
In the previous sections, we presented different approaches to DA that were grouped into four families. Depending