
Temporal Attentive Alignment for Video Domain Adaptation

2019-05-26
Min-Hung Chen, Zsolt Kira, Ghassan AlRegib

Abstract

Although various image-based domain adaptation (DA) techniques have been proposed in recent years, domain shift in videos remains under-explored. Most previous works evaluate only on small-scale datasets whose performance is already saturated. Therefore, we first propose a larger-scale dataset with larger domain discrepancy: UCF-HMDB_full. Second, we investigate different DA integration methods for videos and show that simultaneously aligning and learning temporal dynamics achieves effective alignment even without sophisticated DA methods. Finally, we propose the Temporal Attentive Adversarial Adaptation Network (TA3N), which explicitly attends to the temporal dynamics using domain discrepancy for more effective domain alignment, achieving state-of-the-art performance on three video DA datasets. We plan to release the code and datasets.
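The abstract's core idea, attending to temporal dynamics using domain discrepancy, can be illustrated with a minimal NumPy sketch. This is an assumption-laden simplification, not the paper's implementation: it weights each frame feature by the entropy of a hypothetical frame-level domain classifier's prediction, so frames whose domain is harder to distinguish (more transferable) receive larger attention before temporal pooling. The function names and the `1 + entropy` residual weighting are illustrative choices.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax along the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def domain_attentive_pooling(frame_feats, domain_logits):
    """Pool frame features, weighting each frame by domain-prediction entropy.

    frame_feats:   (T, D) per-frame features.
    domain_logits: (T, 2) source/target logits from a hypothetical
                   frame-level domain classifier (not the paper's exact model).
    High entropy = domain-ambiguous frame = larger attention weight.
    """
    p = softmax(domain_logits)                       # (T, 2) domain probabilities
    entropy = -(p * np.log(p + 1e-12)).sum(axis=1)   # (T,) entropy per frame
    attn = 1.0 + entropy                             # residual-style attention weight
    attn = attn / attn.sum()                         # normalize over time
    return (attn[:, None] * frame_feats).sum(axis=0) # (D,) attended video feature
```

For example, a frame with confident domain logits like `[5, -5]` gets near-zero entropy and thus a smaller weight than frames with ambiguous logits `[0, 0]`, biasing the pooled representation toward domain-invariant segments.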

URL

http://arxiv.org/abs/1905.10861

PDF

http://arxiv.org/pdf/1905.10861
