
Improving Open Set Domain Adaptation Using Image-to-Image Translation and Instance-Weighted Adversarial Learning

  • Regular Paper
  • Published in: Journal of Computer Science and Technology

Abstract

We propose to address the open set domain adaptation problem by aligning images at both the pixel space and the feature space. Our approach, called Open Set Translation and Adaptation Network (OSTAN), consists of two main components: translation and adaptation. The translation is a cycle-consistent generative adversarial network, which translates any source image to the “style” of a target domain to eliminate domain discrepancy in the pixel space. The adaptation is an instance-weighted adversarial network, which projects both (labeled) translated source images and (unlabeled) target images into a domain-invariant feature space to learn a prior probability for each target image. The learned probability is applied as a weight to the unknown classifier to facilitate the identification of the unknown class. The proposed OSTAN model significantly outperforms the state-of-the-art open set domain adaptation methods on multiple public datasets. Our experiments also demonstrate that both the image-to-image translation and the instance-weighting framework can further improve the decision boundaries for both known and unknown classes.
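The abstract describes the learned prior probability being applied as a weight to the unknown classifier. As a rough illustration of that idea (not the paper's actual implementation, which is behind the paywall), the sketch below assumes a hypothetical `w_known` score in [0, 1] — e.g., the output of a domain discriminator estimating how likely a target image is to belong to the known classes — and uses it to down-weight the unknown-class probability before renormalizing:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def weighted_unknown_prediction(known_logits, unknown_logit, w_known):
    """Hypothetical instance-weighted prediction.

    known_logits : logits for the K known classes
    unknown_logit: logit for the extra "unknown" class
    w_known      : assumed learned prior that this image is from a known class
                   (e.g., produced by an adversarial domain discriminator)
    """
    probs = softmax(known_logits + [unknown_logit])
    known = probs[:-1]
    # Down-weight the unknown probability: images that resemble the
    # (translated) source distribution are less likely to be "unknown".
    unknown = probs[-1] * (1.0 - w_known)
    # Renormalize so the weighted scores again sum to 1.
    z = sum(known) + unknown
    return [p / z for p in known] + [unknown / z]
```

With `w_known = 1.0` the unknown score is suppressed entirely; with `w_known = 0.0` the plain softmax is recovered. The actual OSTAN weighting scheme may differ; this only sketches the stated mechanism of weighting the unknown classifier by a learned per-instance probability.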



Author information

Corresponding author: Yan-Wen Guo.

Supplementary Information

ESM 1 (PDF 587 kb)


About this article


Cite this article

Zhang, HJ., Li, A., Guo, J. et al. Improving Open Set Domain Adaptation Using Image-to-Image Translation and Instance-Weighted Adversarial Learning. J. Comput. Sci. Technol. 38, 644–658 (2023). https://doi.org/10.1007/s11390-021-1073-x

