References
- I. Goodfellow, J. Pouget-Abadie, M. Mirza, et al., Generative adversarial nets, Advances in Neural Information Processing Systems (NIPS) (2014), pp. 2672–2680.
- M. Mirza, S. Osindero, Conditional generative adversarial nets, arXiv preprint arXiv:1411.1784 (2014).
- P. Isola, J. Y. Zhu, T. Zhou, et al., Image-to-image translation with conditional adversarial networks, In the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017), pp. 1125–1134.
- T. C. Wang, M. Y. Liu, J. Y. Zhu, A. Tao, J. Kautz, B. Catanzaro, High-resolution image synthesis and semantic manipulation with conditional GANs, In the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018), pp. 8798–8807.
- M. Zhai, L. Chen, F. Tung, J. He, M. Nawhal, G. Mori, Lifelong GAN: Continual learning for conditional image generation, In the IEEE International Conference on Computer Vision (ICCV) (2019), pp. 2759–2768.
- D. Bau, H. Strobelt, W. Peebles, et al., Semantic photo manipulation with a generative image prior, arXiv preprint arXiv:2005.07727 (2020).
- X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, P. Abbeel, InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets, Advances in Neural Information Processing Systems (NIPS) (2016), pp. 2172–2180.
- J. Y. Zhu, R. Zhang, D. Pathak, T. Darrell, A. A. Efros, O. Wang, E. Shechtman, Toward multimodal image-to-image translation, Advances in Neural Information Processing Systems (NIPS) (2017), pp. 465–476.
- J. Y. Zhu, T. Park, P. Isola, et al., Unpaired image-to-image translation using cycle-consistent adversarial networks, In the IEEE International Conference on Computer Vision (ICCV) (2017), pp. 2223–2232.
- W. Xian, P. Sangkloy, V. Agrawal, et al., TextureGAN: Controlling deep image synthesis with texture patches, In the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018), pp. 8456–8465.
- Y. Lu, S. Wu, Y. W. Tai, et al., Image generation from sketch constraint using contextual GAN, In the European Conference on Computer Vision (ECCV) (2018), pp. 205–220.
- A. Gonzalez-Garcia, J. Van De Weijer, Y. Bengio, Image-to-image translation for cross-domain disentanglement, Advances in Neural Information Processing Systems (NIPS) (2018), pp. 1287–1298.
- H. Tang, D. Xu, G. Liu, W. Wang, N. Sebe, Y. Yan, Cycle in cycle generative adversarial networks for keypoint-guided image generation, In the 27th ACM International Conference on Multimedia (2019), pp. 2052–2060.
- Z. Gan, L. Chen, W. Wang, Y. Pu, Y. Zhang, H. Liu, C. Li, L. Carin, Triangle generative adversarial networks, Advances in Neural Information Processing Systems (NIPS) (2017), pp. 5253–5262.
- Y. Choi, M. Choi, M. Kim, J. W. Ha, S. Kim, J. Choo, StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation, In the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018).
- X. Huang, M. Y. Liu, S. Belongie, et al., Multimodal unsupervised image-to-image translation, In the European Conference on Computer Vision (ECCV) (2018).
- M. Y. Liu, T. Breuel, J. Kautz, Unsupervised image-to-image translation networks, Advances in Neural Information Processing Systems (NIPS) (2017).
- Y. Taigman, A. Polyak, L. Wolf, Unsupervised cross-domain image generation, In the International Conference on Learning Representations (ICLR) (2017).
- K. Bousmalis, N. Silberman, D. Dohan, D. Erhan, D. Krishnan, Unsupervised pixel-level domain adaptation with generative adversarial networks, In the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017).
- E. Hosseini-Asl, Y. Zhou, C. Xiong, R. Socher, Augmented cyclic adversarial learning for low resource domain adaptation, arXiv preprint arXiv:1807.00374 (2018).
- M. Y. Liu, X. Huang, A. Mallya, T. Karras, T. Aila, J. Lehtinen, J. Kautz, Few-shot unsupervised image-to-image translation, In the IEEE International Conference on Computer Vision (ICCV) (2019), pp. 10551–10560.
- T. C. Wang, M. Y. Liu, A. Tao, G. Liu, J. Kautz, B. Catanzaro, Few-shot video-to-video synthesis, arXiv preprint arXiv:1910.12713 (2019).
- A. Torralba, Contextual priming for object detection, International Journal of Computer Vision 53(2) (2003), pp. 169–191.
- X. Wang, A. Gupta, Generative image modeling using style and structure adversarial networks, In the European Conference on Computer Vision (ECCV) (2016).
- D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, A. A. Efros, Context encoders: Feature learning by inpainting, In the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016).
- D. Yoo, N. Kim, S. Park, A. S. Paek, I. S. Kweon, Pixel-level domain transfer, In the European Conference on Computer Vision (ECCV) (2016).
- K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, In the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016), pp. 770–778.