References
- Aggarwal, A, Mittal, M and Battineni, G. 2021. 'Generative adversarial network: An overview of theory and applications.' International Journal of Information Management Data Insights, 1(1): 100004. DOI: 10.1016/j.jjimei.2020.100004
- American Numismatic Society. 2023a. 'American numismatic society.' https://numismatics.org/ [Accessed: December 06, 2023].
- American Numismatic Society. 2023b. 'Online coins of the roman empire.' https://numismatics.org/ocre/ [Accessed: September 06, 2023].
- ArcGIS. 2023. 'How cyclegan works?' https://developers.arcgis.com/python/guide/how-cyclegan-works/ [Accessed: December 18, 2023].
- Berlin, M. 2023. 'Münzkabinett online catalogue.' https://ikmk.smb.museum/home [Accessed: September 06, 2023].
- Borji, A. 2019. 'Pros and cons of gan evaluation measures.' Computer Vision and Image Understanding, 179: 41–65. DOI: 10.1016/j.cviu.2018.10.009
- British Museum. 2023. 'British museum collections.' https://www.britishmuseum.org/collection [Accessed: September 06, 2023].
- Bruno, F, Bruno, S, De Seni, G, Luchi, ML, Mancuso, S and Muzzupappa, M. 2010. 'From 3d reconstruction to virtual reality: A complete methodology for digital archaeological exhibition.' Journal of Cultural Heritage, 11(1): 42–49. DOI: 10.1016/j.culher.2009.02.006
- Chang, X, Chao, F, Shang, C and Shen, Q. 2022. 'Sundial-gan: A cascade generative adversarial networks framework for deciphering oracle bone inscriptions.' In: Proceedings of the 30th ACM International Conference on Multimedia, Lisboa, Portugal, ACM, 10 October 2022, pp. 1195–1203. DOI: 10.1145/3503161.3547925
- Choi, S-Y, Jeong, H-J, Park, K-S and Ha, Y-G. 2019. 'Efficient driving scene image creation using deep neural network.' In: 2019 IEEE International Conference on Big Data and Smart Computing (BigComp), Kyoto, Japan, IEEE, February 2019, pp. 1–4. DOI: 10.1109/BIGCOMP.2019.8679269
- Colmenero-Fernández, A and Feito, F. 2021. 'Image processing for graphic normalisation of the ceramic profile in archaeological sketches making use of deep neuronal net (dnn).' Digital Applications in Archaeology and Cultural Heritage, 22: e00196. DOI: 10.1016/j.daach.2021.e00196
- Farahanipad, F, Rezaei, M, Nasr, MS, Kamangar, F and Athitsos, V. 2022. 'A survey on gan-based data augmentation for hand pose estimation problem.' Technologies, 10(2): 43. DOI: 10.3390/technologies10020043
- Garozzo, R, Santagati, C, Spampinato, C and Vecchio, G. 2021. 'Knowledge-based generative adversarial networks for scene understanding in cultural heritage.' Journal of Archaeological Science: Reports, 35: 102736. DOI: 10.1016/j.jasrep.2020.102736
- Ghosh, B, Dutta, IK, Carlson, A, Totaro, M and Bayoumi, M. 2020. 'An empirical analysis of generative adversarial network training times with varying batch sizes.' In: 2020 11th IEEE Annual Ubiquitous Computing, Electronics and Mobile Communication Conference (UEMCON), IEEE, pp. 643–648. DOI: 10.1109/UEMCON51285.2020.9298092
- Gonog, L and Zhou, Y. 2019. 'A review: Generative adversarial networks.' In: 2019 14th IEEE Conference on Industrial Electronics and Applications (ICIEA), Xi'an, China, IEEE, June 2019, pp. 505–510. DOI: 10.1109/ICIEA.2019.8833686
- Goodfellow, I, Pouget-Abadie, J, Mirza, M, Xu, B, Warde-Farley, D, Ozair, S, Courville, A and Bengio, Y. 2014. 'Generative adversarial nets.' In: Advances in Neural Information Processing Systems, MIT Press, pp. 2672–2680.
- Gragnaniello, D, Cozzolino, D, Marra, F, Poggi, G and Verdoliva, L. 2021. 'Are gan generated images easy to detect? A critical analysis of the state-of-the-art.' In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6. DOI: 10.1109/ICME51207.2021.9428429
- Harms, J, Lei, Y, Wang, T, Zhang, R, Zhou, J, Tang, X, Curran, W, Liu, T and Yang, X. 2019. 'Paired cycle-gan-based image correction for quantitative cone-beam computed tomography.' Medical Physics, 46: 3998–4009. DOI: 10.1002/mp.13656
- Hedjazi, MA and Genc, Y. 2021. 'Efficient texture-aware multi-gan for image inpainting.' Knowledge-Based Systems, 217: 106789. DOI: 10.1016/j.knosys.2021.106789
- Hermoza, R and Sipiran, I. 2018a. '3d reconstruction of incomplete archaeological objects using a generative adversarial network.' In: Proceedings of Computer Graphics International 2018, CGI 2018, New York, NY, USA, Association for Computing Machinery, pp. 5–11. DOI: 10.1145/3208159.3208173
- Hermoza, R and Sipiran, I. 2018b. '3d reconstruction of incomplete archaeological objects using a generative adversarial network.' In: Proceedings of Computer Graphics International 2018, Bintan Island, Indonesia, ACM, 11 June 2018, pp. 5–11. DOI: 10.1145/3208159.3208173
- Jiang, Z and Sweetser, P. 2022. 'Gan-assisted yuv pixel art generation.' In: Long, G, Yu, X and Wang, S (eds.) AI 2021: Advances in Artificial Intelligence. Lecture Notes in Computer Science. Springer International Publishing, pp. 595–606. DOI: 10.1007/978-3-030-97546-3_48
- Kleber, F and Sablatnig, R. 2009. 'A survey of techniques for document and archaeology artefact reconstruction.' In: 10th International Conference on Document Analysis and Recognition, Barcelona, Spain, 2009, IEEE, pp. 1061–1065. DOI: 10.1109/ICDAR.2009.154
- Kniaz, V, Remondino, F and Knyaz, VA. 2019. 'Generative adversarial networks for single photo 3d reconstruction.' The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 42(2): 403–408. DOI: 10.5194/isprs-archives-XLII-2-W9-403-2019
- Koutsoudis, A, Vidmar, B and Arnaoutoglou, F. 2013. 'Performance evaluation of a multi-image 3d reconstruction software on a low-feature artefact.' Journal of Archaeological Science, 40(12): 4450–4456. DOI: 10.1016/j.jas.2013.07.007
- Kunsthistorisches Museum Wien. 2023. 'Kunsthistorisches museum wien.' https://www.ikmk.at/home?lang=en [Accessed: September 06, 2023].
- Kurach, K, Lučić, M, Zhai, X, Michalski, M and Gelly, S. 2019. 'A large-scale study on regularization and normalization in GANs.' In: Chaudhuri, K and Salakhutdinov, R (eds.) Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, PMLR, pp. 3581–3590.
- Langr, J and Bok, V. 2015. GANs in Action: Deep Learning with Generative Adversarial Networks. Shelter Island, New York: Manning Publications.
- LeCun, Y, Bengio, Y and Hinton, G. 2015. 'Deep learning.' Nature, 521(7553): 436–444. DOI: 10.1038/nature14539
- Lee, K-H and Yun, GJ. 2024. 'Microstructure reconstruction using diffusion-based generative models.' Mechanics of Advanced Materials and Structures, 31(18): 4443–4461. DOI: 10.1080/15376494.2023.2198528
- Moreno-Barea, FJ, Jerez, JM and Franco, L. 2020. 'Improving classification accuracy using data augmentation on small data sets.' Expert Systems with Applications, 161: 113696. DOI: 10.1016/j.eswa.2020.113696
- Münster, S, Maiwald, F, Di Lenardo, I, Henriksson, J, Isaac, A, Graf, MM, Beck, C and Oomen, J. 2024. 'Artificial intelligence for digital heritage innovation: Setting up a r&d agenda for europe.' Heritage, 7(2): 794–816. DOI: 10.3390/heritage7020038
- Navarro, P, Cintas, C, Lucena, M, Fuertes, JM, Segura, R, Delrieux, C and González-José, R. 2022. 'Reconstruction of iberian ceramic potteries using generative adversarial networks.' Scientific Reports, 12(1): 10644. DOI: 10.1038/s41598-022-14910-7
- Nomisma. 2024. 'Nomisma.org.' http://nomisma.org/ [Accessed: February 27, 2024].
- Park, S, Kim, J, Park, J, Jung, S-H and Sim, C. 2023. 'How to train your pre-trained gan models.' Applied Intelligence, 53(22): 27001–26. DOI: 10.1007/s10489-023-04807-x
- Parmar, G, Zhang, R and Zhu, J-Y. 2022. 'On aliased resizing and surprising subtleties in gan evaluation.' In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, pp. 11410–20. DOI: 10.1109/CVPR52688.2022.01112
- Portable Antiquities Scheme. 2023. 'Portable antiquities scheme.' https://finds.org.uk/database/search/results/ [Accessed: September 06, 2023].
- Tang, S. 2020. 'Lessons learned from the training of gans on artificial datasets.' IEEE Access, pp. 165044–55. DOI: 10.1109/ACCESS.2020.3022820
- Ulyanov, D, Vedaldi, A and Lempitsky, V. 2017. 'Improved texture networks: Maximizing quality and diversity in feed-forward stylization and texture synthesis.' In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, IEEE, July 2017, pp. 4105–4113. DOI: 10.1109/CVPR.2017.437
- University of Vienna. 2023. 'University of vienna digital coin cabinet.' https://ikmk-ing.univie.ac.at/home?lang=en [Accessed: September 06, 2023].
- Wach, K, Duong, CD, Ejdys, J, Kazlauskaitė, P, Korzynski, R, Mazurek, G, Paliszkiewicz, J and Ziemba, E. 2023. 'The dark side of generative artificial intelligence: A critical analysis of controversies and risks of chatgpt.' Entrepreneurial Business and Economics Review, 11(2): 7–30. DOI: 10.15678/EBER.2023.110201
- Wang, Y, Luo, Y, Zu, C, Zhan, B, Jiao, Z, Wu, X, Zhou, J, Shen, D and Zhou, L. 2024. '3d multi-modality transformer-gan for high-quality pet reconstruction.' Medical Image Analysis, 91: 102983. DOI: 10.1016/j.media.2023.102983
- Xu, D, Fan, S and Kankanhalli, M. 2023. 'Combating misinformation in the era of generative ai models.' In: Proceedings of the 31st ACM International Conference on Multimedia (MM '23), Ottawa, ON, Canada, ACM, pp. 9291–9298. DOI: 10.1145/3581783.3612704
- Ya-Liang, C, Zhe Yu, L, Kuan-Ying, L and Winston, H. 2019. 'Free-form video inpainting with 3d gated convolution and temporal patchgan.' In: IEEE/CVF International Conference on Computer Vision (ICCV), IEEE, pp. 9066–9075.
- Yan, N, Mei, Y, Xu, L, Yu, H, Sun, B, Wang, Z and Chen, Y. 2023. 'Deep learning on image stitching with multi-viewpoint images: A survey.' Neural Processing Letters, 55(4): 3863–3898. DOI: 10.1007/s11063-023-11226-z
- Zachariou, M, Dimitriou, N and Arandjelović, O. 2020. 'Visual reconstruction of ancient coins using cycle-consistent generative adversarial networks.' Sci, 2(3): 52. DOI: 10.3390/sci2030052
- Zeng, X, Cheng, L, Li, S and Liu, X. 2024. 'Archaeology drawing generation algorithm based on multi-branch feature cross fusion.' Preprint, not yet published. DOI: 10.21203/rs.3.rs-4409621/v1
- Zhang, E and Banovic, N. 2021. 'Method for exploring generative adversarial networks (gans) via automatically generated image galleries.' In: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. DOI: 10.1145/3411764.3445714
- Zhang, Y, Seibert, P, Otto, A, Raßloff, A, Ambati, M and Kästner, M. 2024. 'Da-vegan: Differentiably augmenting vae-gan for microstructure reconstruction from extremely small data sets.' Computational Materials Science, 232: 112661. DOI: 10.1016/j.commatsci.2023.112661
- Zhou, S, Xiao, T, Yang, Y, Feng, D, He, Q and He, W. 2017. 'Genegan: Learning object transfiguration and attribute subspace from unpaired data.' CoRR abs/1705.04932. DOI: 10.5244/C.31.111
- Zhu, J-Y, Park, T, Isola, P and Efros, AA. 2020. 'Unpaired image-to-image translation using cycle-consistent adversarial networks.' In: 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, pp. 2242–2251. DOI: 10.1109/ICCV.2017.244
