References
- 1Adadi, A and Berrada, M. 2018. ‘Peeking inside the black-box: A survey on Explainable Artificial Intelligence (XAI)’. IEEE Access, 6: 52138–52160. DOI: 10.1109/ACCESS.2018.2870052
- 2Abellán, N, Baquedano, E and Domínguez-Rodrigo, M. 2022. ‘High-accuracy in the classification of butchery cut marks and crocodile tooth marks using machine learning methods and computer vision algorithms’. Geobios, 72–73: 12–21. DOI: 10.1016/j.geobios.2022.07.001
- 3Abellán, N, Jiménez-García, B, Aznarte, J, Baquedano, E and Domínguez-Rodrigo, M. 2021. ‘Deep learning classification of tooth scores made by different carnivores: achieving high accuracy when comparing African carnivore taxa and testing the hominin shift in the balance for power’. Archaeological and Anthropological Sciences, 13: 31. DOI: 10.1007/s12520-021-01273-9
- 4Agrawal, A, Lu, J, Antol, S, Mitchell, M, Zitnick, CL, Batra, D and Parikh, D. 2016. ‘VQA: Visual Question Answering’. IEEE International Conference on Computer Vision, 2425–2433. DOI: 10.1109/ICCV.2015.279
- 5Aiello, LC and Wheeler, P. 1995. ‘The expensive-tissue hypothesis: The brain and the digestive system in human and primate evolution’. Current Anthropology, 36: 199–221. DOI: 10.1086/204350
- 6Ali, U and Mahmood, MT. 2018. ‘Analysis of blur measure operator for single image blur segmentation’. Applied Sciences, 8(5): 807. DOI: 10.3390/app8050807
- 7Andrews, P and Cook, J. 1985. ‘Natural modifications to bones in a temperate setting’. Man, 20(4): 675–691. DOI: 10.2307/2802756
- 8Arman, SD, Ungar, PS, Brown, CA, DeSantis, LRG, Schmidt, C and Prideaux, GJ. 2016. ‘Minimizing inter-microscope variability in dental microwear texture analysis’. Surface Topography: Metrology and Properties, 4(2): 024007. DOI: 10.1088/2051-672X/4/2/024007
- 9Athanasiadou, E, Geradts, Z and Eijk, EV. 2018. ‘Camera recognition with deep learning’. Forensic Sciences Research, 3(3): 210–218. DOI: 10.1080/20961790.2018.1485198
- 10Baquedano, E, Domínguez-Rodrigo, M and Musiba, C. 2012. ‘An experimental study of large mammal bone modification by crocodiles and its bearing on the interpretation of crocodile predation at FLK Zinj and FLK NN3’. Journal of Archaeological Science, 39: 1728–1737. DOI: 10.1016/j.jas.2012.01.010
- 11Barr, WA, Pobiner, B, Rowan, J, Du, A and Faith, JT. 2022. ‘No sustained increase in zooarchaeological evidence for carnivory after the appearance of Homo erectus’. Proceedings of the National Academy of Sciences, 119(5): e2115540119. DOI: 10.1073/pnas.2115540119
- 12Ben-David, S, Blitzer, J, Crammer, K, Kulesza, A, Pereira, F and Vaughan, JW. 2010. ‘A theory of learning from different domains’. Machine Learning, 79(1–2): 151–175. DOI: 10.1007/s10994-009-5152-4
- 13Bengio, Y. 2012. ‘Deep learning of representations for unsupervised and transfer learning’. JMLR Workshop and Conference Proceedings, 27: 17–37. DOI: 10.1007/978-3-642-39593-2_1
- 14Binford, LR. 1981. Bones: Ancient Men and Modern Myths. New York: Academic Press Inc.
- 15Bishop, CM. 1991. ‘Novelty detection and neural network validation’. International Conference on Artificial Neural Networks, 789–794. DOI: 10.1007/978-1-4471-2063-6_225
- 16Bishop, CM. 1995. Neural networks for pattern recognition. Oxford: Clarendon Press. DOI: 10.1093/oso/9780198538493.001.0001
- 17Bishop, CM. 2006. Pattern recognition and machine learning. Singapore: Springer.
- 18Blumenschine, R. 1995. ‘Percussion Marks, Tooth Marks and Experimental Determinations of the Timing of Hominid and Carnivore Access to Long Bones at FLK Zinjanthropus, Olduvai Gorge, Tanzania’. Journal of Human Evolution, 29(1): 21–51. DOI: 10.1006/jhev.1995.1046
- 19Borel, A, Ollé, A, Vergès, JM and Sala, R. 2014. ‘Scanning electron and optical light microscopy: two complementary approaches for the understanding and interpretation of usewear and residues on stone tools’. Journal of Archaeological Science, 48: 46–59. DOI: 10.1016/j.jas.2013.06.031
- 20Brownlee, J. 2017. Deep Learning with Python. Melbourne: Machine Learning Mastery.
- 21Bunn, HT. 1991. ‘A taphonomic perspective on the archaeology of human origins’. Annual Review in Anthropology, 20: 433–467. DOI: 10.1146/annurev.anthro.20.1.433
- 22Byeon, W, Domínguez-Rodrigo, M, Arampatzis, G, Baquedano, E, Yravedra, J, Maté-González, MÁ and Koumoutsakos, P. 2019. ‘Automated identification and deep classification of cut marks on bones and its paleoanthropological implications’. Journal of Computational Science, 32: 36–43. DOI: 10.1016/j.jocs.2019.02.005
- 23Calandra, I, Schunk, L, Bob, K, Gneisinger, W, Pedergnana, A, Paixao, E, Hildebrandt, A and Marreiros, J. 2019. ‘The effect of numerical aperture on quantitative use-wear studies and its implication on reproducibility’. Scientific Reports. DOI: 10.1038/s41598-019-42713-w
- 24Calder, J, Coil, R, Melton, A, Olver, PJ, Tostevin, G and Yezzi-Woodley, K. 2022. ‘Use and misuse of machine learning in anthropology’. IEEE BITS the Information Theory Magazine, 2(1): 102–115. DOI: 10.1109/MBITS.2022.3205143
- 25Canny, J. 1986. ‘A computational approach to edge detection’. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8(6): 679–698. DOI: 10.1109/TPAMI.1986.4767851
- 26Carlini, N and Wagner, D. 2017. ‘Towards evaluating the robustness of neural networks’. IEEE Symposium on Security and Privacy, 39–57. DOI: 10.1109/SP.2017.49
- 27Chen, X, Zheng, B and Liu, H. 2011. ‘Optical and digital microscopic imaging techniques and applications in pathology’. Analytical Cellular Pathology, 34(1–2): 5–18. DOI: 10.1155/2011/150563
- 28Chollet, F. 2017. Deep Learning with Python. New York: Manning.
- 29Cifuentes-Alcobendas, G and Domínguez-Rodrigo, M. 2019. ‘Deep learning and taphonomy: high accuracy in the classification of cut marks made on fleshed and defleshed bones using convolutional neural networks’. Scientific Reports, 9: 18933. DOI: 10.1038/s41598-019-55439-6
- 30Cifuentes-Alcobendas, G and Domínguez-Rodrigo, M. 2021. ‘More than meets the eye: use of computer vision algorithms to identify stone tool material through the analysis of cut mark micro-morphology’. Archaeological and Anthropological Sciences, 13: 167. DOI: 10.1007/s12520-021-01424-y
- 31Cobo-Sánchez, L, Pizarro-Monzo, M, Cifuentes-Alcobendas, G, Jiménez-García, B, Abellán-Beltrán, N, Courtenay, LA, Mabulla, A, Baquedano, E and Domínguez-Rodrigo, M. 2022. ‘Computer vision supports primary access to meat by early Homo, 1.84 million years ago’. PeerJ, 10: e14148. DOI: 10.7717/peerj.14148
- 32Courtenay, LA. 2023. ‘Can we restore balance to geometric morphometrics? A theoretical evaluation of how sample imbalance conditions ordination and classification’. Evolutionary Biology, 50: 90–110. DOI: 10.1007/s11692-022-09590-0
- 33Courtenay, LA, Herranz-Rodrigo, D, González-Aguilera, D and Yravedra, J. 2021. ‘Developments in data science solutions for carnivore tooth pit classification’. Scientific Reports, 11: 10209. DOI: 10.1038/s41598-021-89518-4
- 34Da Rin, G, Seghezzi, M, Padoan, A, Pajola, R, Bengiamo, A, Di Fabio, AM, Dima, F, Fanelli, A, Francione, S, Germagnoli, L, Lorubbio, M, Marzoni, A, Pipitone, S, Rolla, R, Bagorria Vaca, MC, Bartolini, A, Bonato, L, Sciacovelli, L and Buoro, S. 2022. ‘Multicentric evaluation of the variability of digital morphology performances also respect to the reference methods by optical microscopy’. International Journal of Laboratory Hematology, 44(6): 1040–1049. DOI: 10.1111/ijlh.13943
- 35Dablain, D, Jacobson, KN, Bellinger, C, Roberts, M and Chawla, NV. 2023. ‘Understanding CNN fragility when learning with imbalanced data’. Machine Learning. DOI: 10.1007/s10994-023-06326-9
- 36Deng, J, Berg, AC, Li, K and Fei-Fei, L. 2010. ‘What does classifying more than 10,000 image categories tell us?’ In: Daniilidis, K, Maragos, P and Paragios, N (eds.) Proceedings of the 11th European Conference on Computer Vision. Heidelberg: Springer, 71–84. DOI: 10.1007/978-3-642-15555-0_6
- 37Deng, J, Dong, W, Socher, R, Li, LJ, Li, K and Fei-Fei, L. 2009. ‘ImageNet: A large-scale hierarchical image database’. IEEE Conference on Computer Vision and Pattern Recognition, 2009: 248–255. DOI: 10.1109/CVPR.2009.5206848
- 38Dhamija, AR, Günther, M and Boult, TE. 2018. ‘Reducing network agnostophobia’. Neural Information Processing Systems, 32: 1–10.
- 39Diakonikolas, I, Kamath, G, Kane, DM, Li, J, Moitra, A and Stewart, A. 2017. ‘Being robust (in high dimensions) can be practical’. Proceedings of the International Conference on Machine Learning, 34: 1–10.
- 40Diakonikolas, I, Kamath, G, Kane, DM, Li, J, Moitra, A and Stewart, A. 2019. Robust estimators in high dimensions without the computational intractability. Available at: https://arxiv.org/pdf/1604.06443.pdf [Last accessed 26/08/2023].
- 41Dodge, S and Karam, L. 2016. ‘Understanding how image quality affects deep neural networks’. Quality of Multimedia Experience, 8: 1–6. DOI: 10.1109/QoMEX.2016.7498955
- 42Domínguez-Rodrigo, M. 2015. ‘Taphonomy in early African archaeological sites: questioning some bone surface modification models for inferring fossil hominin and carnivore feeding interactions’. Journal of African Earth Sciences, 108: 42–46. DOI: 10.1016/j.jafrearsci.2015.04.011
- 43Domínguez-Rodrigo, M, Baquedano, E, Varela, L, Tambusso, PS, Melián, MJ and Fariña, RA. 2021c. ‘Deep classification of cut marks on bones from Arroyo del Vizcaíno (Uruguay)’. Proceedings of the Royal Society B, 288: 20210711. DOI: 10.1098/rspb.2021.0711
- 44Domínguez-Rodrigo, M, Barba, R and Egeland, CP. 2007. Deconstructing Olduvai: A taphonomic study of the Bed I sites. The Netherlands: Springer. DOI: 10.1007/978-1-4020-6152-3
- 45Domínguez-Rodrigo, M, Baquedano, E, Organista, E, Cobo-Sánchez, L, Mabulla, A, Maskara, V, Gidna, A, Pizarro-Monzo, M, Aramendi, J, Belén Galán, A, Cifuentes-Alcobendas, G, Vegara-Riquelme, M, Jiménez-García, B, Abellán, N, Barba, R, Uribelarrea, D, Martín-Perea, D, Diez-Martin, F, Maíllo-Fernández, JM, Rodríguez-Hidalgo, A, Courtenay, LA, Mora, R, Maté-González, MÁ and González-Aguilera, D. 2021a. ‘Early Pleistocene faunivorous hominins were not kleptoparasitic, and this impacted the evolution of human anatomy and socio-ecology’. Scientific Reports, 11: 16135. DOI: 10.1038/s41598-021-94783-4
- 46Domínguez-Rodrigo, M, Cifuentes-Alcobendas, G, Jiménez-García, B, Abellán, N, Pizarro-Monzo, M, Organista, E and Baquedano, E. 2020. ‘Artificial Intelligence provides greater accuracy in the classification of modern and ancient bone surface modifications’. Scientific Reports, 10: 18862. DOI: 10.1038/s41598-020-75994-7
- 47Domínguez-Rodrigo, M, Courtenay, LA, Cobo-Sánchez, L, Baquedano, E and Mabulla, A. 2021b. ‘A case of hominin scavenging 1.84 million years ago from Olduvai Gorge (Tanzania)’. Annals of the New York Academy of Sciences, 1510: 121–131. DOI: 10.1111/nyas.14727
- 48Domínguez-Rodrigo, M, Juana, S, Galán, AB and Rodríguez, M. 2009. ‘A new protocol to differentiate trampling marks from butchery marks’. Journal of Archaeological Science, 36(12): 2643–2654. DOI: 10.1016/j.jas.2009.07.017
- 49Domínguez-Rodrigo, M, Pizarro-Monzo, M, Cifuentes-Alcobendas, G, Vegara-Riquelme, M, Jiménez-García, B and Baquedano, E. 2024. ‘Computer vision enables taxon-specific identification of African carnivore tooth marks on bone’. Scientific Reports, 14: 6881. DOI: 10.1038/s41598-024-57015-z
- 50Domínguez-Rodrigo, M, Saladié, P, Cáceres, I, Huguet, R, Yravedra, J, Rodríguez-Hidalgo, A, Martín, P, Pineda, A, Marín, J, Gené, C, Aramendi, J and Cobo-Sánchez, L. 2017. ‘Use and abuse of cut mark analyses: the Rorschach effect’. Journal of Archaeological Science, 86: 14–23. DOI: 10.1016/j.jas.2017.08.001
- 51Domínguez-Solera, S and Domínguez-Rodrigo, M. 2011. ‘A taphonomic study of a carcass consumed by griffon vultures (Gyps fulvus) and its relevance for the interpretation of bone surface modifications’. Archaeological and Anthropological Sciences, 3: 385–392. DOI: 10.1007/s12520-011-0071-2
- 52Drumheller, SK and Brochu, CA. 2014. ‘A diagnosis of Alligator mississippiensis bite marks with comparisons to existing crocodylian datasets’. Ichnos, 21(2): 131–146. DOI: 10.1080/10420940.2014.909353
- 53Esteva, A, Kuprel, B, Novoa, RA, Ko, J, Swetter, SM, Blau, HM and Thrun, S. 2017. ‘Dermatologist-level classification of skin cancer with deep neural networks’. Nature, 542: 115–118. DOI: 10.1038/nature21056
- 54Fedorov, D, Sumengen, C and Manjunath, B. 2006. ‘Multi-focus imaging using local focus estimation and mosaicking’. IEEE International Conference on Image Processing, 2093–2096. DOI: 10.1109/ICIP.2006.312820
- 55Feris, R, Raskar, R, Tan, KH and Turk, M. 2006. ‘Specular highlights detection and reduction with multiflash photography’. Journal of the Brazilian Computer Society, 12: 35–42. DOI: 10.1590/S0104-65002006000200003
- 56Fink, M. 2004. ‘Object classification from a single example utilizing class relevance metrics’. Advances in Neural Information Processing Systems, 17: 449–456.
- 57Finn, C, Abbeel, P and Levine, S. 2017. ‘Model-agnostic meta-learning for fast adaptation of deep networks’. International Conference on Machine Learning, 1126–1135. Available at: https://arxiv.org/pdf/1703.03400.pdf [Last accessed 26/08/2023].
- 58Finn, C and Levine, S. 2017. ‘Deep visual foresight for planning robot motion’. IEEE International Conference on Robotics and Automation, 2786–2793. DOI: 10.1109/ICRA.2017.7989324
- 59Fortin, JP, Cullen, N, Sheline, YI, Taylor, WD, Aselcioglu, I, Cook, PA, Adams, P, Cooper, C, Fava, M, McGrath, PJ, McInnis, M, Phillips, ML, Trivedi, MH, Weissman, MM and Shinohara, RT. 2018. ‘Harmonization of cortical thickness measurements across scanners and sites’. NeuroImage, 167: 104–120. DOI: 10.1016/j.neuroimage.2017.11.024
- 60George, D, Lehrach, W, Kansky, K, Lázaro-Gredilla, M, Laan, C, Marthi, B, Lou, X, Meng, Z, Liu, Y, Wang, H, Lavin, A and Phoenix, DS. 2017. ‘A generative vision model that trains with high data efficiency and breaks text-based CAPTCHAs’. Science, 358(6368): eaag2612. DOI: 10.1126/science.aag2612
- 61Glorot, X and Bengio, Y. 2010. ‘Understanding the difficulty of training deep feedforward neural networks’. Artificial Intelligence and Statistics, 9: 249–256.
- 62Gong, Y, Liu, G, Xue, Y, Li, R and Meng, L. 2023. ‘A survey on dataset quality in machine learning’. Information and Software Technology, 162: 107268. DOI: 10.1016/j.infsof.2023.107268
- 63González-Aguilera, D. 2005. ‘Reconstrucción 3D a partir de una sola vista’. PhD Thesis, Universidad de Salamanca.
- 64González-Aguilera, D, López-Fernández, L, Rodriguez-Gonzalvez, P, Hernandez-Lopez, D, Guerrero, D, Remondino, F, Menna, F, Nocerino, E, Toschi, I, Ballabeni, A and Gaiani, M. 2018. ‘Graphos – open-source software for photogrammetric applications’. The Photogrammetric Record, 33(161): 11–19. DOI: 10.1111/phor.12231
- 65González-Aguilera, D, Ruiz de Oña, E, López-Fernandez, L, Farella, EM, Stathopoulou, EK, Toschi, I, Remondino, F, Rodríguez-Gonzálvez, P, Hernández-López, D, Fusiello, A and Nex, F. 2020. ‘Photomatch: an open-source multi-view and multi-modal feature matching tool for photogrammetric applications’. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XLIII(B5): 213–219. DOI: 10.5194/isprs-archives-XLIII-B5-2020-213-2020
- 66Goodfellow, IJ, Bengio, Y and Courville, A. 2016. Deep Learning. Cambridge, Massachusetts: MIT Press.
- 67Goodfellow, IJ, Shlens, J and Szegedy, C. 2015. ‘Explaining and harnessing adversarial examples’. International Conference on Learning Representations. Available online: https://arxiv.org/pdf/1412.6572.pdf [Last accessed 26/08/2023].
- 68Gümrükçu, M and Pante, MC. 2018. ‘Assessing the effects of fluvial abrasion on bone surface modifications using high-resolution 3D scanning’. Journal of Archaeological Science Reports, 21: 208–221. DOI: 10.1016/j.jasrep.2018.06.037
- 69Guo, JJ, Shen, DF, Lin, S, Huang, JC, Liu, KC and Lie, WN. 2016. ‘A specular reflection suppression method for endoscopic images’. IEEE Multimedia Big Data, 16: 125–128. DOI: 10.1109/BigMM.2016.78
- 70Hartley, R and Zisserman, A. 2003. Multiple view geometry in computer vision. Cambridge: Cambridge University Press. DOI: 10.1017/CBO9780511811685
- 71He, H and Ma, Y. 2013. Imbalanced Learning: foundations, algorithms and applications. New Jersey: Wiley. DOI: 10.1002/9781118646106
- 72He, K, Girshick, R and Dollár, P. 2018. Rethinking ImageNet pre-training. arXiv. Available online: https://arxiv.org/pdf/1811.08883.
- 73He, K, Zhang, X, Ren, S and Sun, J. 2016. ‘Deep residual learning for image recognition’. IEEE Computer Vision and Pattern Recognition, 2016: 770–778. DOI: 10.1109/CVPR.2016.90
- 74Huang, G, Liu, Z, Van der Maaten, L and Weinberger, KQ. 2017. ‘Densely connected Convolutional Networks’. IEEE Computer Vision and Pattern Recognition, 2017: 2261–2269. DOI: 10.1109/CVPR.2017.243
- 75Ibrahim, R and Shafiq, MO. 2023. ‘Explainable convolutional neural networks: a taxonomy, review, and future directions’. ACM Computing Surveys, 55(10): 206. DOI: 10.1145/3563691
- 76Jiménez-García, B, Aznarte, J, Abellán, N, Baquedano, E and Domínguez-Rodrigo, M. 2020a. ‘Deep learning improves taphonomic resolution: high accuracy in differentiating tooth marks made by lions and jaguars’. Journal of the Royal Society Interface, 17: 20200446. DOI: 10.1098/rsif.2020.0446
- 77Jiménez-García, B, Aznarte, J, Abellán, N, Baquedano, E and Domínguez-Rodrigo, M. 2020b. ‘Corrigendum to Deep learning improves taphonomic resolution: high accuracy in differentiating tooth marks made by lions and jaguars’. Journal of the Royal Society Interface, 17: 20200782. DOI: 10.1098/rsif.2020.0782
- 78Kaehler, A and Bradski, G. 2017. Learning OpenCV 3: Computer Vision in C++ with the OpenCV Library. Tokyo: O’Reilly.
- 79Kornblith, S, Shlens, J and Le, QV. 2018. ‘Do better ImageNet models transfer better?’ Computer Vision and Pattern Recognition, 2656–2666. DOI: 10.1109/CVPR.2019.00277
- 80Kothari, S, Phan, JH, Stokes, TH, Osunkoya, AO, Young, AN and Wang, MD. 2014. ‘Removing batch effects from histopathological images for enhanced cancer diagnosis’. IEEE Journal of Biomedical Health Informatics, 18(3): 765–772. DOI: 10.1109/JBHI.2013.2276766
- 81Koziarski, M and Cyganek, B. 2018. ‘Impact of low resolution on image recognition with deep neural networks’. International Journal of Applied Mathematics and Computer Science, 28(4): 735–744. DOI: 10.2478/amcs-2018-0056
- 82Krizhevsky, A. 2010. Convolutional deep belief networks on CIFAR-10. Technical report, University of Toronto, Toronto, Canada.
- 83Krizhevsky, A and Hinton, G. 2009. Learning multiple layers of features from tiny images. Technical report, University of Toronto, Toronto, Canada.
- 84Krizhevsky, A, Sutskever, I and Hinton, G. 2012. ‘ImageNet classification with deep convolutional neural networks’. Advances in Neural Information Processing Systems, 25: 1–9.
- 85Kurakin, A, Goodfellow, IJ and Bengio, S. 2017. ‘Adversarial examples in the physical world’. International Conference on Learning Representations. Available at: https://arxiv.org/pdf/1607.02533.pdf [Last accessed 27/08/2023].
- 86Kuzin, A, Fattakhov, A, Kibardin, I, Iglovikov, VI and Dautov, R. 2018. ‘Camera model identification using convolutional neural networks’. arXiv. Available at: https://arxiv.org/pdf/1810.02981.pdf [Last accessed 31/08/2023].
- 87Leach, R. 2011. Optical measurement of surface topography. The Netherlands: Springer. DOI: 10.1007/978-3-642-12012-1
- 88LeCun, Y, Bengio, Y and Hinton, G. 2015. ‘Deep Learning’. Nature, 521: 436–444. DOI: 10.1038/nature14539
- 89LeCun, Y, Bottou, L, Bengio, Y and Haffner, P. 1998. ‘Gradient-based learning applied to document recognition’. Proceedings of the IEEE, 86(11): 2278–2324. DOI: 10.1109/5.726791
- 90Van Lent, M, Fisher, W and Mancuso, M. 2004. ‘An explainable artificial intelligence system for small-unit tactical behaviour’. In: Proceedings of the 16th Conference on Innovative Applications of Artificial Intelligence. 900–907.
- 91Li, J and Liu, Z. 2018. ‘Efficient camera self-calibration method for remote sensing photogrammetry’. Optics Express, 26(11): 14213–14231. DOI: 10.1364/OE.26.014213
- 92Lowe, DG. 2004. ‘Distinctive image features from scale-invariant keypoints’. International Journal of Computer Vision, 60: 91–110. DOI: 10.1023/B:VISI.0000029664.99615.94
- 93Madry, A, Makelov, A, Schmidt, L, Tsipras, D and Vladu, A. 2019. ‘Towards deep learning models resistant to adversarial attacks’. International Conference on Learning Representations. Available online: https://arxiv.org/pdf/1706.06083.pdf [Last accessed 27/08/2023].
- 94Malassé, AD, Moigne, AM, Singh, M, Calligaro, T, Karir, B, Gaillard, C, Kaur, A, Bharwaj, V, Pal, S, Abdessadok, S, Sao, CC, Gargani, J, Tudryn, A and Sanz, MG. 2016. ‘Intentional cut marks on bovid from the Quranwala zone, 2.6 Ma, Siwalik Frontal Range, Northwestern India’. Comptes Rendus Palevol, 15(3–4): 317–339. DOI: 10.1016/j.crpv.2015.09.019
- 95Martín-Viveros, JI and Ollé, A. 2020. ‘Use-wear and residue mapping on experimental chert tools. A multiscalar approach combining digital 3D, optical, and scanning electron microscopy’. Journal of Archaeological Science: Reports, 30: 102236. DOI: 10.1016/j.jasrep.2020.102236
- 96McPherron, SP, Alemseged, Z, Marean, CW, Wynn, JG, Reed, D, Geraads, D, Bobe, R and Béarat, H. 2010. ‘Evidence for stone-tool-assisted consumption of animal tissues before 3.39 million years ago at Dikika, Ethiopia’. Nature Letters, 466: 857–860. DOI: 10.1038/nature09248
- 97McPherron, SP, Archer, W, Otárola-Castillo, ER, Torquato, MG and Keevil, TL. 2022. ‘Machine learning, bootstrapping, null models, and why we are still not 100% sure which bone surface modifications were made by crocodiles’. Journal of Human Evolution, 164: 103071. DOI: 10.1016/j.jhevol.2021.103071
- 98Mittal, A, Moorthy, AK and Bovik, AC. 2012. ‘No-reference image quality assessment in the spatial domain’. IEEE Transactions on Image Processing, 21(12): 4695–4708. DOI: 10.1109/TIP.2012.2214050
- 99Moclán, A, Domínguez-Rodrigo, M, Huguet, R, Pizarro-Monzo, M, Arsuaga, JL, Pérez-González, A and Baquedano, E. 2024. ‘Deep learning identification of anthropogenic modifications on a carnivore remain suggests use of hyena pelts by Neanderthals in the Navalmaíllo rock shelter (Pinilla del Valle, Spain)’. Quaternary Science Reviews, 329: 108560. DOI: 10.1016/j.quascirev.2024.108560
- 100Morgand, A and Tamaazousti, M. 2014. ‘Generic and real-time detection of specular reflections in images’. IEEE Computer Vision Theory and Applications, 14: 274–282. DOI: 10.5220/0004680102740282
- 101Njau, J and Blumenschine, RJ. 2006. ‘A diagnosis of crocodile feeding traces on larger mammal bone, with fossil examples from the Plio-Pleistocene Olduvai Basin, Tanzania’. Journal of Human Evolution, 50(2): 142–162. DOI: 10.1016/j.jhevol.2005.08.008
- 102Nguyen, A, Yosinski, J and Clune, J. 2015. ‘Deep neural networks are easily fooled: high confidence predictions for unrecognizable images’. IEEE Computer Vision and Pattern Recognition, 2015: 427–436. DOI: 10.1109/CVPR.2015.7298640
- 103Njau, J and Gilbert, H. 2016. ‘Standardizing terms for crocodile-induced bite marks on bone surfaces in light of the frequent bone modification equifinality found to result from crocodile feeding behaviour, stone tool modification, and trampling’. FOROST (Forensic Osteology) Occasional Publications, 3: 1–13
- 104Oliver, JS. 1984. ‘Analogues and site context: bone damages from Shield Trap Cave (24CB91), Carbon County, Montana, USA’. In: Bonnichsen, R and Sorg, MH (eds.) Bone Modification. Maine: University of Maine Press. 61–72.
- 105Pante, MC, Scott, RS, Blumenschine, RJ and Capaldo, SD. 2015. ‘Revalidation of bone surface modification models for inferring fossil hominin and carnivore feeding interactions’. Quaternary International, 355: 164–168. DOI: 10.1016/j.quaint.2014.09.007
- 106Pech-Pacheco, JL, Cristóbal, G, Chamorro-Martínez, J and Fernández-Valdivia, J. 2000. ‘Diatom autofocusing in brightfield microscopy: a comparative study’. Proceedings of the 15th International Conference on Pattern Recognition. DOI: 10.1109/ICPR.2000.903548
- 107Pertuz, S, Puig, D and Garcia, MA. 2013. ‘Analysis of focus measure operators for shape-from-focus’. Pattern Recognition, 46: 1415–1432. DOI: 10.1016/j.patcog.2012.11.011
- 108Pickering, TR. 2013. Rough and Tumble: Aggression, hunting and human evolution. California: University of California Press. DOI: 10.1525/9780520955127
- 109Pineda, A, Cáceres, I, Saladié, P, Huguet, R, Morales, JI, Rosas, A and Vallverdú, J. 2019. ‘Tumbling effects on bone surface modifications (BSM): An experimental application on archaeological deposits from the Barranc de la Boella site (Tarragona, Spain)’. Journal of Archaeological Science, 102: 35–47. DOI: 10.1016/j.jas.2018.12.011
- 110Pineda, A, Courtenay, LA, Téllez, E and Yravedra, J. 2023. ‘An experimental approach to the analysis of altered cut marks in archaeological contexts from Geometric Morphometrics’. Journal of Archaeological Science: Reports, 48: 103850. DOI: 10.1016/j.jasrep.2023.103850
- 111Pineda, A, Saladié, P, Vergès, JM, Huguet, R, Cáceres, I and Vallverdú, J. 2014. ‘Trampling versus cut marks on chemically altered surfaces: An experimental approach and archaeological application at the Barranc de la Boella site (la Canonja, Tarragona, Spain)’. Journal of Archaeological Science, 50: 84–93. DOI: 10.1016/j.jas.2014.06.018
- 112Pizarro-Monzo, M and Domínguez-Rodrigo, M. 2020. ‘Dynamic modification of cut marks by trampling: temporal assessment through the use of mixed-effect regressions and deep learning methods’. Archaeological and Anthropological Sciences, 12: 4. DOI: 10.1007/s12520-019-00966-6
- 113Pizarro-Monzo, M, Organista, E, Cobo-Sánchez, L, Baquedano, E and Domínguez-Rodrigo, M. 2022. ‘Determining the diagenetic paths of archaeofaunal assemblages and their palaeoecology through artificial intelligence: an application to Oldowan sites from Olduvai Gorge (Tanzania)’. Journal of Quaternary Science, 37(3): 543–557. DOI: 10.1002/jqs.3385
- 114Pizarro-Monzo, M, Rosell, J, Rufá, A, Rivals, F and Blasco, R. 2023. ‘A Deep learning-based taphonomical approach to distinguish the modifying agent in the Late Pleistocene site of Toll Cova (Barcelona, Spain)’. Historical Biology, DOI: 10.1080/08912963.2023.2242370
- 115Pobiner, BL. 2020. ‘The zooarchaeology and paleoecology of early hominin scavenging’. Evolutionary Anthropology, 29: 68–82. DOI: 10.1002/evan.21824
- 116Raghu, M, Zhang, C, Kleinberg, J and Bengio, S. 2019. ‘Transfusion: understanding transfer learning for medical imaging’. Neural Information Processing Systems, 33: 1–11
- 117Ray, I, Raipuria, G and Singhal, N. 2022. ‘Rethinking ImageNet Pre-training for computational histopathology’. IEEE Engineering in Medicine and Biology Society, 44: 3059–3062. DOI: 10.1109/EMBC48229.2022.9871687
- 118Redmon, J, Divvala, S, Girshick, R and Farhadi, A. 2016. ‘You only look once: Unified, real-time object detection’. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 779–788. DOI: 10.1109/CVPR.2016.91
- 119Reeves, NM. 2009. ‘Taphonomic effects of vulture scavenging’. Journal of Forensic Sciences, 54(3): 503–528. DOI: 10.1111/j.1556-4029.2009.01020.x
- 120Roberts, M, Driggs, D, Thorpe, M, Gilbey, J, Yeung, M, Ursprung, S, Aviles-Rivero, AI, Etmann, C, McCague, C, Beer, L, Weir-McCall, JR, Teng, Z, Gkrania-Klotsas, E, Rudd, JHF, Sala, E and Schönlieb, CB. 2021. ‘Common pitfalls and recommendations for using machine learning to detect and prognosticate for COVID-19 using chest radiographs and CT scans’. Nature Machine Intelligence, 3: 199–217. DOI: 10.1038/s42256-021-00307-0
- 121Rosebrock, A. 2021. Practical Python and OpenCV. Toronto: PyImageSearch.
- 122Ruiz de Oña, E, Barbero-Garcia, I, González-Aguilera, D, Remondino, F, Rodríguez-Gonzálvez, P and Hernández-López, D. 2023. ‘PhotoMatch: An open source tool for multi-view and multi-modal feature based image matching’. Applied Sciences, 13(9): 5467. DOI: 10.3390/app13095467
- 123Sahle, Y, El Zaatari, S and White, TD. 2017. ‘Hominid butchers and biting crocodiles in the African Plio-Pleistocene’. Proceedings of the National Academy of Sciences, 114: 13164–13169. DOI: 10.1073/pnas.1716317114
- 124Santoro, A, Bartunov, S, Botvinick, M, Wierstra, D and Lillicrap, T. 2016. ‘Meta-learning with memory augmented neural networks’. Proceedings of the International Conference on Machine Learning, 33: 1–9.
- 125Schenk, T. 1999. Digital Photogrammetry. Marcombo: The Ohio State University.
- 126Selvaggio, MM. 1998. ‘Concerning the three stage model of carcass processing at FLK Zinjanthropus: a reply to Capaldo’. Journal of Human Evolution, 35: 313–315. DOI: 10.1006/jhev.1998.0241
- 127Selvaraju, RR, Cogswell, M, Das, A, Vedantam, R, Parikh, D and Batra, D. 2019. Grad-CAM: Visual explanations from deep networks via Gradient-based localization. Available at: https://arxiv.org/pdf/1610.02391.pdf [Last accessed 30/08/2023].
- 128Selvaraju, RR, Das, A, Vedantam, R, Cogswell, M, Parikh, D and Batra, D. 2017. Grad-CAM: Why did you say that? Available at: https://arxiv.org/pdf/1611.07450.pdf [Last accessed 30/08/2023].
- 129Shanmugavel, AB, Ellappan, V, Mahendran, A, Subramanian, M, Lakshmanan, R and Mazzara, M. 2023. ‘A novel ensemble based reduced overfitting model with convolutional neural network for traffic sign recognition system’. Electronics, 12(4): 926. DOI: 10.3390/electronics12040926
- 130Shipman, P. 1988. ‘Actualistic studies of animal resources and hominid activities’. In: Olsen, SL (ed.) Scanning Electron Microscopy in Archaeology. Oxford: BAR International Series 452. 261–285.
- 131Shipman, P and Rose, J. 1983. ‘Evidence of butchery and hominid activities at Torralba and Ambrona: an evaluation using microscopic techniques’. Journal of Archaeological Science, 10(5): 465–474. DOI: 10.1016/0305-4403(83)90061-4
- 132Shipman, P and Walker, A. 1989. ‘The costs of becoming a predator’. Journal of Human Evolution, 18: 373–392. DOI: 10.1016/0047-2484(89)90037-7
- 133Simonyan, K and Zisserman, A. 2015. ‘Very deep convolutional networks for large-scale image recognition’. International Conference on Learning Representations, 3: 1–14. Available online: https://arxiv.org/pdf/1409.1556.pdf [Last accessed 28/08/2023].
- 134Srinivasa-Desikan, B. 2018. Natural Language Processing and Computational Linguistics. Birmingham: Packt.
- 135Su, J, Vargas, DV and Sakurai, K. 2019. ‘One pixel attack for fooling deep neural networks’. IEEE Transactions on Evolutionary Computation, 23(5): 828–841. DOI: 10.1109/TEVC.2019.2890858
- 136Szegedy, C, Liu, W, Jia, Y, Sermanet, P, Reed, S, Anguelov, D, Erhan, D, Vanhoucke, V and Rabinovich, A. 2015. ‘Going deeper with convolutions’. IEEE Computer Vision and Pattern Recognition, 2015: 1–9. DOI: 10.1109/CVPR.2015.7298594
- 137Szegedy, C, Vanhoucke, V, Ioffe, S, Shlens, J and Wojna, Z. 2016. Rethinking the inception architecture for computer vision. IEEE Computer Vision and Pattern Recognition, 2016: 2818–2826. DOI: 10.1109/CVPR.2016.308
- 138Szegedy, C, Zaremba, W, Sutskever, I, Bruna, J, Erhan, D, Goodfellow, I and Fergus, R. 2014. ‘Intriguing properties of neural networks’. International Conference on Learning Representations. Available online: https://arxiv.org/pdf/1312.6199.pdf [Last accessed 26/08/2023].
- 139Tan, M and Le, QV. 2020. ‘EfficientNet: Rethinking model scaling for convolutional neural networks’. Proceedings of the International Conference on Machine Learning, 36: 1–11. Available online: https://arxiv.org/pdf/1905.11946.pdf [Last accessed 30/10/2023].
- 140Tian, J and Chen, L. 2012. ‘Adaptive multi-focus image fusion using a wavelet-based statistical sharpness measure’. Signal Processing, 92: 2137–2146. DOI: 10.1016/j.sigpro.2012.01.027
- 141Tuama, A, Comby, F and Chaumont, M. 2016. ‘Camera model identification with the use of deep convolutional neural networks’. In: Proceedings of the IEEE International Workshop on Information Forensics and Security. 1–16. DOI: 10.1109/WIFS.2016.7823908
- 142Valtierra, N, Courtenay, LA and López-Polín, L. 2020. ‘Microscopic analyses of the effects of mechanical cleaning interventions on cut marks’. Archaeological and Anthropological Sciences, 12: 193. DOI: 10.1007/s12520-020-01153-8
- 143Vedantam, R, Lin, X, Batra, T, Zitnick, CL and Parikh, D. 2015. ‘Learning common sense through visual abstraction’. IEEE International Conference on Computer Vision, 2542–2550. DOI: 10.1109/ICCV.2015.292
- 144Vegara-Riquelme, M, Gidna, A, Uribelarrea del Val, D, Baquedano, E and Domínguez-Rodrigo, M. 2023. ‘Reassessing the role of carnivores in the formation of FLK North 3 (Olduvai Gorge, Tanzania): A pilot taphonomic analysis using Artificial Intelligence Tools’. Journal of Archaeological Science: Reports, 47: 103736. DOI: 10.1016/j.jasrep.2022.103736
- 145Vinyals, O, Toshev, A, Bengio, S and Erhan, D. 2015. ‘Show and Tell: A Neural Image Caption Generator’. IEEE Computer Vision and Pattern Recognition Conference, 2015: 3156–3164. DOI: 10.1109/CVPR.2015.7298935
- 146Wang, Z, Sheikh, HR and Bovik, AC. 2002. ‘No-reference perceptual quality assessment of JPEG compressed images’. International Conference on Image Processing, 477–480. DOI: 10.1109/ICIP.2002.1038064
- 147Wolpert, DH. 1992. ‘Stacked Generalization’. Neural Networks, 5: 241–259. DOI: 10.1016/S0893-6080(05)80023-1
- 148Yezzi-Woodley, K, Terwilliger, A, Li, J, Chen, E, Tappen, M, Calder, J and Olver, P. 2024. ‘Using machine learning on new feature sets extracted from three-dimensional models of broken animal bones to classify fragments according to break agent’. Journal of Human Evolution, 187: 103495. DOI: 10.1016/j.jhevol.2024.103495
- 149Yosinski, J, Clune, J, Bengio, Y and Lipson, H. 2014. ‘How transferable are features in deep neural networks?’ Advances in Neural Information Processing Systems, 27: 1–14.
- 150Zhang, C, Bastian, J, Shen, C, Hengel, A and Shen, T. 2013. ‘Extended depth-of-field via focus stacking and graph cuts’. IEEE International Conference on Image Processing, 1272–1276. DOI: 10.1109/ICIP.2013.6738262
- 151Zhou, ZH. 2012. Ensemble Methods. New York: Chapman & Hall. DOI: 10.1201/b12207
- 152Zitová, B and Flusser, J. 2003. ‘Image registration methods: a survey’. Image and Vision Computing, 21: 977–1000. DOI: 10.1016/S0262-8856(03)00137-9
- 153Zou, WWW and Yuen, PC. 2012. ‘Very low resolution face recognition problem’. IEEE Transactions on Image Processing, 21(1): 327–340. DOI: 10.1109/TIP.2011.2162423
