The Arch-I-Scan Project: Artificial Intelligence and 3D Simulation for Developing New Approaches to Roman Foodways

Open Access | Aug 2022

Figures & Tables

Figure 1

Accuracy of object detection software against the human eye. ILSVRC = ImageNet Large Scale Visual Recognition Challenge (Source: Tyukin et al. 2018, figure 1).

Figure 2

Daniël van Helden photographing a near-complete terra sigillata vessel, with a complete profile, in the antiquarian collection of the Museum of London (Photo: Victoria Szafara).

Figure 3

Daniël van Helden assisting volunteers Imogen Lucas, Paul Minhoff, and Robert Hunter in photographing sherds from the MOLA collection (Photo: Victoria Szafara).

Figure 4

Drawings of the range of Dragendorff forms from which our classes were constructed (Source: Webster 1996, Figures 20, 21, 22, 24, 25, 26, 30, 31, 32 and 36, reproduced with permission from Peter Webster).

Table 1

Table giving the exact number of vessels per class.

CLASS      NO. OF VESSELS
Dr18       51
Dr24–25    8
Dr27       28
Dr29       18
Dr33       13
Dr35       12
Dr36       15
Dr37       9
Dr38       8
Figure 5

Histogram detailing the number of vessels per class in our dataset.
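
Table 1 and the histogram in Figure 5 summarise the same per-class counts. As a minimal sketch, assuming only matplotlib and the counts transcribed from Table 1 (the variable names are illustrative, not from the project code), such a bar chart could be produced as follows:

import matplotlib.pyplot as plt

# Vessel counts per Dragendorff class, transcribed from Table 1.
vessel_counts = {
    "Dr18": 51, "Dr24-25": 8, "Dr27": 28,
    "Dr29": 18, "Dr33": 13, "Dr35": 12,
    "Dr36": 15, "Dr37": 9, "Dr38": 8,
}

# Bar chart of vessels per class, analogous to Figure 5.
plt.bar(list(vessel_counts.keys()), list(vessel_counts.values()))
plt.xlabel("Dragendorff class")
plt.ylabel("Number of vessels")
plt.tight_layout()
plt.show()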

Figure 6

Examples of images of the three simulated datasets and of the photographed real vessels of our nine different Dr classes. From left to right: matplotlib, blender1, blender2, real vessels. (Photos of real vessels taken by the Arch-I-Scan Project with permission from the Museum of London).

Figure 7

Plot showing accuracy results for the four architectures considered, after final training with photos of real vessels: Inception v3 (green), Resnet50 v2 (blue), Mobilenet v2 (orange) and VGG19 (red). The different pre-training regimes considered are labelled on the x-axis. On the vertical axis, 1.0 = 100%. Each point indicates the average accuracy over the 20 splits, with its two-sigma error band drawn as a line. (NB. These networks were compared under the very specific conditions of our experiment; this was not intended as a comparison between networks and should not be taken as such.)
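
Each point in Figure 7 is the accuracy averaged over 20 train/test splits, with a two-sigma error band. The exact definition of sigma is not restated here; the sketch below assumes it is the sample standard deviation of the per-split accuracies, and the accuracy values themselves are placeholders rather than results from the paper:

import numpy as np

# Placeholder per-split accuracies for one architecture and pre-training
# regime (20 values, one per train/test split); not data from the paper.
rng = np.random.default_rng(seed=0)
split_accuracies = rng.uniform(0.85, 0.95, size=20)

mean_accuracy = split_accuracies.mean()
# Two-sigma band around the mean, as drawn for each point in Figure 7
# (here sigma = sample standard deviation of the 20 split accuracies).
two_sigma = 2 * split_accuracies.std(ddof=1)

print(f"accuracy = {mean_accuracy:.3f} +/- {two_sigma:.3f} (two sigma)")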

Table 2

Confusion matrix for the neural network architecture Inception v3 trained without pre-training with simulated images. Diagonal (green) elements are the correct identifications. Major instances of confusion are those off-diagonal (red) elements that exceed twice the probability expected had the incorrect identifications been distributed uniformly across the classes.

Table 3

Confusion matrix for the neural network architecture Inception v3 trained with pre-training on the blender2 dataset. Diagonal (green) elements are the correct identifications. Major instances of confusion are those off-diagonal (red) elements that exceed twice the probability expected had the incorrect identifications been distributed uniformly across the classes.
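
Both confusion matrices use the same flagging rule: an off-diagonal cell counts as a major confusion when it exceeds twice the value expected had that class's misidentifications been spread uniformly over the remaining classes. A minimal sketch of that rule, assuming the matrix is given with true classes as rows; the 4-class matrix below is made up purely for illustration, not data from Tables 2 or 3:

import numpy as np

def major_confusions(cm):
    """Return (true, predicted) index pairs whose off-diagonal count exceeds
    twice the uniform-error expectation for that row (true class)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.shape[0]
    flagged = []
    for i in range(n):
        errors = cm[i].sum() - cm[i, i]   # misidentifications of class i
        expected = errors / (n - 1)       # uniform share per wrong class
        for j in range(n):
            if j != i and cm[i, j] > 2 * expected:
                flagged.append((i, j))
    return flagged

# Made-up 4-class matrix: class 0 is mainly confused with class 1.
example = [[7, 3, 0, 0],
           [1, 9, 1, 1],
           [0, 0, 10, 0],
           [0, 0, 0, 10]]
print(major_confusions(example))  # [(0, 1)]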

DOI: https://doi.org/10.5334/jcaa.92 | Journal eISSN: 2514-8362
Language: English
Submitted on: Apr 19, 2022 | Accepted on: Jul 7, 2022 | Published on: Aug 17, 2022
Published by: Ubiquity Press
In partnership with: Paradigm Publishing Services
Publication frequency: 1 issue per year

© 2022 Daniël van Helden, Evgeny Mirkes, Ivan Tyukin, Penelope Allison, published by Ubiquity Press
This work is licensed under the Creative Commons Attribution 4.0 License.