
Figure 1
Accuracy of object detection software against the human eye. ILSVRC = ImageNet Large Scale Visual Recognition Challenge (Source: Tyukin et al. 2018, figure 1).

Figure 2
Daniël van Helden photographing a near-complete terra sigillata vessel, with a complete profile, in the antiquarian collection of the Museum of London (Photo: Victoria Szafara).

Figure 3
Daniël van Helden assisting volunteers Imogen Lucas, Paul Minhoff, and Robert Hunter to photograph sherds from the MOLA collection (Photo: Victoria Szafara).

Figure 4
Drawings of the range of Dragendorff forms from which our classes were constructed (Source: Webster 1996, Figures 20, 21, 22, 24, 25, 26, 30, 31, 32 and 36, reproduced with permission from Peter Webster).
Table 1
Table giving the exact number of vessels per class.
| CLASS | NO. OF VESSELS |
|---|---|
| Dr18 | 51 |
| Dr24–25 | 8 |
| Dr27 | 28 |
| Dr29 | 18 |
| Dr33 | 13 |
| Dr35 | 12 |
| Dr36 | 15 |
| Dr37 | 9 |
| Dr38 | 8 |

Figure 5
Histogram detailing the number of vessels per class in our dataset.
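
For reference, a chart like Figure 5 can be produced from the Table 1 counts with a few lines of matplotlib. The sketch below is illustrative only: the variable names, styling and output file are our assumptions, not the project's plotting code.

```python
# Minimal sketch of a per-class count chart in the style of Figure 5.
# Counts are taken from Table 1; names and styling are illustrative.
import matplotlib.pyplot as plt

vessel_counts = {
    "Dr18": 51, "Dr24-25": 8, "Dr27": 28, "Dr29": 18, "Dr33": 13,
    "Dr35": 12, "Dr36": 15, "Dr37": 9, "Dr38": 8,
}

fig, ax = plt.subplots(figsize=(8, 4))
ax.bar(list(vessel_counts.keys()), list(vessel_counts.values()))
ax.set_xlabel("Dragendorff class")
ax.set_ylabel("Number of vessels")
plt.tight_layout()
plt.savefig("vessels_per_class.png")
```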

Figure 6
Example images from the three simulated datasets and of the photographed real vessels for our nine different Dr classes. From left to right: matplotlib, blender1, blender2, real vessels. (Photos of real vessels taken by the Arch-I-Scan Project with permission from the Museum of London).

Figure 7
Plot showing accuracy results for the four architectures considered, after final training with photos of real vessels: Inception v3 (green), ResNet50 v2 (blue), MobileNet v2 (orange) and VGG19 (red). The different pre-training regimes considered are labelled on the x-axis. On the vertical axis, 1.0 = 100%. Each point indicates the average accuracy over the 20 splits, with its two-sigma error band drawn as a line. (NB: these networks were compared under the very specific conditions of our experiment; this was not intended as a general comparison between architectures and should not be taken as such).
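
As a rough illustration of how a plot like Figure 7 can be assembled, the sketch below averages per-split accuracies and draws a two-sigma error bar per pre-training regime and architecture. The accuracy values are random placeholders, and the regime labels, colours and output file name are assumptions rather than the project's actual code.

```python
# Illustrative sketch of a Figure 7-style plot: mean accuracy over 20 splits
# with a two-sigma error bar per pre-training regime and architecture.
# All values below are placeholders, not the project's results.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
regimes = ["none", "matplotlib", "blender1", "blender2"]   # assumed x-axis labels
architectures = {"Inception v3": "green", "ResNet50 v2": "blue",
                 "MobileNet v2": "orange", "VGG19": "red"}

fig, ax = plt.subplots()
x = np.arange(len(regimes))
for name, colour in architectures.items():
    # accuracies[regime, split]: 20 splits per regime (placeholder values)
    accuracies = rng.uniform(0.5, 0.9, size=(len(regimes), 20))
    mean = accuracies.mean(axis=1)
    two_sigma = 2 * accuracies.std(axis=1, ddof=1)
    ax.errorbar(x, mean, yerr=two_sigma, color=colour, label=name,
                marker="o", capsize=3, linestyle="none")

ax.set_xticks(x)
ax.set_xticklabels(regimes)
ax.set_xlabel("Pre-training regime")
ax.set_ylabel("Accuracy (1.0 = 100%)")
ax.legend()
plt.tight_layout()
plt.savefig("accuracy_by_regime.png")
```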

Table 2
Confusion matrix for the neural network architecture Inception v3 trained without pre-training on simulated images. Diagonal (green) elements are the correct identifications. Major instances of confusion are those off-diagonal (red) elements which exceed twice the probability expected if incorrect identifications had been distributed uniformly across the classes.

Table 3
Confusion matrix for the neural network architecture Inception v3 trained with pre-training on the blender2 dataset. Diagonal (green) elements are the correct identifications. Major instances of confusion are those off-diagonal (red) elements which exceed twice the probability expected if incorrect identifications had been distributed uniformly across the classes.
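
To make the highlighting rule in Tables 2 and 3 concrete: with nine classes, if a row's misidentifications were spread uniformly over the eight incorrect classes, each off-diagonal cell would be expected to hold 1/8 of that row's errors, and a cell is flagged when it exceeds twice that share. The sketch below applies this reading of the rule using scikit-learn's confusion_matrix; the labels and predictions are random placeholders rather than our results, and the uniform-over-eight-classes interpretation is our assumption.

```python
# Illustrative check for "major confusion" cells in a confusion matrix:
# flag off-diagonal entries that exceed twice the count expected if a row's
# errors were spread uniformly over the other classes. Placeholder data only.
import numpy as np
from sklearn.metrics import confusion_matrix

classes = ["Dr18", "Dr24-25", "Dr27", "Dr29", "Dr33",
           "Dr35", "Dr36", "Dr37", "Dr38"]

rng = np.random.default_rng(1)
y_true = rng.choice(classes, size=500)                  # placeholder ground truth
y_pred = np.where(rng.random(500) < 0.7, y_true,        # placeholder predictions
                  rng.choice(classes, size=500))

cm = confusion_matrix(y_true, y_pred, labels=classes)

n_other = len(classes) - 1
for i, true_label in enumerate(classes):
    errors = cm[i].sum() - cm[i, i]                     # misidentifications in this row
    threshold = 2 * errors / n_other                    # twice the uniform expectation
    for j, pred_label in enumerate(classes):
        if i != j and cm[i, j] > threshold:
            print(f"{true_label} often confused with {pred_label}: {cm[i, j]}")
```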
