
Ceramic Fabric Classification of Petrographic Thin Sections with Deep Learning

By: Mike Lyons
Open Access | Sep 2021

Figures & Tables

Figure 1

Overview map of the northern coast of Honduras showing Guadalupe’s location. Elevation data: ASTGTM Version 3 (NASA et al. 2019).

Figure 2

Selection of pottery examples from Guadalupe. a) Tripod vessel with geometric incisions, punctates, and zoomorphic appliqué lug (PAG-15-683). b) Constricted vessel with wave patterns and zoomorphic appliqué lug (PAG-40-3). c) Rim sherd with geometric paint pattern (PAG-43-3). d) Shallow tripod vessel with zoomorphic appliqué lug and anthropomorphic appliqué supports (PAG-53-29). e) Turtle-shaped ocarina (PAG-43-1). f) Roller stamp with geometric pattern (PAG-120-1). Photo credits: a) and b) F. Fecher, c) K. Engel, d) T. Remsey, e) P. Bayer, and f) M. Lyons.

Figure 3

Examples of the five ceramic fabric types analyzed. Row 1) Fabric a: inclusion-sparse birefringent; Row 2) Fabric c: amphibole-rich; Row 3) Fabric d: common, fine inclusions; Row 4) Fabric e: microfossil-rich; Row 5) Fabric w: poorly sorted angular inclusions. Column a) exterior surface (scale in cm); Column b) fresh break (scale in mm); Column c) close-up of thin section under cross-polarized light (image width = 2.8 mm); Column d) thin section under cross-polarized light (scale: see image).

Figure 4

Conceptual schematic of the model architecture used. Transfer learning combines the ‘frozen’ base layers of the VGG19 model with additional trainable layers that ‘tune’ the model to the thin-section dataset.
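
The architecture sketched in Figure 4 can be expressed compactly in code. The following is a minimal Keras sketch of the captioned idea, a frozen VGG19 base topped with a small trainable head; the head layers, input resolution, and optimizer here are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of transfer learning with a frozen VGG19 base, assuming
# five fabric classes (a, c, d, e, w). Head layers, input size, and
# optimizer are illustrative assumptions, not the paper's settings.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_FABRICS = 5

# Load VGG19 pre-trained on ImageNet, without its classification head.
base = tf.keras.applications.VGG19(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # 'freeze' the base layers

# Trainable layers added on top to 'tune' the model to the new dataset.
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_FABRICS, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```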

Table 1

Approach A data partition of training, validation, and test image counts per fabric.

FABRIC    TRAINING    VALIDATION    TEST    TOTAL
a         80          10            11      101
c         127         16            16      159
d         323         40            41      404
e         140         18            18      176
w         183         23            23      229
total     853         107           109     1,069
percent   79.79%      10.01%        10.20%  100%
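
For readers wanting to reproduce a partition like Table 1's roughly 80/10/10 split, the sketch below shows one way to stratify by fabric so that each class keeps its proportions across partitions. It assumes a list of image paths with fabric labels and uses scikit-learn; it is not necessarily how the author partitioned the data.

```python
# Minimal sketch of a stratified ~80/10/10 partition like Table 1,
# assuming `paths` and `labels` are parallel lists. Function name and
# seed are placeholders.
from sklearn.model_selection import train_test_split

def partition(paths, labels, seed=42):
    # First split off 80% for training, stratified by fabric label.
    train_p, rest_p, train_y, rest_y = train_test_split(
        paths, labels, train_size=0.8, stratify=labels, random_state=seed)
    # Split the remaining 20% evenly into validation and test sets.
    val_p, test_p, val_y, test_y = train_test_split(
        rest_p, rest_y, train_size=0.5, stratify=rest_y, random_state=seed)
    return (train_p, train_y), (val_p, val_y), (test_p, test_y)
```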
Figure 5

Training and testing results for VGG19 with Approach A. a) Training and validation accuracy (bottom) and loss (top) per epoch. b) Confusion matrix of test data showing the model’s predicted fabric types (Prediction) vs. actual fabric types (Reference).
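
A confusion matrix like the one in panel b) can be produced directly from a trained model's test predictions. The sketch below uses scikit-learn (the paper's own tooling may differ) and assumes a trained Keras `model` plus test arrays `x_test` and `y_test` with integer fabric labels; all names are placeholders.

```python
# Minimal sketch of building a confusion matrix like panel b), assuming a
# trained Keras `model` and test arrays `x_test`, `y_test` (integer labels
# 0-4 for fabrics a, c, d, e, w). Names are placeholders.
import numpy as np
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

pred = np.argmax(model.predict(x_test), axis=1)  # most probable fabric per image
cm = confusion_matrix(y_test, pred)  # scikit-learn: rows = reference, cols = prediction
# Note: the figure may plot the transpose (prediction on rows).
ConfusionMatrixDisplay(cm, display_labels=["a", "c", "d", "e", "w"]).plot()
```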

Figure 6

Training and testing results for ResNet50 with Approach A. a) Training and validation accuracy (bottom) and loss (top) per epoch. b) Confusion matrix of test data showing the model’s predicted fabric types (Prediction) vs. actual fabric types (Reference).

Table 2

Approach B data partition of training, validation, and test image counts per fabric.

FABRIC    TRAINING    VALIDATION    TEST    TOTAL
a         58          7             36      101
c         102         13            44      159
d         284         35            85      404
e         92          11            73      176
w         153         19            57      229
total     689         85            295     1,069
percent   64.45%      7.95%         27.60%  100%
Figure 7

Training and testing results for VGG19 with Approach B. a) Training and validation accuracy (bottom) and loss (top) per epoch. b) Confusion matrix of test data showing the model’s predicted fabric types (Prediction) vs. actual fabric types (Reference).

Figure 8

Training and testing results for ResNet50 with Approach B. a) Training and validation accuracy (bottom) and loss (top) per epoch. b) Confusion matrix of test data showing the model’s predicted fabric types (Prediction) vs. actual fabric types (Reference).

Table 3

Summary of results showing the accuracy of fabric predictions for training, validation, and testing images with respect to each combination of data partitioning approach and base model.

APPROACH    BASE MODEL    TRAINING    VALIDATION    TEST
A           VGG19         100%        100%          99.1%
A           ResNet50      100%        99.1%         100%
B           VGG19         99.5%       96.6%         96.3%
B           ResNet50      99.5%       97.7%         93.6%
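
The per-partition accuracies in Table 3 correspond to straightforward evaluation passes over each split. A minimal sketch, assuming a compiled Keras `model` (as in the Figure 4 sketch) and the three partitions as (images, one-hot labels) tuples; the tuple names are placeholders, not identifiers from the paper:

```python
# Minimal sketch of the evaluation behind Table 3. `train_set`, `val_set`,
# and `test_set` are assumed (images, one-hot labels) tuples.
for name, (x, y) in [("training", train_set),
                     ("validation", val_set),
                     ("test", test_set)]:
    loss, acc = model.evaluate(x, y, verbose=0)  # returns [loss, accuracy]
    print(f"{name} accuracy: {acc:.1%}")
```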
DOI: https://doi.org/10.5334/jcaa.75 | Journal eISSN: 2514-8362
Language: English
Submitted on: Apr 3, 2021 | Accepted on: Jul 25, 2021 | Published on: Sep 28, 2021
Published by: Ubiquity Press
In partnership with: Paradigm Publishing Services
Publication frequency: 1 issue per year

© 2021 Mike Lyons, published by Ubiquity Press
This work is licensed under the Creative Commons Attribution 4.0 License.