Modelling cross-sectional tabular data using convolutional neural networks: Prediction of corporate bankruptcy in Poland

Open Access | Nov 2021

Figures & Tables

Figure 1

Naive Inception. Source: Own elaboration.
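Figure 1 depicts the naive variant of the Inception module. As a point of reference, the sketch below assembles such a block in Keras: parallel 1 × 1, 3 × 3 and 5 × 5 convolutions plus 3 × 3 max pooling, concatenated along the channel axis. The filter counts and the 8 × 8 single-channel input are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of a naive Inception block (Figure 1), assuming a Keras/TensorFlow
# implementation and illustrative filter counts; not the authors' exact configuration.
import tensorflow as tf
from tensorflow.keras import layers

def naive_inception_block(x, f1=4, f3=4, f5=4):
    """Parallel 1x1, 3x3 and 5x5 convolutions plus 3x3 max pooling, concatenated."""
    b1 = layers.Conv2D(f1, (1, 1), padding="same", activation="relu")(x)
    b2 = layers.Conv2D(f3, (3, 3), padding="same", activation="relu")(x)
    b3 = layers.Conv2D(f5, (5, 5), padding="same", activation="relu")(x)
    b4 = layers.MaxPooling2D((3, 3), strides=(1, 1), padding="same")(x)
    return layers.concatenate([b1, b2, b3, b4])   # stack branch outputs channel-wise

inputs = tf.keras.Input(shape=(8, 8, 1))          # hypothetical 8 x 8 single-channel "image"
outputs = naive_inception_block(inputs)
tf.keras.Model(inputs, outputs).summary()
```

Because every branch runs on the full input, the channel count (and with it the computational cost) grows quickly as modules are stacked, which motivates the dimension-reduced variant in Figure 2.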

Figure 2

Optimum Inception. Source: Own elaboration.
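Figure 2 shows the Inception module with 1 × 1 dimension reductions used as the building block of the convolutional networks reported below. The sketch parameterises the block with the filter counts labelled #F1, #F2in/#F2out, #F3in/#F3out and #F4out in the hyperparameter table; this mapping is our reading of the table, not the authors' published code.

```python
# Minimal sketch of an Inception block with 1x1 dimension reductions (Figure 2),
# parameterised to match our reading of the hyperparameter table below
# (#F1, #F2in/#F2out, #F3in/#F3out, #F4out); an assumption, not the authors' code.
import tensorflow as tf
from tensorflow.keras import layers

def inception_block(x, f1=4, f2_in=1, f2_out=4, f3_in=1, f3_out=4, f4_out=4):
    b1 = layers.Conv2D(f1, (1, 1), padding="same", activation="relu")(x)        # 1x1 branch
    b2 = layers.Conv2D(f2_in, (1, 1), padding="same", activation="relu")(x)     # reduce ...
    b2 = layers.Conv2D(f2_out, (3, 3), padding="same", activation="relu")(b2)   # ... then 3x3
    b3 = layers.Conv2D(f3_in, (1, 1), padding="same", activation="relu")(x)     # reduce ...
    b3 = layers.Conv2D(f3_out, (5, 5), padding="same", activation="relu")(b3)   # ... then 5x5
    b4 = layers.MaxPooling2D((3, 3), strides=(1, 1), padding="same")(x)         # pool ...
    b4 = layers.Conv2D(f4_out, (1, 1), padding="same", activation="relu")(b4)   # ... then project
    return layers.concatenate([b1, b2, b3, b4])

inputs = tf.keras.Input(shape=(8, 8, 1))
# Stacking two modules with the first- and second-module filter counts from the table.
outputs = inception_block(inception_block(inputs),
                          f1=8, f2_in=4, f2_out=8, f3_in=4, f3_out=8, f4_out=8)
tf.keras.Model(inputs, outputs).summary()
```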

Figure 3

PR and ROC curves for all prediction horizons, test data. Source: Authors’ calculations.
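Figure 3 reports precision-recall and ROC curves on the test set for each horizon. The sketch below shows how such curves and their areas (the AUC-PR and AUC-ROC reported in the tables further down) can be computed from predicted bankruptcy probabilities, assuming scikit-learn; `y_test` and `p_test` are placeholder arrays, not the authors' data.

```python
# Sketch of computing the PR and ROC curves (Figure 3) from out-of-sample predicted
# probabilities, assuming scikit-learn; `y_test` and `p_test` are hypothetical arrays.
import numpy as np
from sklearn.metrics import precision_recall_curve, roc_curve, auc

rng = np.random.default_rng(0)
y_test = rng.integers(0, 2, size=1000)                            # placeholder labels
p_test = np.clip(y_test * 0.3 + rng.random(1000) * 0.7, 0, 1)     # placeholder scores

precision, recall, _ = precision_recall_curve(y_test, p_test)
fpr, tpr, _ = roc_curve(y_test, p_test)
print(f"AUC-PR  = {auc(recall, precision):.3f}")   # area under the precision-recall curve
print(f"AUC-ROC = {auc(fpr, tpr):.3f}")            # area under the ROC curve
```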

Figure A1

8 × 8 images, raw data, normalised, validation, one-year horizon. Source: Authors’ calculations.

Figure A2

8 × 8 images, outlier-free, normalised, validation, one-year horizon. Source: Authors’ calculations.

Figure A3

8 × 8 images, outlier-free, power transformation, normalised, validation, one-year horizon. Source: Authors’ calculations.

Figure A4

20 × 20 images, raw data, normalised, validation, one-year horizon. Source: Authors’ calculations.

Figure A5

20 × 20 images, outlier-free, normalised, validation, one-year horizon. Source: Authors’ calculations.

Figure A6

20 × 20 images, outlier-free, power transformation, normalised, validation, one-year horizon. Source: Authors’ calculations.
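Figures A1–A6 compare three preprocessing variants on the validation set: raw data (normalised only), outlier-free data, and outlier-free data after a power transformation, each arranged as 8 × 8 or 20 × 20 images. The sketch below illustrates one plausible pipeline under stated assumptions (winsorisation at the 1st/99th percentiles for outlier removal, a Yeo-Johnson power transform, min-max normalisation, zero-padding to fill the image); these concrete choices are ours, not necessarily the authors'.

```python
# Sketch of the three preprocessing variants compared in Figures A1-A6 and of turning a
# firm's cross-sectional feature vector into an 8 x 8 single-channel "image".
# The concrete choices (1st/99th-percentile winsorisation, Yeo-Johnson transform,
# min-max scaling, zero padding) are illustrative assumptions, not the authors' setup.
import numpy as np
from sklearn.preprocessing import MinMaxScaler, PowerTransformer

def remove_outliers(X, lo=1, hi=99):
    """Clip each financial ratio to its lo-th/hi-th percentile (winsorisation)."""
    low, high = np.percentile(X, [lo, hi], axis=0)
    return np.clip(X, low, high)

def to_images(X, side=8):
    """Zero-pad each feature vector to side*side entries and reshape to (side, side, 1)."""
    pad = side * side - X.shape[1]
    X = np.hstack([X, np.zeros((X.shape[0], pad))])
    return X.reshape(-1, side, side, 1)

X = np.random.default_rng(0).normal(size=(500, 64))   # placeholder: 64 financial ratios

variants = {
    "raw, normalised": MinMaxScaler().fit_transform(X),
    "outlier-free, normalised": MinMaxScaler().fit_transform(remove_outliers(X)),
    "outlier-free, power transformation, normalised": MinMaxScaler().fit_transform(
        PowerTransformer(method="yeo-johnson").fit_transform(remove_outliers(X))),
}
images = {name: to_images(Xp, side=8) for name, Xp in variants.items()}
print({name: img.shape for name, img in images.items()})  # (500, 8, 8, 1) for each variant;
# a 20 x 20 layout would pad the same vectors to 400 entries (side=20).
```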

Figure A7

Learning curves depending on the preprocessing variant, with early stopping, for an example MLP model. Source: Authors’ calculations.

Figure A8

Learning curves depending on the preprocessing variant, with early stopping, for example CNN models. Source: Authors’ calculations.
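Figures A7 and A8 trace training and validation loss across epochs, with training halted by early stopping. A minimal sketch of the mechanism in Keras, with the monitored quantity, patience and toy model/data as assumptions rather than the authors' settings:

```python
# Sketch of early stopping and of extracting the learning curves shown in Figures A7-A8,
# assuming Keras; the monitored quantity, patience and the toy model/data are assumptions.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 60)).astype("float32")                 # placeholder ratios
y = (rng.random(500) < 0.07).astype("float32")                   # ~7% bankruptcies

model = tf.keras.Sequential([
    tf.keras.Input(shape=(60,)),
    tf.keras.layers.Dense(60, activation="tanh"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(curve="PR", name="auc_pr")])

stopper = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=50,
                                           restore_best_weights=True)
history = model.fit(X, y, validation_split=0.2, epochs=500, batch_size=394,
                    callbacks=[stopper], verbose=0)

# history.history holds the per-epoch training and validation curves plotted in A7-A8.
print(len(history.history["loss"]), "epochs before early stopping")
```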

MLP and convolutional neural networks: hyperparameters and results for best networks, all horizons

MLP neural networks

| Horizon (balance) | 1 year (6.94%) | 2 years (5.25%) | 3 years (4.71%) | 4 years (3.93%) | 5 years (3.86%) |
|---|---|---|---|---|---|
| Architecture | | | | | |
| dense layer | + | + | + | + | + |
| neurons | 60 | 60 | 60 | 60 | 60 |
| activation | tanh | tanh | tanh | tanh | tanh |
| dropout | 0.4 | 0.4 | | 0.4 | 0.4 |
| regularisation | L1 0.00005 | L1 0.00005 | | L2 0.00005 | L1 0.00005 |
| dense layer | + | + | + | + | + |
| neurons | 60 | 60 | 60 | 60 | 60 |
| activation | sigmoid | sigmoid | sigmoid | sigmoid | sigmoid |
| dropout | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 |
| regularisation | L1 0.00005 | L1 0.00005 | | L2 0.00005 | L1 0.00005 |
| dense layer | + | + | + | + | + |
| neurons | 1 | 1 | 1 | 1 | 1 |
| activation | sigmoid | sigmoid | sigmoid | sigmoid | sigmoid |
| early stopping | 4,873 | 3,972 | 4,892 | 6,843 | 7,243 |
| batch size | 394 | 394 | 394 | 394 | 394 |
| optimiser | Adam | Adam | Adam | Adam | Adam |
| Results | | | | | |
| AUC-PR | 0.717 | 0.611 | 0.632 | 0.629 | 0.519 |
| overfitting PR | 0.238 | 0.214 | 0.35 | 0.282 | 0.34 |
| AUC-ROC | 0.929 | 0.895 | 0.9 | 0.909 | 0.883 |
| overfitting ROC | 0.065 | 0.079 | 0.098 | 0.085 | 0.106 |
8 × 8 convolutional neural networks

| Horizon (balance) | 1 year (6.94%) | 2 years (5.25%) | 3 years (4.71%) | 4 years (3.93%) | 5 years (3.86%) |
|---|---|---|---|---|---|
| Architecture | | | | | |
| inception module | + | + | | + | + |
| #F1 | 4 | 4 | | 4 | 4 |
| #F2in/#F2out | 1/4 | 1/4 | | 1/4 | 1/4 |
| #F3in/#F3out | 1/4 | 1/4 | | 1/4 | 1/4 |
| #F4out | 4 | 4 | | 4 | 4 |
| activation | ReLU | ReLU | | ReLU | ReLU |
| inception module | + | + | | + | + |
| #F1 | 8 | 8 | | 8 | 8 |
| #F2in/#F2out | 4/8 | 4/8 | | 4/8 | 4/8 |
| #F3in/#F3out | 4/8 | 4/8 | | 4/8 | 4/8 |
| #F4out | 8 | 8 | | 8 | 8 |
| activation | ReLU | ReLU | | ReLU | ReLU |
| convolution | | | + | | |
| size | | | 3 × 3 | | |
| #F | | | 32 | | |
| padding | | | same | | |
| stride | | | 1 × 1 | | |
| activation | | | ReLU | | |
| convolution | | | + | | |
| size | | | 3 × 3 | | |
| #F | | | 64 | | |
| padding | | | same | | |
| stride | | | 1 × 1 | | |
| activation | | | ReLU | | |
| max pooling | | | + | | |
| size | | | 2 × 2 | | |
| padding | | | valid | | |
| stride | | | 2 × 2 | | |
| convolution | | | + | | |
| size | | | 1 × 1 | | |
| #F | | | 32 | | |
| stride | | | 1 × 1 | | |
| activation | | | ReLU | | |
| average pooling | + | + | | + | + |
| size | 2 × 2 | 2 × 2 | 2 × 2 | 2 × 2 | 2 × 2 |
| padding | valid | valid | | valid | valid |
| stride | 2 × 2 | 2 × 2 | 2 × 2 | 2 × 2 | 2 × 2 |
| flatten layer | + | + | + | + | + |
| dense layer | + | + | + | + | + |
| neurons | 256 | 256 | 512 | 256 | 256 |
| activation | ReLU | ReLU | sigmoid | ReLU | ReLU |
| dropout | 0.4 | 0.4 | 0.4 | 0.4 | |
| regularisation | L1 0.000025 | L2 0.00005 | L2 0.00005 | L2 0.0001 | L2 0.00005 |
| dense layer | | | + | + | |
| neurons | | | 256 | 128 | |
| activation | | | sigmoid | ReLU | |
| dropout | | | | | |
| regularisation | | | L2 0.00005 | L2 0.00005 | |
| dense layer | + | + | + | + | + |
| neurons | 1 | 1 | 1 | 1 | 1 |
| activation | sigmoid | sigmoid | sigmoid | sigmoid | sigmoid |
| early stopping | 418 | 350 | 112 | 220 | 326 |
| batch size | 394 | 394 | 394 | 394 | 394 |
| optimiser | Adam | Adam | RMSprop | Adam | Adam |
| Results | | | | | |
| AUC-PR | 0.547 | 0.393 | 0.392 | 0.342 | 0.306 |
| overfitting PR | 0.219 | 0.404 | 0.43 | 0.414 | 0.392 |
| AUC-ROC | 0.885 | 0.828 | 0.84 | 0.814 | 0.793 |
| overfitting ROC | 0.07 | 0.147 | 0.136 | 0.156 | 0.169 |

20 × 20 convolutional neural networks

| Horizon (balance) | 1 year (6.94%) | 2 years (5.25%) | 3 years (4.71%) | 4 years (3.93%) | 5 years (3.86%) |
|---|---|---|---|---|---|
| early stopping | 189 | 157 | 137 | 155 | 239 |
| Results | | | | | |
| AUC-PR | 0.542 | 0.456 | 0.401 | 0.394 | 0.345 |
| overfitting PR | 0.233 | 0.447 | 0.489 | 0.576 | 0.553 |
| AUC-ROC | 0.881 | 0.846 | 0.837 | 0.847 | 0.811 |
| overfitting ROC | 0.074 | 0.143 | 0.152 | 0.151 | 0.173 |
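Reading the one-year column of the MLP block above, the architecture can be sketched in Keras as follows: two 60-neuron hidden layers (tanh, then sigmoid), each with 0.4 dropout and L1 regularisation of 0.00005, a single sigmoid output unit, and the Adam optimiser. The 60-dimensional input and the training loop are assumptions or taken from the table, not the authors' released code.

```python
# Sketch of the one-year MLP from the table above, assuming a Keras/TensorFlow
# implementation and a 60-dimensional input vector; not the authors' released code.
import tensorflow as tf
from tensorflow.keras import layers, regularizers

mlp = tf.keras.Sequential([
    tf.keras.Input(shape=(60,)),                                  # assumed number of ratios
    layers.Dense(60, activation="tanh",
                 kernel_regularizer=regularizers.l1(0.00005)),    # dense layer 1
    layers.Dropout(0.4),
    layers.Dense(60, activation="sigmoid",
                 kernel_regularizer=regularizers.l1(0.00005)),    # dense layer 2
    layers.Dropout(0.4),
    layers.Dense(1, activation="sigmoid"),                        # bankruptcy probability
])
mlp.compile(optimizer="adam", loss="binary_crossentropy",
            metrics=[tf.keras.metrics.AUC(curve="PR", name="auc_pr"),
                     tf.keras.metrics.AUC(curve="ROC", name="auc_roc")])
mlp.summary()
# Training (hypothetical X_train, y_train) would use batch_size=394; the table reports
# early stopping after 4,873 epochs for the one-year horizon.
```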

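Similarly, the one-year 8 × 8 convolutional network can be reconstructed from the table as two Inception modules with dimension reductions, 2 × 2 average pooling, a flatten layer, a 256-neuron ReLU dense layer with 0.4 dropout and L1 regularisation of 0.000025, and a sigmoid output. The assembly below is our reconstruction under that reading, not the authors' code.

```python
# Sketch of the one-year 8 x 8 CNN as we read the table above: two Inception modules
# with 1x1 dimension reductions, 2x2 average pooling, a 256-unit ReLU dense layer with
# dropout and L1 regularisation, and a sigmoid output unit. Our reconstruction only.
import tensorflow as tf
from tensorflow.keras import layers, regularizers

def inception_block(x, f1, f2_in, f2_out, f3_in, f3_out, f4_out):
    """Inception module with 1x1 reductions before the 3x3 and 5x5 branches."""
    b1 = layers.Conv2D(f1, (1, 1), padding="same", activation="relu")(x)
    b2 = layers.Conv2D(f2_in, (1, 1), padding="same", activation="relu")(x)
    b2 = layers.Conv2D(f2_out, (3, 3), padding="same", activation="relu")(b2)
    b3 = layers.Conv2D(f3_in, (1, 1), padding="same", activation="relu")(x)
    b3 = layers.Conv2D(f3_out, (5, 5), padding="same", activation="relu")(b3)
    b4 = layers.MaxPooling2D((3, 3), strides=(1, 1), padding="same")(x)
    b4 = layers.Conv2D(f4_out, (1, 1), padding="same", activation="relu")(b4)
    return layers.concatenate([b1, b2, b3, b4])

inputs = tf.keras.Input(shape=(8, 8, 1))                   # one 8 x 8 "image" per firm
x = inception_block(inputs, 4, 1, 4, 1, 4, 4)              # first module: #F1=4, 1/4, 1/4, 4
x = inception_block(x, 8, 4, 8, 4, 8, 8)                   # second module: #F1=8, 4/8, 4/8, 8
x = layers.AveragePooling2D((2, 2), strides=(2, 2))(x)     # valid padding by default
x = layers.Flatten()(x)
x = layers.Dense(256, activation="relu",
                 kernel_regularizer=regularizers.l1(0.000025))(x)
x = layers.Dropout(0.4)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)         # bankruptcy probability

cnn = tf.keras.Model(inputs, outputs)
cnn.compile(optimizer="adam", loss="binary_crossentropy",
            metrics=[tf.keras.metrics.AUC(curve="PR", name="auc_pr")])
cnn.summary()
```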
Hyperparameters and results for the best models, all horizons

Logit

| Horizon (balance) | 1 year (6.94%) | 2 years (5.25%) | 3 years (4.71%) | 4 years (3.93%) | 5 years (3.86%) |
|---|---|---|---|---|---|
| λ | 1 | 0.08 | 0 | 0.05 | 0.03 |
| Results | | | | | |
| AUC-PR | 0.497 | 0.28 | 0.251 | 0.221 | 0.27 |
| overfitting PR | 0.05 | 0.067 | 0.035 | 0.07 | 0.075 |
| AUC-ROC | 0.852 | 0.782 | 0.762 | 0.726 | 0.764 |
| overfitting ROC | 0.003 | 0.022 | 0.03 | 0.04 | 0.036 |

Random forests

| Horizon (balance) | 1 year (6.94%) | 2 years (5.25%) | 3 years (4.71%) | 4 years (3.93%) | 5 years (3.86%) |
|---|---|---|---|---|---|
| max depth | 12 | 12 | 13 | 12 | 8 |
| max number of characteristics | 52 | 58 | 54 | 55 | 33 |
| min observations to divide | 4 | 18 | 8 | 10 | 5 |
| min observations in the leaf | 4 | 17 | 6 | 9 | 4 |
| number of trees | 200 | 200 | 200 | 600 | 100 |
| preprocessing | q+n | p+n | - | q+p+n | p+n |
| Results | | | | | |
| AUC-PR | 0.535 | 0.337 | 0.284 | 0.292 | 0.264 |
| overfitting PR | 0.416 | 0.412 | 0.67 | 0.603 | 0.564 |
| AUC-ROC | 0.89 | 0.84 | 0.829 | 0.817 | 0.795 |
| overfitting ROC | 0.1 | 0.1367 | 0.168 | 0.175 | 0.192 |

XGBoost

| Horizon (balance) | 1 year (6.94%) | 2 years (5.25%) | 3 years (4.71%) | 4 years (3.93%) | 5 years (3.86%) |
|---|---|---|---|---|---|
| max depth | 6 | 7 | 5 | 5 | 4 |
| learning rate η | 0.08 | 0.03 | 0.07 | 0.1 | 0.07 |
| column fraction | 0.8 | 0.8 | 1 | 0.8 | 0.9 |
| row fraction | 0.9 | 0.9 | 0.9 | 0.9 | 0.8 |
| loss reduction γ | 3 | 5 | 0.08 | 4 | 2 |
| regularisation L1 | 0.4 | 1 | 1 | 0.4 | 0.1 |
| regularisation L2 | 0.4 | 0.2 | 0 | 1 | 0.3 |
| preprocessing | q+n | - | q+n | - | n |
| estimators median | 122 | 271 | 209 | 246 | 164 |
| Results | | | | | |
| AUC-PR | 0.596 | 0.411 | 0.39 | 0.392 | 0.355 |
| overfitting PR | 0.392 | 0.52 | 0.598 | 0.571 | 0.54 |
| AUC-ROC | 0.909 | 0.869 | 0.864 | 0.86 | 0.835 |
| overfitting ROC | 0.09 | 0.124 | 0.136 | 0.137 | 0.145 |
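For the tree-based benchmarks, the one-year XGBoost row above maps naturally onto the xgboost scikit-learn API. The mapping of the table's labels onto parameter names (column fraction to colsample_bytree, row fraction to subsample, loss reduction γ to gamma, estimators median to n_estimators) is our assumption, and the data below are placeholders.

```python
# Sketch mapping the one-year XGBoost row above onto the xgboost scikit-learn API.
# The parameter-name mapping and the placeholder data are assumptions, not the
# authors' exact setup.
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 64))                       # placeholder financial ratios
y = (rng.random(2000) < 0.0694).astype(int)           # ~6.94% bankruptcies (one-year balance)

model = XGBClassifier(
    max_depth=6,            # max depth
    learning_rate=0.08,     # learning rate eta
    colsample_bytree=0.8,   # column fraction
    subsample=0.9,          # row fraction
    gamma=3,                # minimum loss reduction gamma
    reg_alpha=0.4,          # L1 regularisation
    reg_lambda=0.4,         # L2 regularisation
    n_estimators=122,       # estimators median
    eval_metric="aucpr",
)
model.fit(X, y)
print(model.predict_proba(X[:5])[:, 1])               # predicted bankruptcy probabilities
```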
DOI: https://doi.org/10.2478/ceej-2021-0024 | Journal eISSN: 2543-6821 | Journal ISSN: 2544-9001
Language: English
Page range: 352 - 377
Published on: Nov 27, 2021
Published by: Faculty of Economic Sciences, University of Warsaw
In partnership with: Paradigm Publishing Services
Publication frequency: 1 issue per year

© 2021 Aneta Dzik-Walczak, Maciej Odziemczyk, published by Faculty of Economic Sciences, University of Warsaw
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.