
Moving Object Detection: A New Method Combining Background Subtraction, Fuzzy Entropy Thresholding and Differential Evolution Optimization

Open Access | Mar 2025

Figures & Tables

Fig. 1.

General flow chart of the proposed method

Fig. 2.

The first row shows the original images, the second row shows the generated difference images
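The difference images in Fig. 2 come from background subtraction. As a rough illustration (not the paper's exact background model), a per-pixel absolute difference can be sketched as follows, with nested lists standing in for grayscale images and all pixel values made up:

```python
# Per-pixel absolute difference between a frame and a background model,
# using nested lists as stand-ins for grayscale images (values are made up).

def difference_image(frame, background):
    return [[abs(f - b) for f, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

background = [[10, 10, 10],
              [10, 10, 10]]
frame      = [[10, 200, 12],
              [10, 180, 10]]

diff = difference_image(frame, background)
print(diff)  # large values mark candidate foreground (moving-object) pixels
```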

Fig. 3.

Change in intensity of the input image (x-axis) relative to the output image (y-axis), when (a) gamma is less than or equal to 1 and (b) gamma is greater than or equal to 1
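The intensity mapping of Fig. 3 is the standard power-law (gamma) transform, output = 255 · (input/255)^gamma. A minimal sketch, assuming 8-bit grayscale intensities:

```python
# Gamma correction: output = 255 * (input / 255) ** gamma.
# gamma < 1 lifts the curve above the diagonal (brightens mid-tones);
# gamma > 1 pushes it below the diagonal (darkens mid-tones).

def gamma_correct(intensity, gamma):
    return 255.0 * (intensity / 255.0) ** gamma

mid = 128
print(gamma_correct(mid, 0.5))  # > 128: brightened
print(gamma_correct(mid, 2.0))  # < 128: darkened
```

The endpoints 0 and 255 are fixed points of the transform for any gamma, which matches the two curve families shown in the figure.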

Fig. 4.

The first row shows the original images; the second row shows the difference images improved by the post-processing scheme

Fig. 5.

Flowchart of segmentation process

Fig. 6.

Fuzzy membership function for n-level segmentation
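The paper optimizes the parameters of n-level fuzzy membership functions by maximizing a fuzzy entropy criterion with differential evolution. The sketch below is a simplified two-class stand-in: a triangular membership, one common fuzzy-entropy formulation, and a plain random search in place of DE. The histogram, parameter ranges, and search budget are all illustrative, not the paper's.

```python
import math
import random

def membership_dark(g, a, c):
    """Triangular membership: degree to which gray level g is 'dark'."""
    if g <= a:
        return 1.0
    if g >= c:
        return 0.0
    return (c - g) / (c - a)

def fuzzy_entropy(hist, a, c):
    """Fuzzy entropy of a normalized histogram for membership parameters (a, c)."""
    h = 0.0
    for g, p in enumerate(hist):
        if p == 0:
            continue
        mu_dark = membership_dark(g, a, c)
        for mu in (mu_dark, 1.0 - mu_dark):  # 'dark' and 'bright' classes
            if mu > 0:
                h -= mu * p * math.log(mu * p)
    return h

# Toy bimodal histogram: two gray-level clusters around 20 and 200.
hist = [0.0] * 256
for g in (20, 21, 22, 200, 201):
    hist[g] = 0.2

# Random search over (a, c) as a stand-in for differential evolution.
random.seed(0)
best = max(
    ((random.randint(0, 120), random.randint(130, 255)) for _ in range(200)),
    key=lambda ac: fuzzy_entropy(hist, *ac),
)
print(best)  # membership parameters maximizing the fuzzy entropy
```

Differential evolution replaces the random search with mutation and crossover over a population of candidate parameter vectors, which scales better as the number of levels (and thus parameters) grows.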

Fig. 7.

Comparative analysis of our approach with state-of-the-art methods by exploiting specific videos such as “HumanBody1-HB” and “HallAndMonitor-HM” from the SBI2015 dataset. The left-to-right layout shows results for: original, ground truth, DeepBS [27], SC_SOBS [25], SuBSENSE [24], GMM_Zivk [26], as well as our method. The results for NThr=4 are displayed in this figure

Fig. 8.

Comparative analysis of our approach with state-of-the-art methods by exploiting specific videos such as “SnowFall-SF”, “BusStation-BS” and “Canoe-CE” from the CDnet 2014 dataset. The left-to-right layout shows results for: original, ground truth, DeepBS [27], SC_SOBS [25], SuBSENSE [24], GMM_Zivk [26], as well as our method. The results for NThr=4 are displayed in this figure

Fig. 9.

Comparative analysis of our approach with state-of-the-art methods by exploiting specific videos such as “Highway-HG” and “Pedestrians-PD” from the CDnet 2014 dataset. The left-to-right layout shows results for original, ground truth, DeepBS [27], SC_SOBS [25], SuBSENSE [24], GMM_Zivk [26], as well as our method. The results for NThr=4 are displayed in this figure

Fig. 10.

Qualitative Performance of the MOD-BFDO Approach on I_BS_01 (Bootstrap, Moderate Shadows): (a) Original Image, (b) Grayscale Image, (c) Ground Truth, (d) Proposed Approach. The results for NThr=4 are displayed in this figure

Fig. 11.

Qualitative Performance of the MOD-BFDO Approach on O_SU_01 (dynamic background, camouflage, hard shadows): (a) Original Image, (b) Background Model, (c) Ground Truth, (d) Proposed Approach. The results for NThr=4 are displayed in this figure

Fig. 12.

Qualitative performance of the MOD-BFDO approach on the “111” synthetic video from the BMC2012 dataset. This figure shows: (a) the original image, (b) the background model, and (c) the results obtained with the proposed approach. The results displayed correspond to NThr=4

Comparative assessment of F-measure across four categories using four methods on the LASIESTA dataset. Each row presents the results for a specific method, while each column displays the average scores for each category

Methods        | I_SI   | I_CA   | I_BS   | O_SU   | Overall
GMM [31]       | 0.8328 | 0.8272 | 0.3694 | 0.7240 | 0.6880
GMM_Zivk [26]  | 0.9054 | 0.8320 | 0.5330 | 0.7100 | 0.7450
Cuevas [32]    | 0.8805 | 0.8440 | 0.6809 | 0.8568 | 0.8155
Our approach   | 0.9089 | 0.8415 | 0.7021 | 0.8938 | 0.8390

Mean F-measure and standard deviations for different methods

Methods        | Mean F-M (µ) | Standard Deviation (σ)
MOD-BFDO       | 0.8535       | 0.0920
SuBSENSE [24]  | 0.8257       | 0.1013
DeepBS [27]    | 0.8490       | 0.1296
SC_SOBS [25]   | 0.7158       | 0.1306
GMM_Zivk [26]  | 0.6696       | 0.1232

Results obtained by the proposed algorithm on the LASIESTA dataset

Category | RE     | PWC    | F-M    | PR
I_SI     | 0.8969 | 0.5501 | 0.9089 | 0.9219
I_CA     | 0.7930 | 1.2835 | 0.8415 | 0.9250
I_BS     | 0.7015 | 0.4164 | 0.7120 | 0.7457
O_SU     | 0.8868 | 0.1917 | 0.8938 | 0.9038
Average  | 0.8195 | 0.6104 | 0.8390 | 0.8741
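The RE / PWC / F-M / PR columns are the standard change-detection metrics computed from per-pixel confusion counts (TP, FP, FN, TN). The counts below are made up for illustration; the tables report per-category averages, so the exact published values are not reproduced here:

```python
# Standard change-detection metrics from per-pixel confusion counts.

def metrics(tp, fp, fn, tn):
    re = tp / (tp + fn)                            # recall
    pr = tp / (tp + fp)                            # precision
    fm = 2 * pr * re / (pr + re)                   # F-measure
    pwc = 100.0 * (fn + fp) / (tp + fp + fn + tn)  # % of wrong classifications
    return re, pr, fm, pwc

re, pr, fm, pwc = metrics(tp=900, fp=50, fn=100, tn=8950)
print(round(re, 3), round(pr, 3), round(fm, 3), round(pwc, 3))
# → 0.9 0.947 0.923 1.5
```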

Z-scores for MOD-BFDO vs other methods

Comparison              | z-Score
MOD-BFDO vs SuBSENSE    | 0.498
MOD-BFDO vs DeepBS      | 0.069
MOD-BFDO vs SC_SOBS     | 2.11
MOD-BFDO vs GMM_Zivk    | 2.93
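One common two-sample form of the z-score is z = (µ₁ − µ₂) / √(σ₁² + σ₂²), applied to the mean F-measures and standard deviations reported above. The paper's exact computation (e.g., sample sizes per category) is not specified here, so this sketch reproduces the direction of each comparison rather than the exact tabulated values:

```python
import math

# z = (mu1 - mu2) / sqrt(s1**2 + s2**2): positive when method 1's mean
# F-measure exceeds method 2's, scaled by the combined spread.

def z_score(mu1, s1, mu2, s2):
    return (mu1 - mu2) / math.sqrt(s1 ** 2 + s2 ** 2)

# Means/stds from the table above; MOD-BFDO vs GMM_Zivk should be positive.
print(z_score(0.8535, 0.0920, 0.6696, 0.1232))
```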

Evaluation of our method on the CDnet 2014 dataset

Category      | RE     | SP     | FPR    | FNR    | PWC    | F-M    | PR
Baseline      | 0.9577 | 0.9911 | 0.0021 | 0.0423 | 0.3634 | 0.9409 | 0.9432
Bad weather   | 0.8950 | 0.9970 | 0.0004 | 0.1053 | 0.5212 | 0.8834 | 0.8723
Dy. Backg     | 0.8839 | 0.9989 | 0.0013 | 0.2332 | 0.6121 | 0.9051 | 0.9272
Shadow        | 0.8704 | 0.9917 | 0.0082 | 0.1295 | 1.6663 | 0.8785 | 0.8869
Cam. Jitter   | 0.8154 | 0.9945 | 0.0057 | 0.1864 | 1.2627 | 0.8332 | 0.8515
Low Framerate | 0.7610 | 0.9934 | 0.0061 | 0.2492 | 0.9064 | 0.6800 | 0.6146
Average       | 0.8639 | 0.9944 | 0.0039 | 0.1576 | 0.7220 | 0.8535 | 0.8492

Comparison of Average Frames Per Second (FPS) Across Three Source Video Sequences

Methods        | 320×240 | 352×288 | 720×480
SC_SOBS [25]   | 9.8     | 8.7     | 3.4
SuBSENSE [24]  | 3.3     | 2.8     | 1.6
GMM_Zivk [26]  | 21.6    | 18.1    | 13.8
MOD-BFDO       | 5.5     | 4.7     | 3.2

Comparative assessment of F-measure in six categories using four methods. Each row presents results specific to each method; each column displays the average scores in each category

Methods        | Baseline | Bad weather | Dy. Backg | Shadow | Cam. Jitter | Low Framerate | Overall
DeepBS [27]    | 0.9580   | 0.8301      | 0.8761    | 0.9304 | 0.8990      | 0.6002        | 0.8490
SC_SOBS [25]   | 0.9333   | 0.6620      | 0.6686    | 0.7786 | 0.7051      | 0.5463        | 0.7158
SuBSENSE [24]  | 0.9503   | 0.8619      | 0.8177    | 0.8646 | 0.8152      | 0.6445        | 0.8257
GMM_Zivk [26]  | 0.8382   | 0.7406      | 0.6328    | 0.7322 | 0.5670      | 0.5065        | 0.6696
MOD-BFDO       | 0.9409   | 0.8834      | 0.9051    | 0.8785 | 0.8332      | 0.6800        | 0.8535

A comparison between our method and some of the most important existing methods on the CDnet 2014 dataset

Methods        | Avg. RE | Avg. PR | Avg. PWC | Avg. F-M
DeepBS [27]    | 0.8312  | 0.8712  | 0.6373   | 0.8490
SC_SOBS [25]   | 0.8068  | 0.7141  | 2.1462   | 0.7158
SuBSENSE [24]  | 0.8615  | 0.8606  | 0.8116   | 0.8257
GMM_Zivk [26]  | 0.7155  | 0.6722  | 1.7052   | 0.6696
MOD-BFDO       | 0.8639  | 0.8492  | 0.7220   | 0.8535
DOI: https://doi.org/10.2478/ama-2025-0013 | Journal eISSN: 2300-5319 | Journal ISSN: 1898-4088
Language: English
Page range: 106 - 116
Submitted on: Mar 28, 2024
Accepted on: Sep 25, 2024
Published on: Mar 31, 2025
Published by: Bialystok University of Technology
In partnership with: Paradigm Publishing Services
Publication frequency: 4 issues per year

© 2025 Oussama Boufares, Mohamed Boussif, Wajdi Saadaoui, Imed Miraoui, published by Bialystok University of Technology
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.