
TRI-BCC: Tri-Level Breast Cancer Classification via Transfer Learning Networks with Histopathological Images


1. Introduction

Breast cancer (BC) is an abnormal tumor that originates in the cells of breast tissue. It occurs when these cells begin to multiply uncontrollably, forming a tumor that can invade surrounding tissues or spread to other parts of the body [1], [2]. Although the exact causes of BC are not fully understood, numerous risk factors have been identified, including exposure to high levels of radiation [3]. Common symptoms of BC include skin dimpling, nipple discharge, persistent breast pain, and the presence of a lump in the breast that may vary in size or shape. Each year, over 2.3 million new cases of BC are reported, making it the most common cancer among women globally [4], [5]. Early detection and diagnosis significantly improve survival rates [6].

Medical imaging technologies are essential tools for detecting and diagnosing BC [7]. Machine learning (ML) [8], deep learning (DL) [9], and transfer learning (TL) [10] techniques have transformed the analysis of these imaging modalities, enabling accurate identification of cancerous tissues and differentiation between benign and malignant lesions. Convolutional neural networks (CNN) [11], recurrent neural networks (RNN) [12], and other DL algorithms [13] can automatically detect BC and improve diagnostic accuracy. The integration of DL methods in BC categorization improves diagnostic accuracy, reduces false positives, and supports personalized treatment strategies, ultimately advancing healthcare [7], [14]. The main contributions of this research are summarized as follows:

  • Introduces a novel TL-based tri-level classification network for BC stage classification, effectively handling both benign and malignant classes.

  • Utilizes ABCDE filtering to remove noisy artifacts and enhance the image quality for improved classification accuracy.

  • Incorporates an image augmentation phase to generate additional training data, improving the generalization of the network.

  • Employs the Golden Whale Optimization (GWO) algorithm, a hybrid of Golden-Eagle Optimization (GEO) and Whale Optimization (WHO) algorithms, to achieve precise lesion segmentation and improve detection performance.

The rest of this research is organized as follows. Section 2 reviews the existing works on breast cancer stage categorization. Section 3 presents the proposed model using histopathological images. Section 4 discusses the experimental results and appropriate interpretation of the findings. The conclusion of the proposed model is presented in Section 5.

2. Literature survey

Advancements in DL and ML have significantly improved BC detection and classification. This literature review highlights various methodologies, their strengths, and limitations in achieving precise and early BC diagnosis.

In 2024, Rahman et al. [15] developed a deep CNN pipeline that combines U-Net and YOLO for automatic recognition and localization of tumors in mammography images, achieving an accuracy of 93.0 % on the publicly available MIAS dataset. In 2023, Abunasser et al. [16] fine-tuned five ImageNet-pretrained networks: InceptionV3, Xception, MobileNet, VGG16, and ResNet50. Each dataset was evaluated using these five pre-trained models alongside their proposed BCCNN model, which achieved a classification accuracy of 98.28 %.

In 2023, Raza et al. [17] developed DeepBreastCancerNet to distinguish BC. This system consists of 24 layers, including 6 convolutional layers, 9 inception modules, a fully connected layer, and various activation functions, and achieved a maximum accuracy of 99.35 %. In 2023, Sharmin et al. [6] introduced a hybrid BC detection method that leverages a pretrained ResNet50V2 model along with ensemble-based ML techniques, combining the capabilities of DL and ML to identify hidden patterns in complex BC images. The hybrid model achieved an accuracy exceeding 95 %.

In 2022, Reshma et al. [18] developed an autonomous segmentation technique for Fourier Transform-based separation in CAD systems and automatic morphological operation inputs. This approach improves speed and clarity for pathologists analyzing segmented images. In 2022, Singh et al. [19] designed a hybrid DNN that includes inception and residual blocks by combining multi-level feature maps. Image classification was conducted at multiple magnification levels. The proposed method achieved an accuracy of 96.42 % on the BreakHis dataset and 80.17 % on the BHI dataset.

In 2022, Liu et al. [20] introduced AlexNet-BC for classifying BC pathologies. AlexNet-BC was pre-trained on the ImageNet dataset and further trained with an enhanced dataset. Experiments on the IDC and UCSB datasets further demonstrated its generalizability, with accuracy rates of 86.31 % and 96.10 %, respectively. In 2022, Mohanakurup et al. [21] designed a composite dilated backbone network (CDBN) for BC detection on histopathological images. The lead backbone's feature maps serve as the foundation for object identification in the CDBN, progressively supplying each subsequent backbone with high-level output features from the previous ones. The CDBN yielded mAP increases ranging from 1.5 % to 3.0 %.

In 2021, Hirra et al. [22] introduced Pa-DBN-BC to classify BC in histopathology images using a deep belief network (DBN). Features were extracted from histopathology image patches through supervised fine-tuning and unsupervised pre-training phases, with classification performed using logistic regression. This approach was tested on a histopathology image dataset and achieved an accuracy of 86 %. In 2020, Hameed et al. [23] developed an ensemble DL method for accurately classifying BC into non-tumorous and tumorous categories. Five-fold cross-validation was conducted for each model: fully-trained VGG-16, fine-tuned VGG-16, fully-trained VGG-19, and fine-tuned VGG-19. The VGG ensemble achieved an overall accuracy of 95.29 % for the carcinoma class.

Despite their high accuracy, existing BC detection models have several limitations. Many rely heavily on pre-trained networks such as VGG16, ResNet, or Inception, which are not fully optimized for breast histopathology images and often require extensive fine-tuning. Additionally, most approaches focus on classification without addressing the need for early-stage detection or multi-stage cancer progression. Segmentation is often performed using traditional or semi-advanced methods, limiting precision in identifying lesions. Furthermore, some methods require complex architectural modifications or ensemble approaches, increasing computational cost and hindering real-time clinical applicability.

3. Proposed methodology

In this section, a novel TRI-BCC model is introduced, with a TL-based tri-level classification network for BC detection that effectively handles both benign and malignant cases. The overall schematic workflow of the proposed method is shown in Fig. 1.

Fig. 1. Schematic illustration of the proposed TRI-BCC model.

A. Image acquisition

The BreakHis dataset is a benchmark dataset for BC recognition using histopathological images. It comprises 7909 images of breast tissue samples, divided into benign (2480) and malignant (5429) cases. The benign category includes subtypes such as Adenosis (AS), Fibroadenoma (FA), Phyllodes tumor (PT), and Tubular adenoma (TA). The malignant category includes subtypes such as Ductal carcinoma (DC), Lobular carcinoma (LC), Mucinous carcinoma (MC), and Papillary carcinoma (PC). These images are captured at four magnification levels to simulate real-world variability. The dataset supports both binary classification (benign vs. malignant) and multi-class classification for subtype identification. It presents challenges such as class imbalance and magnification variability, which can affect model performance. Table 1 provides a description of the BreakHis dataset with its different classes.

Table 1. Dataset description of BreakHis with image count.

Class type   Subtype   Image count
Benign       AS        444
             FA        1442
             PT        209
             TA        385
Malignant    DC        3450
             LC        626
             MC        792
             PC        561
Total                  7909

Moreover, the 5429 malignant samples are manually annotated into five stages based on visual patterns observed in histopathological features. The stage-wise distribution was generated dynamically during the classification process rather than being pre-defined in the original dataset. These stage-wise labels are used internally to train the Randomized Decision Tree (RDT) for the final classification process. The stage-wise distribution of malignant subtypes is shown in Table 2.

Table 2. Stage-wise distribution of malignant subtypes.

Malignant subtype   Stage 1   Stage 2   Stage 3   Stage 4   Stage 5   Total
DC                  460       790       970       670       560       3450
LC                  110       160       140       110       106       626
MC                  160       205       190       125       112       792
PC                  105       130       145       90        91        561
Total               835       1285      1445      995       869       5429
B. Image denoising

Adaptive Brightness Contrast Dynamic Histogram Equalization (ABCDE) filtering is an image denoising and enhancement method that improves image contrast while minimizing noise. Histogram equalization improves contrast by reallocating pixel intensities to utilize the full range of the image. The transformation function T(r) is defined in terms of the cumulative distribution function (CDF) of the image's intensity levels,

(1) T(r) = \frac{(L - 1)}{m \times n} \sum_{i=0}^{r} h(i)

where L is the number of possible intensity levels, m×n is the image size, h(i) is the histogram count at intensity level i, and r is the input intensity. The adjustment factor α is applied to control the stretching of intensity levels,

(2) I' = \alpha (I - \mu) + \mu

where I′ is the new intensity value, I is the original intensity, μ is the local mean brightness, and α is the contrast adjustment factor, typically selected based on the brightness of the region. ABCDE filtering suppresses noise, especially in darker areas, by applying adaptive contrast adjustment and dynamically equalizing histograms of breast histopathological images.
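As an illustration, the two steps in (1) and (2) can be sketched in a few lines of NumPy. This is a minimal sketch, not the paper's implementation: the sliding-window size and the brightness-dependent choice of α are assumptions, since the text does not fix them, and `uniform_filter` stands in for the local-mean computation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def histogram_equalize(img, levels=256):
    """Eq. (1): map each intensity r through the CDF-based transform T(r).
    img is an 8-bit grayscale array."""
    m, n = img.shape
    hist = np.bincount(img.ravel(), minlength=levels)    # h(i)
    cdf = np.cumsum(hist)                                # sum of h(i) up to r
    T = np.round((levels - 1) * cdf / (m * n)).astype(np.uint8)
    return T[img]                                        # lookup: r -> T(r)

def adaptive_contrast(img, alpha_dark=1.5, alpha_bright=1.1, window=15):
    """Eq. (2): I' = alpha * (I - mu) + mu with a locally estimated mean mu.
    The window size and the two alpha values are illustrative assumptions;
    a stronger stretch is applied where local brightness is low."""
    img = img.astype(np.float64)
    mu = uniform_filter(img, size=window)                # local mean brightness
    alpha = np.where(mu < 128, alpha_dark, alpha_bright)
    return np.clip(alpha * (img - mu) + mu, 0, 255).astype(np.uint8)

# enhanced = adaptive_contrast(histogram_equalize(gray_patch))
```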

C. Image augmentation

The sample size was increased using targeted augmentation techniques such as rotation, scaling, flipping, and zooming, which were selectively applied to underrepresented classes (benign subtypes) to address class imbalance and enhance dataset diversity. Rotation changes the image orientation, while flipping mirrors the images horizontally or vertically. Zooming involves enlarging or shrinking portions of an image, and scaling adjusts the image size. This targeted augmentation technique allows the model to train on a wider range of input images, reducing overfitting and improving sensitivity and predictive efficiency. For the experiment, all images were resized to a fixed 64×64 pixel resolution to extract RGB values as features. The dataset was then split into training (75 %) and testing (25 %) sets for further analysis of the proposed model.
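A minimal sketch of one augmentation pass is shown below. The rotation range and zoom factors are illustrative assumptions, as the text names the transforms but not their parameters.

```python
import numpy as np
from scipy.ndimage import rotate, zoom

def augment(img, rng):
    """One random combination of flipping, rotation, and zooming.
    img is an H x W x 3 RGB uint8 array; the angle and zoom ranges
    are assumptions, since the paper does not report them."""
    if rng.random() < 0.5:
        img = img[:, ::-1]                                  # horizontal flip
    if rng.random() < 0.5:
        img = img[::-1, :]                                  # vertical flip
    img = rotate(img, rng.uniform(-30, 30), axes=(0, 1),
                 reshape=False, mode='reflect')             # rotation
    z = rng.uniform(0.9, 1.1)                               # zoom in/out
    img = zoom(img, (z, z, 1), order=1)                     # scaling
    # Resize to the fixed 64 x 64 input resolution used for feature extraction.
    h, w, _ = img.shape
    img = zoom(img, (64 / h, 64 / w, 1), order=1)
    return np.clip(img, 0, 255).astype(np.uint8)

rng = np.random.default_rng(42)
# Applied only to underrepresented benign subtypes, e.g. 400 extra AS,
# 500 PT, and 300 TA images (the per-class counts from Table 3).
```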

Table 3 shows the class-wise image distribution in the dataset before and after targeted augmentation. The benign count increased from 2480 to 3680 when 1200 augmented images were added specifically to underrepresented benign classes. The final dataset comprises 9109 images, with 5429 images in the malignant class remaining unchanged to balance the dataset.

Table 3. Augmentation count after targeted augmentation techniques.

Class type   Subtype   Original count   Augmented count   Total count
Benign       AS        444              400               844
             FA        1442             0                 1442
             PT        209              500               709
             TA        385              300               685
Subtotal               2480             1200              3680
Malignant    DC        3450             0                 3450
             LC        626              0                 626
             MC        792              0                 792
             PC        561              0                 561
Subtotal               5429             0                 5429
Total                  7909             1200              9109
D. Image segmentation

The GWO algorithm is a hybrid of the GEO and WHO algorithms designed for efficient segmentation of histopathological images. It combines the global exploration strength of GEO with the local exploitation efficiency of WHO, resulting in precise and accurate segmentation boundaries for histopathological images.

Fig. 2 illustrates the workflow of the proposed GWO algorithm for image segmentation. Initially, GEO performs a comprehensive global search, exploring diverse regions of the image to identify potential segmentation boundaries. Subsequently, WHO fine-tunes the identified regions using its spiral and encircling strategies, optimizing the segmentation boundaries with high precision.

Fig. 2. Flowchart of the proposed GWO algorithm.

(a) Initialization:

Randomly initialize the positions of candidate solutions (boundaries) for histopathological image segmentation,

(3) X_i(0) = \mathrm{Random}(I_{sb}), \quad i = 1, 2, 3, \ldots, N

where X_i(0) is the initial position of the i-th candidate solution, I_sb denotes the image space boundaries, and N is the total number of candidate solutions. The search begins with GEO in the initial phase to ensure wide exploration.

(b) Global exploration:

The GEO algorithm is inspired by the hunting behavior of golden eagles, which alternate between searching for prey from a distance (exploration) and swooping in on their target (exploitation). In GEO, the global search process mimics the broad area scanning of golden eagles looking for prey,

(4) X(t + 1) = X(t) + R (X_{best} - X(t)) + W (X_{target} - X(t))

where X(t) is the current position of the eagle at iteration t, X_best is the best-known position found, X_target is a target prey (i.e., a solution candidate), W is the weight factor controlling the strength of the pull toward the target, and R is a random number between 0 and 1.

(c) Adaptive transition phase:

As the algorithm progresses, an adaptive switching mechanism gradually increases the influence of WHO while reducing the dominance of GEO. This transition ensures a smooth shift from broad exploration to focused exploitation. The adaptive weight is derived as,

(5) w_g(t) = 1 - \frac{t}{T}

where t is the current iteration and T is the total number of iterations. The switching mechanism weights GEO's influence as w_g(t) and WHO's influence as 1 − w_g(t), so the GEO algorithm dominates the early iterations (more exploration), while the WHO algorithm dominates the later iterations (more exploitation).

(d) Local exploitation:

In the final stages, the WHO algorithm fine-tunes the identified segmentation boundaries by exploiting the best-known solutions. The WHO algorithm incorporates mechanisms for both global search (exploration) and local search (exploitation) through two main strategies: encircling prey and spiral updating. The whales encircle their prey by updating their position relative to the best-known solution X_best,

(6) X_i(t + 1) = X_{best} - A \cdot |C \cdot X_{best} - X_i(t)|

where A = 2a·Rand − a (with a decreasing linearly over time) and C = 2·Rand. This helps the candidates converge towards the best-known solution by refining the segmentation boundaries. The WHO algorithm also uses a spiral update to diversify the search by generating a logarithmic spiral around the best solution,

(7) X(t + 1) = D' \cdot e^{gl} \cdot \cos(2\pi l) + X_{best}

where D' = |X_best − X(t)| is the distance between the whale and the optimal solution, g controls the shape of the spiral, and l is a random number in [−1, 1]. The spiral movement provides an additional exploratory mechanism to avoid premature convergence, ensuring that the process can escape local optima.

(e) Objective function:

The objective function for image segmentation is defined using metrics such as intra-class variance to minimize differences within segmented regions, boundary sharpness to precisely define cancerous and non-cancerous areas, and texture preservation to maintain important histological details in the images. The objective function is calculated as,

(8) \min f(X) = \omega_1 V_{intra} + \omega_2 B_{sharp} - \omega_3 T_{preserve}

where V_intra is the intra-class variance, B_sharp is the boundary sharpness, T_preserve is a measure of texture preservation, and ω1, ω2, ω3 are weight factors that balance the metrics based on the segmentation. This algorithm ensures robust and accurate segmentation by combining the global exploration of GEO with the local exploitation of WHO to achieve precise boundary detection for identifying BC through histopathological image analysis.
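Putting (3) through (8) together, a compact sketch of the hybrid loop might look as follows. This is only a sketch under stated assumptions: the population size, iteration budget, fixed pull weight W, the blending of the GEO and WHO moves via w_g(t), and the spiral constant g = 1 are all illustrative choices, and the callable f stands in for the objective of Eq. (8).

```python
import numpy as np

def gwo_segment(f, dim, n=30, T=200, lb=0.0, ub=1.0, seed=0):
    """Hybrid GEO/WHO loop over Eqs. (3)-(8); f is the objective to minimize."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n, dim))            # Eq. (3): random boundary candidates
    best = min(X, key=f).copy()
    for t in range(T):
        wg = 1.0 - t / T                         # Eq. (5): GEO weight decays over time
        a = 2.0 * (1.0 - t / T)                  # WHO coefficient "a", linear decrease
        for i in range(n):
            # GEO global exploration, Eq. (4)
            target = X[rng.integers(n)]          # a random prey candidate
            R, W = rng.random(), 0.5
            geo = X[i] + R * (best - X[i]) + W * (target - X[i])
            # WHO local exploitation, Eqs. (6)-(7)
            if rng.random() < 0.5:
                A = 2 * a * rng.random() - a
                C = 2 * rng.random()
                who = best - A * np.abs(C * best - X[i])            # encircling prey
            else:
                l = rng.uniform(-1, 1)
                D = np.abs(best - X[i])
                who = D * np.exp(l) * np.cos(2 * np.pi * l) + best  # spiral, g = 1
            # Adaptive transition: GEO dominates early, WHO dominates late
            X[i] = np.clip(wg * geo + (1 - wg) * who, lb, ub)
            if f(X[i]) < f(best):
                best = X[i].copy()
    return best
```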

E. Tri-level classification

In this section, the tri-level classification in the proposed model involves three hierarchical levels for breast cancer diagnosis. In Level-I, the TRI-BCC differentiates between benign and malignant types. Level-II further classifies the benign and malignant classes into specific subtypes. Level-III focuses on staging malignant tumors (Stages 1 to 5) using an RDT. This stepwise classification improves diagnostic accuracy by combining TL models with structured staging, as explained below.

(a) CapsuleNet:

CapsuleNet is designed to capture spatial hierarchies between features using capsules, where each group of neurons represents specific properties of the images. CapsuleNet replaces max-pooling with dynamic routing between capsules to preserve spatial relationships. The length of the capsule vector represents the probability of the class. Given inputs u_i to a capsule, the output capsule z_j is,

(9) z_j = \sum_i c_{ij} \hat{u}_{j|i}

where c_ij is the coupling coefficient determined by dynamic routing and \hat{u}_{j|i} = w_{ij} \cdot u_i, with w_ij as the weight matrix. CapsuleNet performs well for breast cancer detection by identifying small morphological changes in histopathological images that may indicate cancer.
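A minimal NumPy sketch of Eq. (9) is given below. The squash nonlinearity and the three routing iterations follow the standard capsule-network formulation, which is assumed here rather than specified in the text.

```python
import numpy as np

def squash(v, eps=1e-9):
    """Capsule nonlinearity: preserves direction, maps vector length into [0, 1)."""
    n2 = np.sum(v ** 2, axis=-1, keepdims=True)
    return (n2 / (1.0 + n2)) * v / np.sqrt(n2 + eps)

def dynamic_routing(u, W, iters=3):
    """Eq. (9): z_j = sum_i c_ij u_hat_{j|i}, with u_hat_{j|i} = w_ij . u_i.

    u : (num_in, in_dim) lower-level capsule outputs u_i
    W : (num_in, num_out, out_dim, in_dim) weight matrices w_ij
    """
    u_hat = np.einsum('ijkl,il->ijk', W, u)          # prediction vectors u_hat_{j|i}
    b = np.zeros(u_hat.shape[:2])                    # routing logits, one per (i, j)
    for _ in range(iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # coupling coeffs c_ij
        z = squash(np.einsum('ij,ijk->jk', c, u_hat))         # output capsules z_j
        b += np.einsum('ijk,jk->ij', u_hat, z)       # raise b where prediction agrees
    return z                                         # ||z_j|| = probability of class j
```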

(b) EfficientNet:

EfficientNet relies on the inverted bottleneck MBConv block. This structure uses depth-wise separable convolutions instead of conventional layers, reducing computation by nearly a factor of k², where k indicates the kernel size (the height and width of the convolutional filters). The activation function in EfficientNet is ReLU. In compound scaling, the compound coefficient μ is applied, and the following rules are derived:

(10) depth: \mathbb{D} = \alpha^{\mu}, \quad width: \mathbb{W} = \beta^{\mu}, \quad resolution: \mathbb{R} = \gamma^{\mu}

In (10), the constants α, β, γ ≥ 1 are determined for a given compound coefficient μ through grid search. The computational cost of a convolutional block scales with 𝔻, 𝕎², and ℝ², so as the network expands, the total workload grows approximately by (α · β² · γ²)^μ. Despite the higher computational burden, scaling with α, β, γ enables faster retrieval of neural features from larger models.
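The compound-scaling rules of Eq. (10) can be illustrated numerically. In this sketch, α, β, γ are the values reported by the original EfficientNet grid search, and the baseline depth, width, and resolution are assumptions for illustration only.

```python
# Eq. (10) in numbers: how depth, width, and resolution grow with mu.
alpha, beta, gamma = 1.2, 1.1, 1.15   # assumed from the EfficientNet-B0 grid search

def compound_scale(mu, base_depth=18, base_width=32, base_res=224):
    depth = round(base_depth * alpha ** mu)          # D = alpha^mu layers
    width = round(base_width * beta ** mu)           # W = beta^mu channels
    res = round(base_res * gamma ** mu)              # R = gamma^mu input resolution
    flops = (alpha * beta ** 2 * gamma ** 2) ** mu   # cost scales with D, W^2, R^2
    return depth, width, res, flops

for mu in range(4):
    d, w, r, fl = compound_scale(mu)
    print(f"mu={mu}: depth={d}, width={w}, resolution={r}, ~{fl:.1f}x FLOPs")
```

Since α · β² · γ² ≈ 1.92, each unit increase of μ roughly doubles the computational workload, which matches the scaling behaviour described above.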

(c) ShuffleNet:

ShuffleNet is a lightweight CNN designed for low-computation environments. It uses grouped convolutions and channel shuffling to minimize the number of parameters while maintaining efficacy in BC detection. The grouped convolution divides channels into small groups to reduce computation, and channel shuffling ensures information exchange between groups. The resulting feature map Y of a grouped convolutional layer is,

(11) Y = X \cdot w

where X is the input tensor and w is the weight tensor. After convolution, the channels are shuffled. ShuffleNet can quickly analyze smaller patches of histopathological images to detect cancerous regions with minimal resources.
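The channel-shuffle step can be written as a single reshape-transpose-reshape, sketched below; the (batch, channels, height, width) feature-map layout is an assumption.

```python
import numpy as np

def channel_shuffle(x, groups):
    """Interleave channels across groups after a grouped convolution (Eq. (11)).

    x : (batch, channels, H, W) feature map, channels divisible by groups.
    """
    b, c, h, w = x.shape
    x = x.reshape(b, groups, c // groups, h, w)      # split channels into groups
    x = x.transpose(0, 2, 1, 3, 4)                   # swap group / per-group axes
    return x.reshape(b, c, h, w)                     # flatten back: groups interleaved

# With 8 channels and groups=2, channel order 0..7 becomes 0,4,1,5,2,6,3,7,
# so the next grouped convolution sees information from both groups.
x = np.arange(8, dtype=float).reshape(1, 8, 1, 1)
print(channel_shuffle(x, groups=2).ravel())
```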

(d) GoogleNet:

GoogleNet, also known as Inception-v1, uses Inception units that allow the network to capture patterns at various scales, making it suitable for histopathological images with both small and large tissue structures. In the Inception module, each block applies 1×1, 3×3, and 5×5 convolutional layers in parallel, followed by concatenation of the results. Global average pooling reduces overfitting by averaging the feature map instead of using dense layers. The Inception module is defined as,

(12) Y = \sum_{i=1}^{N} F_i(X) + b_i

where F_i(X) represents convolutions with different filter sizes and b_i is the bias term. In breast cancer detection, multi-scale analysis is crucial because cancerous regions vary greatly in size, from individual nuclei to large areas of disorganized tissue.
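A minimal PyTorch sketch of the parallel-branch idea is shown below. It follows the concatenation described in the text; the 16-channel branch widths are illustrative assumptions, not GoogleNet's exact configuration.

```python
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    """Parallel 1x1, 3x3, and 5x5 convolutions with concatenated outputs."""
    def __init__(self, in_ch, branch_ch=16):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, branch_ch, kernel_size=1)
        self.b3 = nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=1)
        self.b5 = nn.Conv2d(in_ch, branch_ch, kernel_size=5, padding=2)
    def forward(self, x):
        # Each branch sees the same input at a different receptive-field scale.
        return torch.cat([self.b1(x), self.b3(x), self.b5(x)], dim=1)

# A 64x64 RGB patch keeps its spatial size and gains 3*16 output channels:
# InceptionBlock(3)(torch.randn(1, 3, 64, 64)).shape -> (1, 48, 64, 64)
```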

(e) MobileNet:

MobileNet is a lightweight DL network developed for mobile and embedded devices. It was designed with depth-wise separable convolutions to reduce the number of parameters while preserving accuracy. The depth-wise separable convolution is split into a depth-wise convolution and a point-wise convolution,

(13) Y = (X \cdot CL_{depth}) \cdot CL_{point}

where CL_depth applies a depth-wise convolution and CL_point applies a 1×1 point-wise convolution. MobileNet is useful for analyzing complex tissue structures in clinical settings, providing faster predictions with minimal computation.
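Eq. (13) corresponds to the following PyTorch sketch, where the 3×3 kernel size and the stride are illustrative assumptions.

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Eq. (13): depth-wise conv (CL_depth) then 1x1 point-wise conv (CL_point)."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        # groups=in_ch gives one 3x3 filter per input channel (depth-wise).
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch)
        # 1x1 convolution mixes the per-channel outputs (point-wise).
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)
    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# Parameters: roughly 9*in_ch + in_ch*out_ch, versus 9*in_ch*out_ch for a
# standard 3x3 convolution -- close to a 9x reduction for wide layers.
```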

(f) ResNet-101:

ResNet-101 is a deep network that uses residual connections to address the vanishing gradient problem by permitting the network to learn very deep representations. This is essential for detecting subtle features in large histopathological datasets. Each residual block learns a residual function instead of a direct mapping, making optimization easier. The depth of the network allows it to capture intricate patterns in cancerous tissue,

(14) Y = F(X, w) + X

where F(X, w) is the output of a few convolutional layers and X is the input added via a skip connection. ResNet-101 is effective for breast cancer detection because it can extract hierarchical features across multiple layers to identify complex tissue structures.
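A minimal PyTorch sketch of Eq. (14) follows. The two-convolution body with batch normalization is an assumption for illustration, not ResNet-101's exact bottleneck layout.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Eq. (14): Y = F(X, w) + X. The identity skip lets gradients bypass F."""
    def __init__(self, ch):
        super().__init__()
        self.f = nn.Sequential(                      # residual function F(X, w)
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))
        self.relu = nn.ReLU()
    def forward(self, x):
        return self.relu(self.f(x) + x)              # add the skip connection
```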

F. Randomized Decision Tree

In the context of final-stage classification, the RDT is used to predict malignant cancer stages from Stage 1 to 5 based on histopathological features such as cell structure, nuclei shape, size, mitotic count, and tissue patterns. The RDT is a variant of the traditional decision tree algorithm that introduces randomness in the splitting criteria. After the BC subtypes are classified, the RDT determines the cancer stage (Stage 1 to 5). The main idea is to build a tree that randomly selects a subset of features (rather than all features) at each split. This randomness reduces the risk of overfitting and allows the tree to explore multiple splitting strategies. The Gini impurity gives the probability that a randomly selected sample would be incorrectly labeled if labels were assigned randomly according to the distribution of labels at a given node, and is used to evaluate the quality of a split,

(15) Gini = 1 - \sum_{i=1}^{tc} p_i^2

where tc is the total number of classes, and p_i is the proportion of samples in the node belonging to class i. Alternatively, the entropy EN measures the uncertainty in a node. A split is chosen to maximize the information gain IG,

(16) EN = -\sum_{i=1}^{tc} p_i \log_2(p_i)

(17) IG = EN(parent) - \sum_j \frac{n_j}{n} EN(child_j)

where n_j is the number of samples in child node j, and n is the total number of samples in the parent node. If an input vector X reaches a leaf node with multiple training samples, the predicted label is determined by majority voting for classification,

(18) y = \arg\max_i p_i

where p_i is the proportion of samples in the leaf node belonging to class i, and y is the predicted stage of cancer. Each tree independently predicts the cancer stage, and the final prediction is determined by majority voting across all trees,

(19) \hat{y} = \arg\max_i \sum_{t=1}^{T} I(y_t = i)

where T is the total number of trees, and I is an indicator function that returns 1 if the predicted stage y_t equals class i and 0 otherwise. At each node, the dataset is split based on specific feature thresholds, and the random feature subsets create diversity and prevent overfitting.
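The splitting criteria of Eqs. (15)-(17) and the voting of Eq. (19) can be sketched directly in NumPy. In this sketch, the sqrt-sized feature subset and the `predict` interface of the individual trees are hypothetical placeholders, not the paper's specification.

```python
import numpy as np

def gini(y):
    """Eq. (15): impurity of a node from its class proportions."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def entropy(y):
    """Eq. (16): uncertainty of a node."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(parent, children):
    """Eq. (17): entropy reduction achieved by a split."""
    n = len(parent)
    return entropy(parent) - sum(len(c) / n * entropy(c) for c in children)

def random_feature_subset(n_features, rng):
    """The RDT's randomness: consider only a subset of features per split
    (a sqrt-sized subset is an assumed, common choice)."""
    k = max(1, int(np.sqrt(n_features)))
    return rng.choice(n_features, size=k, replace=False)

def forest_predict(trees, x):
    """Eq. (19): majority vote of the per-tree stage predictions y_t."""
    votes = np.array([t.predict(x) for t in trees])  # each tree picks a stage 1..5
    vals, counts = np.unique(votes, return_counts=True)
    return vals[counts.argmax()]
```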

4. Results and discussion

The experiments were implemented in Matlab-2020b to assess the efficiency of the proposed model. The collected dataset is used as the retrieval dataset, with 75 % used for training and 25 % for testing. Experimental results of the proposed model on a sample of dataset images are visualized in Fig. 3.

Fig. 3. Experimental results of the proposed TRI-BCC model for BC classification.

Fig. 3 shows the tri-level classification process for BC analysis on the collected images. The process starts with the input histology images (column 1), followed by denoising to enhance image clarity (column 2). Augmentation is performed to increase data variability (column 3). The images are then segmented to identify key regions within tissue structures (column 4). Level-I identifies the general category, benign or malignant (column 5), while Level-II identifies the specific benign and malignant subtypes (column 6). The final column shows the classified cancer stage of malignant cases (column 7).

A. Performance analysis

In this section, several performance evaluation metrics, including specificity (SPE), sensitivity (SEN), precision (PRE), accuracy (ACC), and F1 score (F1S), are employed to objectively assess the proposed approach. Additionally, the Dice score (DS) and Jaccard score (JS) are used to further evaluate segmentation.

(20) SPE = \frac{T^-}{T^- + F^+}

(21) SEN = \frac{T^+}{T^+ + F^-}

(22) PRE = \frac{T^+}{T^+ + F^+}

(23) ACC = \frac{T^+ + T^-}{\text{Total no. of samples}}

(24) F1S = 2 \left( \frac{Precision \times Recall}{Precision + Recall} \right)

(25) DS = \frac{2T^+}{F^+ + 2T^+ + F^-}

(26) JS = \frac{T^+}{T^+ + F^- + F^+}

where true positives (T^+) and true negatives (T^-) indicate correctly identified pixels, while false positives (F^+) and false negatives (F^-) represent misclassified pixels.
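Eqs. (20)-(26) translate directly into code; the counts in the usage comment below are hypothetical.

```python
def evaluation_metrics(tp, tn, fp, fn):
    """Eqs. (20)-(26) from confusion-matrix counts T+, T-, F+, F-."""
    spe = tn / (tn + fp)                      # (20) specificity
    sen = tp / (tp + fn)                      # (21) sensitivity (recall)
    pre = tp / (tp + fp)                      # (22) precision
    acc = (tp + tn) / (tp + tn + fp + fn)     # (23) accuracy
    f1s = 2 * pre * sen / (pre + sen)         # (24) F1 score
    ds = 2 * tp / (fp + 2 * tp + fn)          # (25) Dice score
    js = tp / (tp + fn + fp)                  # (26) Jaccard score
    return {"SPE": spe, "SEN": sen, "PRE": pre, "ACC": acc,
            "F1S": f1s, "DS": ds, "JS": js}

# Hypothetical example: evaluation_metrics(tp=950, tn=940, fp=10, fn=9)
```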

Table 4 summarizes the performance of BC recognition in distinguishing between benign and malignant tumors (Level-I). The ACC is slightly higher for benign cases (99.25 %) than for malignant cases (98.84 %). The SPE is higher for benign cases (99.01 %), reflecting better false-positive avoidance. SEN shows that malignant tumors are detected more often (99.01 %) than benign tumors (98.73 %). The PRE value is higher for malignant cases (99.27 %) than for benign cases (98.45 %), indicating fewer false positives. F1S indicates a better balance in malignant detection (99.49 %) than in benign detection (98.18 %).

Table 4. Efficiency analysis of the proposed TRI-BCC model for Level-I.

Classes     ACC     SPE     SEN     PRE     F1S
Benign      99.25   99.01   98.73   98.45   98.18
Malignant   98.84   98.06   99.01   99.27   99.49
Average     99.04   98.95   98.69   99.49   98.47

Fig. 4 compares the performance metrics across various benign and malignant classes. Among benign classes, the FA subtype has the highest overall performance, while the TA subtype shows relatively lower scores. In malignant classes, the PC subtype demonstrates strong sensitivity and F1 scores, whereas the MC subtype performs relatively lower across all metrics. These graphs highlight the superior efficiency of the proposed TRI-BCC for classifying various BC subtypes.

Fig. 4. Level-II analysis of the proposed TRI-BCC model for (a) Benign classes and (b) Malignant classes.

Fig. 5 illustrates the performance of the breast cancer detection model across malignant stages (Stage 1 to 5) by showing trends in performance metrics. Stage 4 exhibits a notable drop in specificity, while other metrics remain relatively stable across stages. Precision peaks at Stage 2 and drops at Stage 4, whereas sensitivity shows an increasing trend after Stage 1. The F1 score remains consistent, reflecting balanced performance particularly at Stage 5.

Fig. 5. Level-III analysis of the proposed TRI-BCC model for malignant class stages.

The accuracy curve in Fig. 6(a) was generated over 100 training epochs. Fig. 6(b) shows the corresponding loss curve, in which the loss of the proposed model decreases to a minimum as the number of epochs increases. These findings demonstrate the efficiency of the proposed TRI-BCC model for classifying BC stages with a low error rate.

Fig. 6. Training and testing curves of the proposed TRI-BCC model: (a) Accuracy curve; (b) Loss curve.

B. Comparative analysis

In this section, the proposed TRI-BCC framework is evaluated against existing BC classification models and other DL structures using multiple metrics.

Table 5 compares the performance of three existing models: Firefly Optimization (FFO), Aquila Optimization (AO), and Bald Eagle Search Optimization (BESO), based on two key segmentation metrics: DS and JS. Both metrics measure the overlap between predicted and actual segmentation, with higher values indicating better efficiency. The proposed GWO algorithm outperforms the others by achieving the highest DS (0.91) and JS (0.87), indicating improved segmentation accuracy.

Table 5. Comparative analysis of optimization algorithms based on DS and JS.

Metrics         FFO [24]   AO [25]   BESO [26]   GWO (ours)
Dice score      0.82       0.85      0.87        0.91
Jaccard score   0.76       0.79      0.81        0.87

Table 6 compares the efficiency of various classification methods on the same dataset using ACC, SPE, SEN, PRE, and F1S. Among the baselines, Random Forest (RF) demonstrates the highest overall performance, with an accuracy of 96.3 % and strong precision, sensitivity, and specificity. Decision Tree (DT) also performs well, particularly in sensitivity (95.5 %). K-Nearest Neighbors (KNN) and Naive Bayes (NB) show moderate results, with accuracies of 89.5 % and 87.9 %, respectively. The RDT model achieves the best and most balanced performance across all metrics.

Table 6. Comparative evaluation of different ML classification models.

Methods   ACC     SPE    SEN    PRE    F1S
NB        87.9    86.8   88.4   86.3   87.2
DT        94.7    94.0   95.5   93.1   94.2
RF        96.3    95.9   96.8   95.4   96.1
KNN       89.5    89.0   90.2   88.7   89.1
RDT       99.04   98.9   98.6   99.4   98.4

Fig. 7 presents a comparison of segmentation methods applied to histopathology images. The first column displays the original input images, followed by the ground truth segmentation. The subsequent columns show results from various segmentation techniques: FFO [24], AO [25], and BESO [26] algorithms, along with the proposed GWO algorithm. The GWO algorithm produces clearer and more accurate segmentation results by closely aligning with the ground truth, indicating its effectiveness over other techniques.

Fig. 7. Visual comparison of different optimization algorithms for segmentation.

Table 7 compares the methods based on their reported accuracy for BC classification. The combination of U-Net and YOLO achieves 93.0 % accuracy, while the fine-tuned networks yield 98.28 %. The hybrid deep neural network achieves 96.42 %, and the Pa-DBN-BC model has the lowest accuracy at 86.0 %. The proposed TRI-BCC model achieves the highest accuracy of 99.06 %, indicating its superior efficiency over previous methods: improvements of 6.06, 0.78, 2.64, and 13.06 percentage points over U-Net + YOLO, the fine-tuned networks, the hybrid deep neural network, and Pa-DBN-BC, respectively. This analysis highlights the progression of accuracy improvements with each approach, demonstrating the effectiveness of the proposed TRI-BCC model.

Table 7. Accuracy comparison: proposed model vs existing models.

Authors                 Methods                      Accuracy
Rahman et al. [15]      U-Net + YOLO                 93.00 %
Abunasser et al. [16]   Fine-tuned networks          98.28 %
Singh et al. [19]       Hybrid deep neural network   96.42 %
Hirra et al. [22]       Pa-DBN-BC                    86.00 %
Proposed model          TRI-BCC model                99.06 %
5. Conclusion

This work introduces the TRI-BCC model, a highly accurate and efficient approach for classifying BC stages using a tri-level classification framework. Histopathological images are enhanced through ABCDE filtering, followed by optimized segmentation using the GWO algorithm. TL models, including CapsuleNet, EfficientNet, ShuffleNet, GoogleNet, MobileNet, and ResNet-101, classify cases into benign and malignant categories and their subtypes. Cases identified as malignant undergo further staging using the RDT, which classifies the cancer into five distinct stages. The proposed TRI-BCC model demonstrates high efficiency in classifying benign and malignant cases and in further categorizing cancer stages. Its competence was assessed using ACC, SPE, SEN, PRE, F1S, DS, and JS. The proposed TRI-BCC model outperforms current techniques, achieving an accuracy of 99.06 % and improvements of 6.06, 0.78, 2.64, and 13.06 percentage points over U-Net + YOLO, the fine-tuned networks, the hybrid deep neural network, and Pa-DBN-BC, respectively, making it an efficient tool for improving early detection. Future work could involve developing custom DL architectures specifically tailored to histopathological image features and exploring hybrid optimization techniques to enhance segmentation accuracy in complex and noisy datasets.

Language: English
Page range: 327 - 337
Submitted on: Dec 11, 2024
Accepted on: Sep 11, 2025
Published on: Nov 13, 2025
Published by: Slovak Academy of Sciences, Institute of Measurement Science
In partnership with: Paradigm Publishing Services
Publication frequency: Volume open

© 2025 Sridevi Rajalingam, Kavitha Maruthai, published by Slovak Academy of Sciences, Institute of Measurement Science
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.

Volume 25 (2025): Issue 6 (December 2025)