
A new local tetra pattern in composite planes (LTcP) technique for classifying brain tumors using partial least squares and super-pixel segmentation


I.
Introduction

The uncontrolled, aberrant growth of brain cells is called a brain tumor. Brain tumors may or may not be invasive; in the worst case, a tumor may result in potentially fatal brain damage. Primary malignant brain tumors are projected to affect roughly 20,000 people. Magnetic resonance imaging (MRI) data reveal differences in the appearance of distinct brain tumor presentations and classes. For this reason, brain cancers are usually identified and categorized via MRI, which helps medical professionals assess malignancies and plan therapy. The study of these MRI images is crucial and requires considerable expertise. One way to examine MRI images is manual inspection; other methods examine the acquired images automatically. Automatic examination performs well in segmenting the images, and segmentation is therefore very important for obtaining accurate and optimal findings. There are many segmentation techniques, such as edge-based, pixel-based, and texture-based approaches; pixel-based segmentation is one of the most significant. Superpixel segmentation groups pixels into perceptually consistent small regions. Initially, superpixel segmentation was used only to produce an over-segmentation of the image. It offers highly meaningful units that serve as building blocks for computer vision and image processing algorithms, and the main benefit of switching from pixels to superpixels is the lower computing overhead. Superpixel segmentation is commonly used in many areas, such as image segmentation and image reconstruction. Another very useful approach is the local tetra pattern (LTrP). Employing the direction of the center gray pixel, the LTrP [1] characterizes the spatial layout of the neighborhood texture and encodes the relationship between the central pixel and its adjacent pixels [2, 3]. These relationships are determined through (n-1)th-order derivatives computed along the 0° and 90° directions. The local derivative pattern (LDP), on the other hand, encodes the association between the (n-1)th-order derivatives of the center pixel and its adjacent pixels at 0°, 45°, and 90°. In addition, the partial least squares (PLS) approach is an excellent method for regression analysis and classification because of the robustness of the created model. It is also applicable to many other domains, including process monitoring, marketing analysis, and image processing. PLS regression [4, 5] is a modeling technique that can be used to reduce primitive data and remove redundant information, or noise, between a pair of datasets. However, existing approaches produce only coarse approximations, which cannot be used for deep analysis, and extracting a large number of features incurs high time complexity. Moreover, conventional methods require a large number of features to give good results, whereas MRI images sometimes do not yield the required number of features, which hinders accurate analysis and may lead to severe consequences for the patient. Thus, we propose an approach based on composite planes, in which the features are acquired with minimal time and are fused into optimal fused features using PLS. These optimal fused features, when fed to the classifiers, give better output in terms of accuracy and reduced time complexity. Thus, we acquire a larger number of features initially and then eventually use a smaller number of optimal fused features.

II.
Related Work

There are many barriers to brain tumor classification, especially in multimodal datasets, and these reduce the performance of automated systems [10]. The classification process typically proceeds in two stages: first, feature extraction is performed, and second, classifier-based classification is carried out [23, 24]. Feature extraction captures properties such as shape, color, and texture [28, 29]; however, the strength of the collected features determines how well the classifiers perform. The interest of computer vision researchers has increased with the recent success of deep learning in the medical field. Not all retrieved deep learning features are helpful for accurate classification, and they can take a long time to process. Furthermore, there is a great degree of resemblance between certain tumor presentations, such as those seen in FLAIR and T2 images, which makes classification difficult [30, 31]. In this section, some background ideas that motivated our proposed work are presented.

Valizadeh and Babapour Mofrad [6] studied textured image retrieval and proposed an approach involving the Fourier descriptor and the local tetra pattern. By considering the vertical and horizontal directions determined via first-order derivatives, the method establishes an association between the center pixel and its neighbors. The most important characteristic of an image is its texture, which filters out the images best matched to the input image. Figure 1 shows the architecture of their work. They also investigated the relationship between local ternary patterns (LTP) [7, 32] and the Euclidean distance [8]. They used a dataset of retinal images labeled with grades of macular edema and retinopathy. The Fourier transform is used to produce the feature vectors based on the mean values of the real and imaginary parts of the complex polar coordinates in the frequency domain. The boundaries and modeling of objects may benefit from the use of shape-based Fourier descriptors.

Figure 1:

Architecture of an LTrP via a Fourier descriptor [6].

Ojala et al. [9] proposed texture classification based on every pixel in an image. A local binary pattern (LBP) code is generated for every pixel by treating it as the center of a defined neighborhood; the center pixel's gray level is then compared with that of each neighbor. This procedure is formally represented by Eq. (1):

(1) $$\mathrm{LBP}^{n} = \sum_{k=0}^{n} 2^{k} \times f(S_e - S_k)$$
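For illustration only (not the authors' implementation), the following Python sketch computes the LBP code of Eq. (1) for one pixel from its eight neighbors; the clockwise neighbor ordering and the thresholding rule f(x) = 1 for x ≥ 0 are assumptions.

```python
import numpy as np

def lbp_code(img, r, c):
    """Compute an 8-neighbor LBP code for the pixel at (r, c), following Eq. (1):
    bit k is set when f(S_e - S_k) = 1, i.e. when the center is >= neighbor k."""
    center = int(img[r, c])
    # Neighbor offsets (row, col); the clockwise ordering is an illustrative assumption.
    offsets = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
               (0, -1), (1, -1), (1, 0), (1, 1)]
    code = 0
    for k, (dr, dc) in enumerate(offsets):
        if center - int(img[r + dr, c + dc]) >= 0:  # f(S_e - S_k)
            code |= 1 << k                          # weight 2^k
    return code

# Minimal usage on a toy 3x3 patch.
patch = np.array([[10, 20, 30],
                  [40, 50, 60],
                  [70, 80, 90]])
print(lbp_code(patch, 1, 1))
```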

Yang et al. [10] reported that superpixels are an effective means of reducing the number of image primitives needed for further processing. Figure 2 shows the overall architecture of their work. The images were first segmented via fully convolutional network superpixel segmentation, and the acquired features were then input into PSMNet. However, few researchers have combined superpixels with DNNs; one of the primary reasons is that superpixels break the regular grid layout on which conventional convolution operates [11]. Guided by a standard initialization technique used by superpixel algorithms, they proposed a new approach that estimates superpixels on a conventional image layout via a DNN [12, 13]. They achieved a speed of more than 50 frames per second and obtained good results on large datasets. Based on the obtained superpixels, they designed a downsampling/upsampling strategy.

Figure 2:

Approach used by Yang et al. [10].

Zhao et al. [14] studied the mobility and dynamic texture of each pixel in three hyperplanes using local spatiotemporal data along the x-, y-, and z-axes, known as LBP-TOP. This method combines LBP codes [15, 16] from the three orthogonal planes to form a single LBP-TOP code, considering only the co-occurrence statistics in these planes. The center pixel where the three planes intersect is used to compute the LBP codes for all pixels, which are then concatenated into a histogram. The ideal bin size is 3 × 2^K, smaller than the 2^(3K+2) used for LBP. Qiu et al. [17] introduced the local tetra pattern (LTrP) by combining LDPs [18, 19], LTP [20, 21], and LBP to represent the directional information of the center pixel s_e in static textures, with first-order derivatives computed at 0° and 90° for each pixel:

(2) $$M_{0^\circ}^{1}(s_e) = M(s_h) - M(s_e)$$

(3) $$M_{90^\circ}^{1}(s_e) = M(s_v) - M(s_e)$$

In Eqs. (2) and (3), M(s_e) is the gray value of the center pixel, M(s_h) is the gray value of its horizontal neighbor, and M(s_v) is the gray value of its vertical neighbor. The direction of the center pixel, denoted the first-order derivative direction M_Dir^1(s_e), is then evaluated as:

(4) $$M_{Dir}^{1}(s_e) = \begin{cases} 1, & \text{if } M_{0^\circ}^{1}(s_e) \ge 0 \text{ and } M_{90^\circ}^{1}(s_e) \ge 0 \\ 2, & \text{if } M_{0^\circ}^{1}(s_e) < 0 \text{ and } M_{90^\circ}^{1}(s_e) \ge 0 \\ 3, & \text{if } M_{0^\circ}^{1}(s_e) < 0 \text{ and } M_{90^\circ}^{1}(s_e) < 0 \\ 4, & \text{if } M_{0^\circ}^{1}(s_e) \ge 0 \text{ and } M_{90^\circ}^{1}(s_e) < 0 \end{cases}$$
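As a hedged illustration of Eqs. (2)-(4), the Python sketch below computes the two first-order derivatives and the resulting quadrant direction (1-4) for every pixel; the choice of the right and upper pixels as the horizontal and vertical neighbors is an assumption.

```python
import numpy as np

def first_order_directions(img):
    """Quadrant direction map of Eq. (4) from the derivatives of Eqs. (2)-(3).

    The right neighbor is used as s_h and the upper neighbor as s_v; this
    neighbor choice is an assumption for illustration.
    """
    img = img.astype(np.float64)
    d0 = np.zeros_like(img)    # M^1_0  : horizontal neighbor minus center
    d90 = np.zeros_like(img)   # M^1_90 : vertical neighbor minus center
    d0[:, :-1] = img[:, 1:] - img[:, :-1]
    d90[1:, :] = img[:-1, :] - img[1:, :]

    directions = np.empty(img.shape, dtype=np.uint8)
    directions[(d0 >= 0) & (d90 >= 0)] = 1
    directions[(d0 < 0) & (d90 >= 0)] = 2
    directions[(d0 < 0) & (d90 < 0)] = 3
    directions[(d0 >= 0) & (d90 < 0)] = 4
    return d0, d90, directions
```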

III.
Proposed Work

The proposed work is divided into three phases:

  • A. Segmentation of the acquired image via superpixel segmentation

  • B. Applying novel LTcP to the segmented image to gain features

  • C. Fusing the features via PLS, feeding the fused features to different classifiers, and comparing the results.

The images were segmented using superpixel segmentation, followed by feature extraction with the local tetra pattern in composite planes (LTcP) descriptor, computed over the xy, yz, and xz planes. First-order derivatives were used to relate the center pixel to its neighbors. The extracted features were then fused using a filtration method and PLS, reducing the number of features needed for accurate classification and minimizing processing time. Figure 3 illustrates the overall architecture from data collection to results; the phases are numbered from 1 to 4. After segmentation, the features are extracted, and the optimal features are fused. These fused features are fed to the classifiers to give accurate results within reduced computational time.

Figure 3:

Overall architecture of the proposed methodology.

In the next section, we discuss all three phases in detail.

a.
Superpixel segmentation

We employed the simple linear iterative clustering (SLIC) algorithm [21, 22] for superpixel segmentation. Here, we provide a detailed explanation of how it works:

a) Initialization

The image is converted to the CIELAB color space, which better captures perceptual color differences.

The image is divided into a grid of roughly equal-sized square regions. Each region's center is initialized as a cluster center.

b) Cluster center perturbation

To avoid placing cluster centers on edges, each initial cluster center is moved to the lowest-gradient position within its 3 × 3 neighborhood.

c) Assignment step

Each pixel is assigned to the nearest cluster center based on a distance measure that combines color similarity and spatial proximity. The distance measure D is defined as:

(5) $$D = \sqrt{d_{lab}^{2} + \left(\frac{d_{xy}}{S}\right)^{2}}$$

where d_lab is the Euclidean distance in the CIELAB color space, d_xy is the Euclidean distance in the image plane, and S is a normalization factor related to the superpixel size (Figure 4).

Figure 4:

Illustration of (A) the segmented T1 MRI image, (B) the segmented T2 MRI image, and (C) the segmented FLAIR MRI image. MRI, magnetic resonance imaging.

d) Update step

After all the pixels are assigned to clusters, the cluster centers are updated to be the means of all the pixels assigned to them in both color and spatial dimensions.

e) Iterative refinement

Steps (c) and (d) are repeated until convergence, typically when the cluster centers no longer change significantly between iterations.

f) Enforcing connectivity

After convergence, small, isolated regions are merged with the nearest larger superpixel to ensure spatial connectivity.
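The steps above follow the standard SLIC procedure. As a minimal sketch (the authors report a MATLAB implementation; this Python example with scikit-image is only illustrative), the whole procedure can be reproduced with an off-the-shelf SLIC call, where n_segments and compactness control the superpixel count and the color/spatial trade-off of Eq. (5):

```python
import numpy as np
from skimage.segmentation import slic

# Assumed input: a 2-D grayscale MRI slice with intensities scaled to [0, 1].
# A random array stands in for real data in this illustration.
mri_slice = np.random.rand(240, 240)

# slic() carries out steps (a)-(f): grid initialization, gradient-based center
# perturbation, joint color/spatial assignment (Eq. (5)), center updates,
# iterative refinement, and connectivity enforcement.
labels = slic(
    mri_slice,
    n_segments=300,     # approximate number of superpixels
    compactness=0.1,    # weight of spatial vs. intensity distance (cf. S in Eq. (5))
    channel_axis=None,  # grayscale input (scikit-image >= 0.19)
    start_label=0,
)

print(labels.max() + 1, "superpixels produced")
```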

After segmentation, we introduced a novel descriptor, the local tetra pattern in composite planes (LTcP), to extract the maximum number of features from the segmented images. This method optimizes MRI brain tumor classification by combining pixel-based segmentation with the LTcP methodology. Building on LBPs, LTcP extends texture descriptors by incorporating directional pixel information, enhancing feature extraction in image processing. Below is a detailed explanation of the LTcP algorithm:

b.
LTcP algorithm

a) First-order derivative calculation

  • For each pixel in the image, the first-order derivatives along each direction are computed.

  • These derivatives capture the intensity changes in the respective directions.

b) Direction encoding

  • For each pixel, the direction of the gradient is encoded based on the sign of the derivative. This results in four directional codes:

  • 0 if the gradient is positive in the 0° direction.

  • 1 if the gradient is positive in the 45° direction.

  • 2 if the gradient is positive in the 90° direction.

  • 3 if the gradient is positive in the 135° direction.

c) Pattern generation

  • The directional codes of neighboring pixels are combined to form a tetra-directional pattern for each pixel. This pattern represents the local texture around the pixel.

d) Histogram construction

  • A histogram of the tetra-directional patterns for the entire image or a region of interest is constructed. This histogram serves as the feature vector representing the texture of the image.

e) Feature vector normalization

  • The histogram is normalized to ensure that the feature vector is scale invariant and robust to illumination changes.
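A minimal sketch of steps (c)-(e), assuming the per-pixel pattern codes have already been computed; the helper name and the L1 normalization are illustrative choices, not the authors' code:

```python
import numpy as np

def pattern_histogram(pattern_map, n_bins):
    """Steps (c)-(e): build a histogram of per-pixel pattern codes and
    L1-normalize it so the feature vector is scale invariant."""
    hist = np.bincount(pattern_map.ravel(), minlength=n_bins).astype(np.float64)
    total = hist.sum()
    return hist / total if total > 0 else hist

# Toy example: a 4x4 map of directional pattern codes in {0, ..., 7}.
codes = np.array([[0, 1, 2, 3],
                  [3, 2, 1, 0],
                  [4, 5, 6, 7],
                  [7, 6, 5, 4]])
print(pattern_histogram(codes, n_bins=8))
```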

This work introduces a novel spatiotemporal descriptor, LTcP, for identifying and characterizing textures in images and videos. LTcP is computed for each center pixel by analyzing co-occurrence statistics across three orthogonal planes (xy, xz, and yz). Histograms from these planes are concatenated to form the final texture descriptor. The method incorporates superpixel segmentation for efficient feature selection while leveraging the spatial and temporal structures of texture patterns.

Extensive experiments on BRATS2019 and BRATS2018 datasets demonstrate LTcP's superior classification accuracy compared to other advanced descriptors. By using first-order derivatives of horizontal and vertical magnitudes, the LTcP descriptor effectively captures local texture details. A flow diagram in Figure 5 outlines the process, showcasing how LTcP extracts a comprehensive set of features, enabling optimal feature selection for efficient and accurate results.

Figure 5:

Use of the LTcP descriptor for feature extraction.

By using Eq. (4), the direction of each center pixel is obtained; it takes one of four values, so the image is quartered into four parts (directions). The second-order pattern LTcP²(s_e) is calculated by Eqs. (6) and (7) as:

(6) $$\mathrm{LTcP}^{2}(s_e) = \left\{ f_2\big(M_{Dir}^{1}(s_e), M_{Dir}^{1}(s_1)\big), \ldots, f_2\big(M_{Dir}^{1}(s_e), M_{Dir}^{1}(s_p)\big) \right\}$$

(7) $$f_2\big(M_{Dir}^{1}(s_e), M_{Dir}^{1}(s_p)\big) = \begin{cases} 0, & \text{if } M_{Dir}^{1}(s_e) = M_{Dir}^{1}(s_p) \\ M_{Dir}^{1}(s_p), & \text{otherwise} \end{cases}$$

(8) $$\mathrm{LBP}_{Dir=\phi}^{2} = \sum_{k=1}^{n} 2^{k-1} \times f_3\big(\mathrm{LTcP}^{2}(s_e)\big)$$

(9) $$f_3\big(\mathrm{LTcP}^{2}(s_e)\big)\Big|_{Dir=\phi} = \begin{cases} 1, & \text{if } \mathrm{LTcP}^{2}(s_e) = \phi \\ 0, & \text{otherwise} \end{cases}$$

where ϕ = {2, 3, 4} if the direction of the center pixel derived from Eq. (4) is 1, and P is the number of neighbors of the center pixel. Similar calculations are made for the binary patterns of the remaining three orientations of the center pixel; consequently, the LTrP generates 12 (4 × 3) binary patterns for each pixel. In addition, a magnitude-based binary pattern is determined as:

(10) $$R_{I^1}(s_p) = \sqrt{ \big(M_{0^\circ}^{1}(s_p)\big)^{2} + \big(M_{90^\circ}^{1}(s_p)\big)^{2} }$$

(11) $$\mathrm{LP} = \sum_{k=1}^{n} 2^{k-1} \times f_4\Big( \big(R_{I^1}(s_p) - R_{I^1}(s_e)\big) / h \Big)$$

(12) $$f_4(x) = \begin{cases} 1, & \text{if } x \ge 0 \\ 0, & \text{otherwise} \end{cases}$$

Here, R_{I^1}(s_e) is the magnitude at the center pixel, R_{I^1}(s_p) is the magnitude at the neighboring pixels, h is a scaling factor, and Eq. (11) gives the decimal value of the magnitude pattern.
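To make Eqs. (6)-(12) concrete, the following sketch derives, for a single center pixel, the tetra code of each neighbor (Eq. (7)), the direction-specific binary patterns (Eqs. (8)-(9)), and the magnitude pattern (Eqs. (10)-(12)); the eight-neighbor ordering and the reuse of the direction and derivative maps from the earlier sketch are assumptions.

```python
import numpy as np

# Eight-neighbor offsets (row, col); this ordering is an assumption for illustration.
OFFSETS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

def tetra_patterns(directions, d0, d90, r, c, h=1.0):
    """Direction-specific binary patterns (Eqs. (8)-(9)) and the magnitude
    pattern (Eqs. (10)-(12)) for the center pixel (r, c).

    `directions` is the map from Eq. (4); d0 and d90 are the first-order
    derivative maps at 0 and 90 degrees.
    """
    center_dir = directions[r, c]

    # Eq. (7): tetra code per neighbor (0 when the neighbor shares the center direction).
    tetra = [0 if directions[r + dr, c + dc] == center_dir
             else int(directions[r + dr, c + dc])
             for dr, dc in OFFSETS]

    # Eqs. (8)-(9): one binary pattern for each remaining direction phi.
    phis = [d for d in (1, 2, 3, 4) if d != center_dir]
    binary_patterns = {phi: sum(1 << k for k, t in enumerate(tetra) if t == phi)
                       for phi in phis}

    # Eqs. (10)-(12): magnitude pattern from the derivative magnitudes.
    mag = np.sqrt(d0 ** 2 + d90 ** 2)
    magnitude_pattern = sum(1 << k for k, (dr, dc) in enumerate(OFFSETS)
                            if (mag[r + dr, c + dc] - mag[r, c]) / h >= 0)
    return binary_patterns, magnitude_pattern
```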

Since the feature vector length of the LTrP is 2^P, the computational cost increases significantly; a uniform pattern scheme can help lower this cost [25, 26]. P(P-1)+2 is the number of distinct uniform patterns for a given image. Consequently, the image can be expressed in terms of the LTcP histogram, which is computed as follows:

(13) $$h_t(M) = \frac{1}{L_1 \times L_2} \sum_{j=1}^{L_1} \sum_{k=1}^{L_2} f_5\big(\mathrm{LTcP}(j,k), M\big)$$

(14) $$M \in \big[0, P(P-1)+2\big]$$

(15) $$f_5(x, y) = \begin{cases} 1, & \text{if } x = y \\ 0, & \text{otherwise} \end{cases}$$

where L1 × L2 is the size of the images.

Let LTcP(s_e, i) be the final LTcP code for the center pixel at index i. Notably, i = {1, 2, …, (m × n)} when s_e lies in the xy plane, i = {1, 2, …, (n × t)} when s_e lies in the yz plane, and i = {1, 2, …, (t × m)} when s_e lies in the xz plane. Finally, we use the LTcP code values of every frame in each plane to build the histograms:

(16) $$H_{xy} = h\big(\mathrm{LTcP}(s_{e,i})\big), \quad \forall i = \{1, 2, \ldots, (m \times n)\}$$

(17) $$H_{yz} = h\big(\mathrm{LTcP}(s_{e,i})\big), \quad \forall i = \{1, 2, \ldots, (n \times t)\}$$

(18) $$H_{xz} = h\big(\mathrm{LTcP}(s_{e,i})\big), \quad \forall i = \{1, 2, \ldots, (t \times m)\}$$

(19) $$\mathrm{Max}\left\{ H_{xy}, H_{yz}, H_{xz} + b \right\}$$

where b is a balance (bias) term for reducing the effect of unstable superpixels.
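The composite-plane step of Eqs. (16)-(19) can be sketched as follows, assuming codes_xy, codes_yz, and codes_xz are per-plane LTcP code maps and interpreting Eq. (19) as an element-wise maximum; both the interpretation and the helper are illustrative assumptions.

```python
import numpy as np

def composite_plane_descriptor(codes_xy, codes_yz, codes_xz, n_bins, b=0.0):
    """Histograms H_xy, H_yz, H_xz of Eqs. (16)-(18) and the fused
    descriptor of Eq. (19), read here as an element-wise maximum."""
    def hist(codes):
        h = np.bincount(codes.ravel(), minlength=n_bins).astype(np.float64)
        return h / max(h.sum(), 1.0)

    h_xy, h_yz, h_xz = hist(codes_xy), hist(codes_yz), hist(codes_xz)
    fused = np.maximum.reduce([h_xy, h_yz, h_xz + b])  # b offsets unstable superpixels
    return h_xy, h_yz, h_xz, fused
```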

Figure 6 shows the features obtained with the above method. The number of features obtained was the highest when the LTcP method was used, compared with the other methods. Table 1 presents a comparison of our method with other available feature extraction methods. Eq. (19) was used to find the maximum number of features.

Figure 6:

Scatter plot of the feature vectors.

Table 1:

Numbers of features extracted using the proposed method

Method (feature extraction) | Number of features
LTcP (proposed method) | 42
LTrP [18] | 35
LBP [15] | 10
GLCM [26] | 19
GLRM [27] | 7

LBP, local binary pattern; LTrP, local Tetra Pattern.

Figure 7 shows the local features obtained after applying LTcP to various sets of MRI images.

Figure 7:

Illustration of (A) local features from T1 MR images, (B) local features from T2 MR images, and (C) local features from FLAIR MR images.

c.
PLS implementation
The extracted feature sets are first aggregated into the vectors R and S:

(20) $$\vec{R} = \sum_{t=1}^{d} F_t F_{Sw}(1i)^{T} = M$$

(21) $$\vec{S} = \sum_{t=1}^{d} F_t F_{Sw}(2i)^{T} = N$$

The path between r_i and s_i is obtained via PLS as follows:

(22) $$\{r_i; s_i\} = \underset{r^{T}r = s^{T}s = 1}{\arg\max}\; \mathrm{Cov}\big(\vec{R}^{T} r, \vec{S}^{T} s\big)$$

(23) $$\{r_i; s_i\} = \underset{r^{T}r = s^{T}s = 1}{\arg\max}\; u^{T} \delta_{uv} v, \quad \text{for } i = 1, 2, 3, \ldots, k,\; a = 1$$

(24) $$\sum_{t=1}^{h} \mu_t g_t(u_t) = \sum_{f=1}^{h} \mu_i g\big(r_i \cdot r_f + \mathrm{Offset}_t\big)$$

(25) $$\mu^{T} = E$$

where h is the number of hidden layers, u_i is the weight vector of the output, r_i is the weight vector of the input, and E is the optimal output.

(26) $$\hat{\mu}_{\mathrm{SVM}} = \underset{\mu}{\arg\min} \left\| \mu^{T} L - E \right\|$$

where L is the last layer of the hidden layers.
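As an illustrative stand-in for the PLS fusion step (the authors' implementation is in MATLAB), scikit-learn's PLSRegression finds weight vectors maximizing the covariance in Eq. (22); the fused latent scores can then be passed to a classifier such as SVM. The data below are synthetic placeholders.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVC

# Synthetic placeholder data: 42 LTcP features per image, binary tumor label.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 42))
y = rng.integers(0, 2, size=200)

# PLS projects the 42 features onto covariance-maximizing latent components
# (the "fused" features); 35 components matches the setting reported later.
pls = PLSRegression(n_components=35)
X_fused = pls.fit_transform(X, y)[0]   # latent scores of the feature block

clf = SVC(kernel="rbf").fit(X_fused, y)
print("training accuracy:", clf.score(X_fused, y))
```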

IV.
Results and Discussion

We calculated the accuracy and computation time after every small set of features was obtained. The best accuracy was obtained with 35 features; the method yielded saturated results after accommodating 35 of the total 42 features gained. These features were fused via the PLS method (Figures 7 and 8). Thus, high accuracy can be achieved even with fewer fused features (Figure 9). These results were produced with different classifiers, as shown in Table 2. We compared the results obtained with different classifiers, namely SVM, naïve Bayes, Softmax, and decision tree. The comparison was performed on two datasets, BraTS2018 and BraTS2019, and the approach performed better for BraTS2019 than for BraTS2018 (Figure 11); this is attributable to the format of the data in each dataset. Another observation concerned the type of classifier best suited to the LTcP and PLS fused features.
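Table 2 was obtained by recording accuracy and time for growing numbers of fused features; a schematic Python sketch of such a sweep is shown below, with synthetic data, an assumed train/test split, and default classifier settings rather than the authors' protocol.

```python
import time
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 42))        # synthetic stand-in for 42 LTcP features per image
y = rng.integers(0, 2, size=400)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

for n_fused in (5, 10, 15, 20, 25, 30, 35):
    start = time.perf_counter()
    pls = PLSRegression(n_components=n_fused).fit(X_tr, y_tr)
    clf = SVC(kernel="rbf").fit(pls.transform(X_tr), y_tr)
    acc = clf.score(pls.transform(X_te), y_te)
    elapsed = time.perf_counter() - start
    print(f"{n_fused:2d} fused features: accuracy={acc:.3f}, time={elapsed:.2f}s")
```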

Figure 8:

Feature fusion result after applying PLS for (A) T1 MRI images, (B) T2 MRI images, and (C) FLAIR MRI images. MRI, magnetic resonance imaging; PLS, partial least squares.

Figure 9:

Final accuracy chart and computational time chart based on the proposed method with different classifiers for BraTS2019 with 35 fused features via PLS. PLS, partial least squares.

Figure 10:

ROC curve for the proposed method on BraTS2019.

Table 2:

Comparison of results based on different classifiers using various numbers of fused features of the image (BraTS2019)

Number of features used | Technique | Feature extraction | Fusion method | Accuracy (%) | Computational time (s)
5 | SVM | LTcP | PLS | 80.2 | 82.3
5 | Naïve Bayes | LTcP | PLS | 79.6 | 81.9
5 | Softmax | LTcP | PLS | 79.2 | 83.8
5 | Decision tree | LTcP | PLS | 75 | 82.6
10 | SVM | LTcP | PLS | 82.3 | 85.3
10 | Naïve Bayes | LTcP | PLS | 80.1 | 82.1
10 | Softmax | LTcP | PLS | 82.2 | 83.8
10 | Decision tree | LTcP | PLS | 79.6 | 85.4
15 | SVM | LTcP | PLS | 87.8 | 88.9
15 | Naïve Bayes | LTcP | PLS | 85.4 | 87.9
15 | Softmax | LTcP | PLS | 87.3 | 85.4
15 | Decision tree | LTcP | PLS | 84.8 | 86.6
20 | SVM | LTcP | PLS | 88.6 | 91.7
20 | Naïve Bayes | LTcP | PLS | 88.5 | 92.6
20 | Softmax | LTcP | PLS | 89.8 | 97.8
20 | Decision tree | LTcP | PLS | 87.5 | 99.3
25 | SVM | LTcP | PLS | 89.7 | 92.4
25 | Naïve Bayes | LTcP | PLS | 86.7 | 98.7
25 | Softmax | LTcP | PLS | 88.4 | 101.4
25 | Decision tree | LTcP | PLS | 87.5 | 109
30 | SVM | LTcP | PLS | 91.4 | 92.6
30 | Naïve Bayes | LTcP | PLS | 90.2 | 99.7
30 | Softmax | LTcP | PLS | 87.5 | 105.4
30 | Decision tree | LTcP | PLS | 89.4 | 112.5
35 | SVM | LTcP | PLS | 91.4 | 94.4
35 | Naïve Bayes | LTcP | PLS | 90.2 | 98.7
35 | Softmax | LTcP | PLS | 90.3 | 106.1
35 | Decision tree | LTcP | PLS | 90.5 | 120.3

Figure 11:

Segmented results on the BraTS19 dataset.

The fused features gradually produced stable results with the SVM classifier as the number of features increased. The other classifiers showed variations with different numbers of features, but with SVM, the accuracy increased steadily and remained the same after 30 features. The range of features considered here is 28-35; hence, this is the minimum number of features needed. All computations were performed in MATLAB (version 2021b) on a system with 16 GB of RAM and a 256 GPU configuration. Table 2 lists the accuracy and computational speed of the proposed method for BraTS2019, and Figure 10 presents the corresponding graph.

Table 3 lists the accuracy and computational speed of the proposed method when it is used for BraTS2018, and Figure 12 presents the corresponding graph (Figure 13).

Table 3:

Comparison of results based on different classifiers using fused features of the image (BraTS2018)

Number of features used | Technique | Feature extraction | Fusion method | Accuracy (%) | Computational time (s)
5 | SVM | LTcP | PLS | 81.5 | 82.3
5 | Naïve Bayes | LTcP | PLS | 81.7 | 81.9
5 | Softmax | LTcP | PLS | 82.2 | 83.8
5 | Decision tree | LTcP | PLS | 81.8 | 82.6
10 | SVM | LTcP | PLS | 83.5 | 85.3
10 | Naïve Bayes | LTcP | PLS | 84.3 | 82.1
10 | Softmax | LTcP | PLS | 80.5 | 83.8
10 | Decision tree | LTcP | PLS | 81.6 | 85.4
15 | SVM | LTcP | PLS | 84.7 | 88.9
15 | Naïve Bayes | LTcP | PLS | 84.6 | 87.9
15 | Softmax | LTcP | PLS | 82.3 | 85.4
15 | Decision tree | LTcP | PLS | 86.5 | 86.6
20 | SVM | LTcP | PLS | 85.6 | 91.7
20 | Naïve Bayes | LTcP | PLS | 84.6 | 92.6
20 | Softmax | LTcP | PLS | 81.3 | 97.8
20 | Decision tree | LTcP | PLS | 82.5 | 99.3
25 | SVM | LTcP | PLS | 85.2 | 92.4
25 | Naïve Bayes | LTcP | PLS | 83.5 | 98.7
25 | Softmax | LTcP | PLS | 81.2 | 101.4
25 | Decision tree | LTcP | PLS | 83.7 | 109
30 | SVM | LTcP | PLS | 85.2 | 92.6
30 | Naïve Bayes | LTcP | PLS | 85.6 | 99.4
30 | Softmax | LTcP | PLS | 81.2 | 104.8
30 | Decision tree | LTcP | PLS | 83.6 | 111.3
35 | SVM | LTcP | PLS | 85.8 | 94.4
35 | Naïve Bayes | LTcP | PLS | 84.6 | 98.7
35 | Softmax | LTcP | PLS | 85.8 | 105.3
35 | Decision tree | LTcP | PLS | 84.6 | 118.2

PLS, partial least squares.

Figure 12:

Accuracy chart of the proposed method with different classifiers for BraTS2018 with 35 fused features via PLS. PLS, partial least squares.

Figure 13:

ROC curve for the proposed method on BraTS2018.

The ROC curve graph in Figure 13 depicts how well the proposed method works for different thresholds.

Brain tumor classification approaches have attracted increasing attention because of the increasing number of cases of this disease. The proposed method classifies the images based on their features in a very efficient manner. The number of parameters required for accurate classification is reduced drastically, and the time required for classifying images with 35 fused features is 94.4 s. Thus, this approach can be used for images or methods in which the number of acquired features is very low. We intend to further reduce this time, increase accuracy, and optimize the total duration of the method for complete classification. The method can also be extended to color images in subsequent studies.

Table 4 presents a comparison among different approaches with respect to recall and F1-score on the BraTS2018 dataset. It can be observed that the proposed approach works and integrates well with existing classifiers such as SVM, naïve Bayes, Softmax, and decision tree (Figure 14).

Table 4:

Comparison among approaches based on recall and F1-score on the BraTS2018 dataset

Approach | Recall | F1 score
SVM | 94.60 | 93.20
Naïve Bayes | 89.45 | 88.80
Softmax | 94.45 | 92.93
Decision tree | 89.34 | 89.35
Proposed approach (SVM as classifier) | 95.90 | 96.80
Proposed approach (naïve Bayes as classifier) | 94.03 | 94.50
Proposed approach (Softmax as classifier) | 95.10 | 96.95
Proposed approach (decision tree as classifier) | 93.20 | 94.10
Figure 14:

Segmented results on the BraTS18 dataset.

V.
Conclusion

The proposed LTcP method not only yields the highest number of features but also yields the highest classification accuracy when the fused features are combined with different classifiers. It helps extract the maximum number of features from the image. After extracting the best possible features, we fuse the selected optimal features via the PLS approach, which in turn reduces the number of features required for accurate classification. Thus, this approach is beneficial for instances where few features are present. We observed that this approach could classify images with very few features, far fewer than the total number of features gained from LTcP. The fused features were fed to several classifiers, namely Softmax, naïve Bayes, decision tree, and SVM, and SVM yielded the best accuracy. The approach combines three methods, namely, simple linear iterative clustering, the proposed LTcP, and PLS, and the final output was satisfactory. Future research remains open on optimizing the time required for complete classification and on extending the method to color images. The proposed approach can classify images with the 42 features gained from LTcP, which is better than existing techniques. The fused features fed to the SVM classifier yielded an accuracy of 91.4%, and the computational time for 35 features was 94.4 s, which demonstrates the fast computational speed of the proposed method. The recall and F1-score for the proposed method were observed to be 95.90 and 96.80, respectively, indicating that the approach identifies a high proportion of true positives. However, the approach has some limitations: its computational complexity makes it difficult to adapt to other image types, such as color images and geo-satellite images, which require complex preprocessing; integrating such images would increase the overall complexity of the method.


© 2025 Ravi Prakash Chaturvedi, Annu Mishra, Mohd Dilshad Ansari, Ajay Shriram Kushwaha, Prakhar Mittal, Rajneesh Kumar Singh, published by Professor Subhas Chandra Mukhopadhyay
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.