
DEEP-BTS: Deep Learning based Brain Tissue Segmentation using ResU-Net Model

Open Access | Dec 2025

Abstract

Brain tissue segmentation (BTS) in MRI is essential for diagnosing neurological disorders, mapping brain structures, and analyzing disease progression. A major challenge in BTS is intensity inhomogeneity, where non-uniform illumination in MRI scans causes intensity variations that make it difficult to accurately differentiate gray matter (GM), cerebrospinal fluid (CSF), and white matter (WM). To address these challenges, a novel deep learning-based DEEP-BTS model is proposed for BTS of brain MRI images. The input images are taken from the BrainWeb dataset and first undergo skull stripping to remove non-brain regions. After skull stripping, the images are pre-processed with a contrast stretching adaptive trilateral filter (CSATF) to improve image quality and reduce noise artifacts, and data augmentation is applied to increase data diversity and ensure robust model training. The pre-processed images are then fed into a ResU-Net, which segments the brain tissues into CSF, GM, and WM. The proposed DEEP-BTS model is evaluated in terms of accuracy (AC), specificity (SP), recall (RE), precision (PR), F1 score (F1), Jaccard index (JI), and Dice index (DI). The proposed DEEP-BTS achieved an overall segmentation accuracy of 98.91 % for BTS. The proposed ResU-Net outperformed the Fuzzy C-Means, M-Net, and U-Net methods, achieving per-tissue accuracies of 98.33 % for CSF, 98.04 % for GM, and 99.15 % for WM, confirming the improved segmentation quality.
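For reference, the Dice and Jaccard indices named in the abstract reduce to simple set overlaps between the predicted and ground-truth masks for each tissue class. The following is a minimal NumPy sketch of that per-tissue evaluation, not the authors' implementation; the label encoding (0: background, 1: CSF, 2: GM, 3: WM) and the slice size are illustrative assumptions, as the paper's abstract does not specify them.

import numpy as np

def dice_index(pred: np.ndarray, target: np.ndarray) -> float:
    # Dice index: 2|A ∩ B| / (|A| + |B|) for binary masks.
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * intersection / denom if denom else 1.0

def jaccard_index(pred: np.ndarray, target: np.ndarray) -> float:
    # Jaccard index: |A ∩ B| / |A ∪ B| for binary masks.
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return intersection / union if union else 1.0

# Stand-in label maps; in practice these would be the ResU-Net output
# and the BrainWeb ground truth for one slice.
seg = np.random.randint(0, 4, size=(181, 217))
gt = np.random.randint(0, 4, size=(181, 217))
for label, name in [(1, "CSF"), (2, "GM"), (3, "WM")]:
    d = dice_index(seg == label, gt == label)
    j = jaccard_index(seg == label, gt == label)
    print(f"{name}: Dice={d:.4f}, Jaccard={j:.4f}")

Accuracy, specificity, recall, precision, and F1 follow analogously from the per-class confusion counts (true/false positives and negatives) of the same binary masks.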

Language: English
Page range: 371–379
Submitted on: Feb 21, 2025 | Accepted on: Sep 8, 2025 | Published on: Dec 23, 2025
In partnership with: Paradigm Publishing Services
Publication frequency: Volume open

© 2025 P Sivaprakash, J Banumathi, Ashis Kumar Mishra, P Jayapriya, published by Slovak Academy of Sciences, Institute of Measurement Science
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.