
Smart IoT based health care environment for an effective information sharing using Resource Constraint LLM Models

Open Access | February 2025


1.
Introduction

The advent of IoT technology has significantly reshaped the healthcare landscape by enabling continuous monitoring, real-time diagnosis, and intelligent decision-making. IoT devices, like wearable sensors and connected medical devices, generate large volumes of data that need to be processed and shared effectively among stakeholders, including healthcare providers, patients, and caregivers [1]. However, the successful integration of IoT systems in healthcare requires a seamless mechanism for data analysis and information sharing that is both efficient and reliable. Resource-constrained environments, such as portable or low-power IoT devices, add complexity to this challenge, necessitating innovative approaches to manage and utilize healthcare data effectively [2].

One of the major barriers to efficient information sharing in IoT-enabled healthcare systems (HS) is the computational demand of large language models (LLMs). Traditional LLMs, while powerful in natural language processing (NLP) tasks, require substantial resources in terms of memory, processing power, and energy, making them impractical for deployment on IoT devices [3, 4]. This limitation hampers the ability to analyse real-time data from IoT devices and derive meaningful insights. Additionally, healthcare data often comes with privacy concerns and time-sensitive requirements, further complicating the integration of NLP models in IoT-based systems.

To overcome these limitations, the proposed research focuses on leveraging a lightweight version of the BERT approach for resource-constrained IoT environments. BERT, known for its efficiency in NLP tasks, was optimized for deployment in low-resource settings to facilitate real-time data analysis and communication [5-8]. By tailoring BERT to meet the unique requirements of IoT-enabled HS, the proposed solution ensures accurate and timely extraction of critical information while conserving computational resources.

A significant component of the research is the use of the MIMIC-III clinical dataset, which contains a rich collection of anonymized patient data from intensive care units (ICUs). The dataset serves as a benchmark to validate the performance of the optimized BERT approach in processing healthcare information [9]. The IoT-based Health Care environment incorporates this model to facilitate seamless information sharing, enabling real-time alerts, diagnosis support, and personalized patient care [10-12]. The integration of BERT in this context showcases its potential to bridge the gap between advanced NLP techniques and resource-constrained IoT settings.

The proposed Smart IoT-based Health Care Environment addresses the growing need for intelligent, resource-efficient systems in modern healthcare. By combining the capabilities of IoT with a tailored BERT model, this system provides an effective solution for real-time information sharing and decision support [13, 14]. Furthermore, the system ensures compliance with healthcare standards and data privacy regulations, making it a viable option for practical implementation in clinical settings [15].

In conclusion, this research underscores the potential of AI-driven IoT systems in healthcare. The optimized BERT model not only addresses the challenges of resource-constrained environments but also enhances the overall productivity of IoT-enabled HS. By providing a practical solution for intelligent information sharing, the proposed system paves the way for improved patient outcomes and operational efficiency in healthcare environments.

1.1
Contribution of the Research
  • The research proposes a Smart IoT-based Health Care Environment utilizing a lightweight BERT model, optimized for resource-constrained environments, to enhance real-time healthcare information sharing and processing.

  • The optimized BERT model is systematically compared against traditional LLMs to demonstrate its efficiency in terms of computational resource utilization and accuracy in healthcare-related NLP tasks.

  • Extensive experiments are conducted using the MIMIC-III clinical dataset, a standard benchmark for healthcare applications, with performance metrics like latency, memory usage, accuracy, precision, and recall evaluated to validate the proposed system's effectiveness in IoT-enabled healthcare environments.

1.2
Structure of the paper

This manuscript is structured as follows: Section 2 reviews related work and relevant studies conducted by various researchers in the domain of IoT-based HS and resource-constrained language models. Section 3 presents an overview of the BERT architecture, its optimization techniques for resource-limited environments, and the integration process within the proposed Smart IoT-based Health Care Environment. Section 4 describes the MIMIC-III clinical dataset, explains the experimental setup, and provides a thorough assessment of the system's performance, including efficiency and accuracy metrics. Finally, Section 5 wraps up the study by summarizing key findings and discussing future directions for enhancing IoT-enabled HSs with optimized AI models.

2.
Related Work

Xu et al. (2024) [16] proposed an effective system for predicting resource allocation and patient health status in the smart healthcare industry using IoT and Edge computing. The study highlights the increasing reliance on IoT in healthcare, which brings challenges such as high latency and energy consumption due to the extensive data transfer and processing requirements. The proposed approach effectively reduced the makespan, enhanced average resource utilization, and improved load balancing, ensuring accurate real-time health status predictions. However, the system's drawback lies in its dependence on computationally intensive methods like DBN-LSTM and frog leap optimization, which may limit scalability and efficiency in resource-constrained edge environments. Additionally, integrating such advanced techniques into existing healthcare infrastructures may pose implementation challenges.

Chinbat et al. (2024) [17] conducted a comprehensive study on lightweight cryptography (LWC) methods for securing patient data in Internet of Things (IoT) healthcare environments. Machine learning (ML) models were utilized to assess the algorithms using performance metrics. The research identified RECTANGLE as the most efficient algorithm due to its speed, simplicity, and adaptability, making it suitable for IoT in healthcare. However, the study's primary drawback is its limitation to experimental settings on a Raspberry Pi 3 microcontroller, which may not fully represent performance across diverse IoT devices in real-world healthcare scenarios.

Rahman et al. (2024) [18] present an exhaustive survey on ML and deep learning (DL) in the healthcare domain, highlighting recent advances and applications in intelligent HS and emphasizing the promising potential of ML and DL to revolutionize healthcare. The research consolidated advancements in ML-healthcare, DL-healthcare, and their combined applications while also identifying emerging opportunities and future directions. However, a notable drawback of their study lies in the limited discussion of real-world implementation issues, such as ethical concerns, data privacy issues, and the lack of adequate infrastructure in underdeveloped regions. These aspects remain critical for practical applications and widespread adoption of ML and DL in HSs.

Islam et al. (2023) [19] proposed a DL-based IoT system for remote health monitoring and early recognition of health issues in real time. The system utilizes IoT sensors to measure blood oxygen levels, heart rate, and body temperature. The gathered data is transmitted via the MQTT protocol to a server, where a pre-trained deep learning model, incorporating a convolutional neural network with an attention layer, analyzes the data. Additionally, the system monitors heart rate and oxygen levels, alerting healthcare professionals when critical abnormalities are detected and facilitating connection with the nearest doctor. However, the primary drawback of this system is its reliance on stable internet connectivity for real-time monitoring and response, which may not be feasible in rural or remote areas. Furthermore, the classification accuracy of the system might be impacted by sensor errors or noisy data, which can lead to misdiagnoses.

Munnangi et al. (2023) [20] conducted a review of DL in the healthcare sector, focusing on the strengths, weaknesses, and research challenges associated with these methods. They propose a deep learning-based technique that addresses the vanishing gradient problem commonly encountered in Recurrent Neural Networks (RNNs). This approach integrates temporal and spatial factors to enhance accuracy and reduce the time required for detecting abnormalities. The recommended Moran Autocorrelation and Regression-based Elman Recurrent Neural Network (MAR-ERNN) shows promising outcomes, with a 95% accuracy improvement and an 18% reduction in execution time. However, the study acknowledges that while the MAR-ERNN method improves performance, the practical application of such techniques still faces challenges, such as real-time implementation and the complexity of integrating these methods into existing HSs.

Mazin Alshamrani et al. (2022) [21] conducted a comprehensive survey on the integration of IoT and AI in remote healthcare monitoring (RHM) systems, emphasizing the transformation of healthcare within smart cities. The study highlights the role of IoT in enhancing healthcare efficiency, reducing costs, and improving patient care through the use of AI and ML for clinical decision support systems. These systems utilize various IoT-based sensors to monitor vital parameters such as body temperature. The paper also evaluates the most relevant health IoT (H-IoT) applications and examines the underlying technologies, devices, systems, and models. Despite its strengths, the paper acknowledges several limitations, including the complexity of integrating IoT and AI approaches in real-world healthcare, the potential for data privacy issues, and the challenge of ensuring reliable connectivity and interoperability among various IoT devices. These drawbacks highlight the need for continued research to address the practical implementation of these systems in diverse healthcare environments.

Sujith A.V.L.N. et al. (2022) [22] conducted a systematic review on smart health monitoring (SHM) using DL and AI, highlighting its role in addressing the challenges of healthcare in the context of rapidly evolving technology and emerging diseases. With the increasing complexity of maintaining a healthy lifestyle amid busy work schedules, SHM systems have emerged as a promising solution. These systems have proven to be faster, more cost-effective, and reliable compared to traditional healthcare approaches. Moreover, the use of blockchain frameworks has enhanced data security and privacy, ensuring that patients' confidential information remains protected. The incorporation of DL and ML techniques has further advanced SHM by enabling the early detection of chronic diseases, contributing to preventive healthcare and improved fatality management. Cloud computing and storage have further optimized these systems for real-time, cost-effective services. However, the review also points out several challenges, including issues related to the integration of new technologies and the scalability of solutions.

Thilagam et al. (2022) [23] present a secure IoT-based HS integrating DL approaches for privacy preservation and data analytics. The system utilizes wearable IoT devices to collect health data, which is stored in the cloud, where it is vulnerable to privacy leakage and unauthorized access. To mitigate these security concerns, the authors introduced a DL-based approach utilizing CNNs to examine health data while protecting sensitive information. The system incorporates a secure access control mechanism based on user attributes, ensuring that trust and privacy are maintained. The recommended CNN classifier attained an impressive 95% accuracy, improving further as the training set size increased. Data augmentation was also explored, although the system performed better without it. The model demonstrated around 98% accuracy with an increased user count, indicating its robustness and effectiveness in reducing privacy leakage and ensuring data integrity. However, the reliance on cloud storage poses ongoing risks related to unauthorized access, and the scalability of the system may be challenged as user counts and data volume grow, requiring further improvements in security mechanisms.

Zobaed et al. (2021) [24] discuss the growing impact of the IoT in healthcare, highlighting its application in real-time monitoring of human activities and patient records. The authors emphasize the advantages of IoT, such as accessibility, adaptability, portability, and energy efficiency, which have made it integral to various domains like wearable devices, smart cities, and healthcare. Specifically, in healthcare, IoT systems help mitigate the limitations of traditional healthcare facilities. The integration of DL into IoT HSs allows for handling complex data without requiring feature engineering, thus enhancing system efficiency. However, the paper also identifies several challenges, including the need for high-quality data, real-time processing, and the complexity of implementing DL models on IoT devices with minimal computational resources. These drawbacks present significant barriers to the widespread adoption of IoT-based HSs and deep learning models in healthcare applications.

A. I. Newaz et al. (2019) [25] introduced HealthGuard, an ML-based security framework designed for Smart HSs (SHS). The integration of IoT and pervasive computing in medical devices has made HSs smarter, allowing for continuous monitoring and detection of critical conditions through wearable and implantable medical devices. However, these advancements bring significant security concerns, as attackers can manipulate vital signs or disrupt the system's normal functions. HealthGuard analyses vital signs data from various connected devices and correlates changes in the patient's body functions to differentiate between benign and malicious events. The framework was trained on data from eight different smart medical devices, covering twelve benign events and three types of malicious threats. Despite its promising results, one limitation of HealthGuard is its dependency on the quality and diversity of the data utilized for training; if the training data does not sufficiently cover the variety of potential malicious activities or the diversity of devices, the system may not perform as effectively in real-world scenarios.

3.
Proposed Methodology

The proposed Smart IoT-based Health Care Environment begins with Data Preprocessing, including data normalization and encoding of healthcare information and tokenization to prepare it for analysis. The lightweight BERT model is optimized for resource-constrained IoT environments to efficiently process and analyse real-time patient data. The IoT devices collect continuous health data, and the optimized BERT model captures essential patterns and relationships in the data, ensuring accurate and timely information sharing. Finally, the Output Layer classifies and shares critical healthcare information, achieving high efficiency and accuracy for real-time decision support in IoT-enabled HSs.

Figure 1:

Architecture for Proposed Model

3.1
Materials and Methods

For the proposed Smart IoT-based Health Care Environment, we utilized the MIMIC-III (Medical Information Mart for Intensive Care) dataset. This publicly available dataset comprises comprehensive, de-identified health data from over 40,000 ICU admissions. It contains a wide range of clinical information, including vital signs, laboratory test results, medications, diagnoses, and patient demographics. Furthermore, it offers time-series data, allowing for the analysis of patient conditions over time. The MIMIC-III dataset serves as a valuable resource for training and validating ML approaches in healthcare applications, particularly for real-time decision support and anomaly detection in IoT-enabled health systems. The dataset was utilized to examine the efficacy and accuracy of the optimized BERT model in processing and sharing critical healthcare information.

3.2
Data Preprocessing

The dataset is first cleaned by removing any missing or incomplete records. Incomplete data entries are either imputed or discarded based on their significance to the analysis. Numerical values, such as vital signs or sensor readings, are normalized to ensure uniformity across features.

Categorical labels in the dataset (e.g., diagnosis types, health conditions) are encoded into numerical values. Label encoding is used to convert non-numeric labels into a format suitable for machine learning algorithms.

All text is converted to lowercase to maintain consistency. Common words that do not add meaningful information (e.g., "the", "and", "is") are removed. Any non-alphanumeric characters, such as punctuation marks, are discarded.

The processed text is tokenized, partitioning it into smaller units (tokens) such as words or subwords. Tokenization also involves converting words into corresponding tokens using BERT's pre-trained tokenizer, which further converts these tokens into numerical IDs.

Since BERT requires a fixed input length, the tokenized text is padded to a maximum length (e.g., 512 tokens) or truncated if the text exceeds this length. This ensures that all inputs to the model are of uniform size.
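To make this pipeline concrete, the sketch below assumes the Hugging Face transformers tokenizer for the bert-base-uncased checkpoint; the helper names, the tiny stop-word list, and the example note are illustrative and not the paper's exact implementation.

```python
# Minimal preprocessing sketch (assumed tooling: Hugging Face transformers + TensorFlow).
import re
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
STOP_WORDS = {"the", "and", "is"}  # tiny illustrative stop-word list

def clean_clinical_text(text: str) -> str:
    """Lowercase, drop non-alphanumeric characters, and remove common stop words."""
    text = re.sub(r"[^a-z0-9\s]", " ", text.lower())
    return " ".join(w for w in text.split() if w not in STOP_WORDS)

def encode_text(text: str):
    """Tokenize with BERT's tokenizer and pad/truncate to the fixed 512-token length."""
    return tokenizer(
        clean_clinical_text(text),
        max_length=512,
        padding="max_length",
        truncation=True,
        return_tensors="tf",
    )

encoded = encode_text("Patient presents with elevated heart rate and low SpO2.")
print(encoded["input_ids"].shape)  # (1, 512)
```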

3.3
Proposed Model
3.3.1
Bidirectional Encoder Representations from Transformers (BERT)

BERT is a transformer-based model (Vaswani et al., 2017) designed to handle natural language understanding (NLU) tasks, like question answering, sentiment analysis, and named entity recognition. Unlike traditional unidirectional models, BERT processes text bidirectionally, allowing it to capture context from both the left and right of each word in a sentence. This unique feature gives BERT a significant advantage in understanding the meaning of words in context.

3.3.1.1.
Transformer Architecture

The Transformer relies on self-attention mechanisms that allow it to weigh the significance of each word in a sequence relative to the others. The architecture consists of an encoder-decoder structure, but BERT only utilizes the encoder part for tasks related to understanding the input text.

The encoder is composed of multiple layers, each of which has two primary components:

  • Self-attention mechanism:

    This allows the model to attend to varied areas of the input sequence, adjusting the weights based on their relevance.

  • Feed-forward neural network:

    This processes and refines the results from the self-attention mechanism, producing the final representation of the input sequence.

  • Mathematically, the self-attention mechanism computes the attention score:

    Attention(Q, K, V), computed from three matrices, the Query Q, the Key K, and the Value V, as follows:

    $$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{T}}{\sqrt{d_{k}}}\right)V \tag{1}$$

    where d_k is the dimensionality of the key vectors.

The softmax function normalizes the attention scores, ensuring they sum to 1.
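As a worked illustration of Equation (1), the NumPy sketch below computes scaled dot-product attention for a toy batch; the shapes and values are arbitrary.

```python
# Scaled dot-product attention from Equation (1), sketched in NumPy.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.transpose(0, 2, 1) / np.sqrt(d_k)  # (batch, seq, seq)
    weights = softmax(scores, axis=-1)                # each row sums to 1
    return weights @ V                                # (batch, seq, d_v)

rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(1, 4, 8))  # 1 sequence, 4 tokens, dimension 8
print(scaled_dot_product_attention(Q, K, V).shape)  # (1, 4, 8)
```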

3.3.1.2.
Bidirectional Contextualization

BERT's key innovation lies in its bidirectional approach. Traditional models process text in a left-to-right or right-to-left fashion, meaning they only use one context to predict the next word. In contrast, BERT uses both left and right context to understand the meaning of every word in a sentence simultaneously. This is attained by training the model with a masked language model (MLM) objective.

In MLM, a certain proportion of the input tokens is replaced at random with a special [MASK] token. The model is then trained to predict the original token that was replaced, so BERT learns the relationships among words in both directions (left and right). The prediction of the masked token x_i is expressed as:

$$P(x_i \mid x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_n) \approx P(x_i \mid [\mathrm{MASK}], \ldots) \tag{2}$$

Here, the model learns to predict the missing token using the surrounding context, allowing it to build a deep understanding of word meanings based on both preceding and succeeding words.
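The sketch below illustrates only the masking step of this objective: roughly 15% of the non-special tokens are hidden and their original IDs become the prediction targets. The 15% rate and the example sentence are assumptions for illustration.

```python
# Illustrative MLM masking: hide ~15% of non-special tokens, keep the originals as targets.
import numpy as np
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
ids = tokenizer("blood pressure remains stable overnight", return_tensors="np")["input_ids"][0]

rng = np.random.default_rng(42)
maskable = [i for i, t in enumerate(ids) if t not in tokenizer.all_special_ids]
picked = rng.choice(maskable, size=max(1, int(0.15 * len(maskable))), replace=False)

labels = np.full_like(ids, -100)        # -100 marks positions ignored by the MLM loss
labels[picked] = ids[picked]            # the model must recover these original tokens
ids[picked] = tokenizer.mask_token_id   # corrupt the input at the chosen positions

print(tokenizer.decode(ids))            # e.g. "[CLS] blood [MASK] remains stable overnight [SEP]"
```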

3.3.1.3
Pre-training and Fine-tuning

BERT follows a two-step approach: pre-training and fine-tuning.

Pre-training: In this stage, BERT is trained on a large corpus (like Wikipedia and BooksCorpus) using two tasks:

  • Masked Language Modeling (MLM):

    As mentioned, this involves masking some tokens and training the model to predict them.

  • Next Sentence Prediction (NSP):

    BERT is trained to predict whether one sentence follows another in the text. This helps the model understand sentence-level relationships.

    During pre-training, BERT learns generic language representations, which can be transferred to a variety of downstream tasks. Pre-training is computationally expensive and takes a significant amount of time and resources.

  • Fine-tuning:

    After pre-training, BERT is fine-tuned on task-specific datasets. The model's parameters are adjusted to optimize performance for a particular task, such as sentiment analysis or medical text classification. Fine-tuning is relatively quick compared to pre-training, as the model has already learned general language patterns.
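As a hedged sketch of this fine-tuning step, the snippet below attaches a two-class classification head to a pre-trained BERT checkpoint and trains it with Keras; the example texts, the label meanings (0 = normal, 1 = abnormal), and the hyperparameters are placeholders rather than the paper's exact configuration.

```python
# Fine-tuning sketch: BERT + classification head trained with Keras (illustrative data).
import tensorflow as tf
from transformers import BertTokenizer, TFBertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = TFBertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

texts = ["patient stable, vitals within normal range",
         "acute hypoxia detected, immediate attention required"]
labels = [0, 1]  # 0 = normal, 1 = abnormal (placeholder labels)

enc = tokenizer(texts, padding="max_length", truncation=True, max_length=128, return_tensors="tf")
dataset = tf.data.Dataset.from_tensor_slices((dict(enc), labels)).batch(2)

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.fit(dataset, epochs=1)  # in practice: many labeled notes and several epochs
```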

3.3.1.4.
BERT Model Architecture

The BERT model consists of a stack of Transformer encoder layers. Every encoder layer has two main subcomponents:

A multi-head self-attention mechanism that allows the model to focus on different parts of the input sequence.

A position-wise feed-forward network that processes the attention outputs.

Every subcomponent is followed by a layer normalization and residual connection to ensure stable training and avoid vanishing gradients. The input embeddings are the sum of token embeddings, segment embeddings (to distinguish between different sentences), and positional embeddings (to capture word order).

The overall architecture of BERT can be represented as:

$$\mathrm{BERT}_{output} = \mathrm{Transformer}_{layers}\left(E(\mathrm{Input})\right) \tag{3}$$

Where E(Input) is the embedding of the input tokens, and the Transformer layers process the embeddings through multiple layers of self-attention and feed-forward networks.
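A small Keras sketch of this input representation is given below; the vocabulary size, hidden size, and example token IDs mirror BERT-base but are only illustrative.

```python
# Input representation sketch: token + segment + position embeddings, then layer normalization.
import tensorflow as tf

vocab_size, max_len, hidden = 30522, 512, 768    # BERT-base sizes (illustrative)
token_emb = tf.keras.layers.Embedding(vocab_size, hidden)
segment_emb = tf.keras.layers.Embedding(2, hidden)         # sentence A vs. sentence B
position_emb = tf.keras.layers.Embedding(max_len, hidden)  # learned positions
layer_norm = tf.keras.layers.LayerNormalization(epsilon=1e-12)

def embed(input_ids, segment_ids):
    positions = tf.range(tf.shape(input_ids)[1])
    x = token_emb(input_ids) + segment_emb(segment_ids) + position_emb(positions)
    return layer_norm(x)

ids = tf.constant([[101, 7592, 2088, 102]])  # [CLS] hello world [SEP] (illustrative IDs)
print(embed(ids, tf.zeros_like(ids)).shape)  # (1, 4, 768)
```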

3.3.1.5.
Visualizing BERT Layers

Each layer in BERT is a block of the Transformer architecture, and each block performs two primary operations:

  • Self-Attention: Allows the model to utilize the entire input sequence at once, weighing the importance of each word relative to others.

  • Feed-Forward Network: After attention scores are calculated, a feed-forward neural network is applied to the attention output.

The layers are stacked, with each layer refining the understanding of the text. The number of layers in BERT can vary, with models like BERT-base having 12 layers and BERT-large having 24 layers. The model's architecture allows it to build highly contextualized word representations.

3.3.1.6.
Applications and Performance

BERT has revolutionized the field of NLP by attaining state-of-the-art performance on numerous benchmark tasks, like the GLUE (General Language Understanding Evaluation) benchmark, question answering tasks (like SQuAD), and more. By pre-training on a massive corpus of text and then fine-tuning for specific tasks, BERT can generalize well across many NLP applications.

The model's bidirectional nature allows it to capture rich contextual relationships, making it particularly effective for tasks that require understanding ambiguous language or long-range dependencies, like medical text classification, sentiment analysis, and named entity recognition.

3.3.1.7.
Limitations and Improvements

Despite its success, BERT has some limitations, primarily its large computational requirements. Training BERT from scratch requires substantial computational resources, which is why models like DistilBERT and ALBERT have been developed as more efficient variants. These models aim to reduce BERT's size while maintaining performance.

In the context of IoT-based HSs, lightweight versions of BERT, like TinyBERT, are useful as they can be deployed on resource-constrained devices while still providing high-quality natural language processing capabilities.
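For instance, a distilled checkpoint can be loaded as a drop-in replacement for the full model; the snippet below uses DistilBERT, mentioned above, purely to illustrate the lighter footprint and is not the paper's chosen variant.

```python
# Loading a distilled BERT variant as a lighter-weight classifier (illustrative checkpoint).
from transformers import DistilBertTokenizer, TFDistilBertForSequenceClassification

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
model = TFDistilBertForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)
print(model.num_parameters())  # roughly 40% fewer parameters than bert-base
```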

3.3.2
Resource-Constrained BERT for Healthcare IoT

The proposed model leverages a Smart IoT-based Health Care Environment, integrating IoT devices with a lightweight version of BERT optimized for resource-constrained environments. This hybrid approach ensures real-time data processing and effective information sharing for healthcare applications. By combining the strengths of IoT-based monitoring and AI-powered natural language processing, the model delivers a comprehensive solution for HSs that require timely and accurate insights.

The system processes incoming health data from IoT devices, including normalization of sensor readings and encoding of categorical labels (such as patient conditions) to ensure compatibility with the model. This step also includes tokenization for medical text data (e.g., clinical notes), ensuring that both structured and unstructured data can be seamlessly processed by the BERT model.

The BERT model is fine-tuned to run efficiently on IoT devices, reducing computational overhead while maintaining high accuracy in processing medical text data. This includes clinical notes or diagnosis reports that contain valuable information for healthcare professionals. The model's optimization for resource-constrained environments allows it to handle both structured sensor data (such as vital signs) and unstructured medical text with minimal latency.
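The paper does not specify the exact optimization technique used to reach this footprint; one plausible route, sketched below under that assumption, is to export the fine-tuned Keras model to TensorFlow Lite with dynamic-range quantization so it can run on low-power edge hardware. Depending on the model, a fixed serving signature may also be required before conversion.

```python
# Assumed deployment path (not stated in the paper): Keras model -> quantized TFLite artifact.
import tensorflow as tf

def export_tflite(keras_model, out_path="bert_healthcare.tflite"):
    converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enables dynamic-range quantization
    tflite_bytes = converter.convert()
    with open(out_path, "wb") as f:
        f.write(tflite_bytes)
    return out_path

# export_tflite(model)  # 'model' would be the fine-tuned classifier from the previous step
```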

By integrating BERT with real-time data from IoT sensors, the model provides immediate insights for healthcare providers, enabling timely responses to health anomalies, personalized care, and early diagnosis. This real-time capability ensures that healthcare professionals are alerted to potential issues as soon as they arise, facilitating faster intervention and improved patient outcomes.

The lightweight design of the model ensures it can scale to various IoT devices without compromising performance, making it ideal for healthcare environments with limited computational resources. Whether deployed in hospitals, clinics, or home care settings, the model adapts to the needs of different healthcare environments, maintaining high efficiency across a range of devices.

The model's output layer classifies health-related data into categories such as normal or abnormal, providing actionable insights for healthcare professionals in real-time. These insights support decision-making, enabling healthcare providers to quickly assess a patient's condition and take necessary actions based on the model's predictions.
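A minimal inference sketch for this output step is shown below; the probability threshold and the example note are assumptions for illustration.

```python
# Classifying an incoming note as normal (0) or abnormal (1) with the fine-tuned model.
import tensorflow as tf

def classify_note(model, tokenizer, text, threshold=0.5):
    enc = tokenizer(text, truncation=True, max_length=128, return_tensors="tf")
    probs = tf.nn.softmax(model(dict(enc)).logits, axis=-1).numpy()[0]
    return ("abnormal" if probs[1] >= threshold else "normal"), float(probs[1])

# label, p_abnormal = classify_note(model, tokenizer,
#                                   "irregular heartbeat detected by wearable ECG sensor")
```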

Additionally, the integration of this model within the healthcare ecosystem enhances the potential for real-time health monitoring and improved patient care. By combining advanced AI techniques like BERT with the scalability and flexibility of IoT, this system bridges the gap between cutting-edge technology and practical healthcare applications. With its focus on efficiency and resource optimization, the model is well-suited for widespread deployment in diverse healthcare settings, providing a powerful tool for enhancing the quality of care and ensuring patient safety. This approach allows for efficient, scalable, and context-aware healthcare information sharing, making it an essential tool for modern HSs.

4.
Results and Discussions

This section evaluates the proposed Smart IoT-based Health Care Environment within resource-constrained learning frameworks, focusing on key performance metrics and assessing the effectiveness of the optimized BERT model in processing healthcare data. The evaluation is carried out using the MIMIC-III clinical dataset, highlighting the model's efficiency, accuracy, and real-time information sharing capabilities for IoT-enabled HSs.

4.1
Implementation Details

The proposed model was developed using Python 3.9, leveraging libraries such as TensorFlow, Keras, Matplotlib, NumPy, Pandas, and Seaborn for evaluation and visualization of the healthcare information processing system. The experiments were executed on a Tesla GPU, ensuring efficient execution and performance analysis of the Smart IoT-based Health Care Environment. This setup facilitated real-time processing of healthcare data, allowing for accurate and timely predictions using the optimized BERT model in a resource-constrained IoT environment.

4.2
Performance Metrics

To evaluate the effectiveness of the recommended approach, several performance metrics are utilized, and the outcomes are compared with other advanced deep learning models. These metrics, which provide a holistic view of the model's capabilities, include accuracy, precision, recall, specificity, and the F1-score. Each metric offers significant insights into the model's ability to correctly classify data and handle different types of errors.

Table 1 outlines key performance metrics to examine the effectiveness of classification models. Overall Accuracy measures the proportion of correctly classified instances (both true positives and true negatives) out of all predictions, providing a general sense of model performance. Positive Predictive Value (Precision) calculates the accuracy of positive predictions, indicating how often the model correctly identifies positive cases. Sensitivity (Recall) quantifies the model's ability to correctly detect positive instances, emphasizing its ability to minimize false negatives. The Harmonic Mean (F1-Score) combines precision and recall, offering a balanced measure when dealing with imbalanced classes. Lastly, True Negative Rate (Specificity) focuses on the model's ability to correctly identify negative instances, minimizing false positives. These metrics collectively provide a comprehensive evaluation of a model's performance, ensuring that both positive and negative classifications are appropriately handled.

Table 1:

Performance Metrics Evaluation

SL. No | Evaluation Metric | Formula
1 | Overall Accuracy | (TP + TN) / (TP + TN + FP + FN)
2 | Positive Predictive Value (Precision) | TP / (TP + FP)
3 | Sensitivity (Recall) | TP / (TP + FN)
4 | Harmonic Mean (F1-Score) | 2 x (Precision x Recall) / (Precision + Recall)
5 | True Negative Rate (Specificity) | TN / (TN + FP)
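
For reference, the NumPy sketch below computes the Table 1 metrics directly from binary confusion-matrix counts; the toy label vectors are illustrative only.

```python
# Computing the Table 1 metrics from binary predictions (NumPy sketch).
import numpy as np

def classification_metrics(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "precision": precision,
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall),
        "specificity": tn / (tn + fp),
    }

print(classification_metrics([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 0, 1]))
```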

Table 2 presents a comparison of the performance metrics for various ML algorithms, including Support Vector Machine (SVM), CNN, LSTM, Gated Recurrent Unit (GRU), and the proposed model. While the SVM and CNN models show moderate performance with accuracy values around 0.77 and 0.79, the LSTM and GRU models demonstrate substantial improvements, achieving accuracies of 0.85 and 0.89, respectively. The proposed model outperforms all others with an impressive accuracy of 0.99, alongside high precision (0.98), recall (0.98), and specificity (0.98), indicating its superior ability to correctly classify both positive and negative instances in the dataset. This exceptional performance highlights the effectiveness of the proposed model in providing accurate and reliable results for the targeted task.

Table 2:

Performance Comparison Across Different Models

Algorithm | Accuracy | Precision | Recall | F1-Score | Specificity
SVM | 0.77 | 0.78 | 0.78 | 0.79 | 0.78
CNN | 0.79 | 0.80 | 0.79 | 0.80 | 0.80
LSTM | 0.85 | 0.86 | 0.88 | 0.87 | 0.88
GRU | 0.89 | 0.90 | 0.90 | 0.89 | 0.90
Proposed Model | 0.99 | 0.98 | 0.98 | 0.97 | 0.98

Figure 2 provides a visual comparison of the performance metrics (accuracy, precision, recall, F1-score, and specificity) across SVM, CNN, LSTM, GRU, and the recommended approach. It clearly shows that while traditional models like SVM and CNN offer modest performance, with accuracy values around 0.77 and 0.79, the LSTM and GRU models significantly outperform them, with accuracies reaching 0.85 and 0.89, respectively. The proposed model stands out as the most effective, achieving near-perfect accuracy of 0.99 and excelling in all other metrics. This reinforces the model's ability to accurately detect and classify both positive and negative instances, demonstrating its superior performance compared with the other approaches tested.

Figure 2:

Performance Evaluation of Different Models

Figure 3 presents the ROC curve values, demonstrating the progressive improvement in the models' ability to distinguish among classes as we move from SVM to the proposed model. Initially, the SVM and CNN models show relatively low performance, with values starting around 0.34 and 0.38, respectively. As we progress through the LSTM and GRU models, their ROC scores increase, reflecting better classification capability, with GRU reaching 0.79 by the fifth point. The proposed model consistently outperforms the others, starting at 0.4 and steadily climbing to a perfect score of 1.0 by the eighth point. This sharp rise indicates that the proposed model is exceptionally effective at correctly recognizing both positive and negative instances, achieving optimal sensitivity and specificity across all thresholds.

Figure 3:

ROC Curves across different algorithms

5.
Conclusion

In conclusion, the proposed Smart IoT-based Health Care Environment, powered by an optimized BERT model, demonstrates exceptional performance in processing and analysing healthcare data from IoT devices. The model's bidirectional nature, combined with resource optimization, ensures that it can deliver accurate predictions while being computationally efficient, even in resource-constrained environments. The results, highlighted by a 99% accuracy rate, showcase its ability to effectively handle real-time healthcare data, making it highly suitable for applications such as patient monitoring, disease detection, and medical decision support. Furthermore, the proposed system is versatile, capable of being integrated into various IoT-based healthcare setups, offering scalability and adaptability for diverse healthcare applications. However, future research can focus on enhancing the model's robustness by incorporating multimodal data sources, such as audio and video, alongside sensor data. Further investigation into real-world deployment scenarios, including privacy and security considerations for patient data, will also be crucial for ensuring the model's practical application in healthcare settings. Expanding the system's capabilities to handle larger datasets and integrating with electronic health record (EHR) systems could open new avenues for predictive analytics and personalized healthcare solutions.

Language: English
Page range: 133 - 147
Submitted on: Sep 15, 2024
Accepted on: Oct 23, 2024
Published on: Feb 24, 2025
Published by: Future Sciences For Digital Publishing
In partnership with: Paradigm Publishing Services
Publication frequency: 2 issues per year

© 2025 Metti Vinodh Kumar, G.P. Ramesh, published by Future Sciences For Digital Publishing
This work is licensed under the Creative Commons Attribution 4.0 License.