<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0">
    <channel>
        <title>Journal of Smart Internet of Things Feed</title>
        <link>https://sciendo.com/journal/JSIOT</link>
        <description>Sciendo RSS Feed for Journal of Smart Internet of Things</description>
        <lastBuildDate>Sun, 10 May 2026 13:18:18 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>https://github.com/jpmonette/feed</generator>
        <image>
            <title>Journal of Smart Internet of Things Feed</title>
            <url>https://sciendo-parsed.s3.eu-central-1.amazonaws.com/64ba2bc521e2f80f0fcb4cfb/cover-image.jpg</url>
            <link>https://sciendo.com/journal/JSIOT</link>
        </image>
        <copyright>All rights reserved 2026, Future Sciences For Digital Publishing</copyright>
        <item>
            <title><![CDATA[Integrated information modeling-based cloud-connected ultrasound diagnostic systems]]></title>
            <link>https://sciendo.com/article/10.2478/jsiot-2025-0002</link>
            <guid>https://sciendo.com/article/10.2478/jsiot-2025-0002</guid>
            <pubDate>Sun, 15 Jun 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[

The increased use of portable, intelligent diagnostic devices has spurred the combined use of ultrasound imaging, cloud computing, and machine learning (ML). This paper outlines the design and implementation of an intelligent diagnosis framework for networked portable ultrasound systems built on cloud infrastructure. The system is structured around a modular pipeline that simulates cloud transmission effects, extracts waveform features, and applies machine learning models for anomaly detection. Functional disturbances such as signal delay, packet loss, and overheating were simulated, and signal-based characteristics were derived to detect anomalies. A combination of autoencoder, isolation forest, and one-class support vector machine (SVM) models achieved a detection rate of up to 94% across four anomaly classes. Simulations of adaptive routing also demonstrated a power efficiency gain of 18%. The results verify the practicability of incorporating real-time monitoring and ML-aided diagnostics into cloud-linked ultrasound machines.
]]></description>
            <category>ARTICLE</category>
        </item>
        <item>
            <title><![CDATA[Investigating the role of NLP in bridging human and machine communication]]></title>
            <link>https://sciendo.com/article/10.2478/jsiot-2025-0001</link>
            <guid>https://sciendo.com/article/10.2478/jsiot-2025-0001</guid>
            <pubDate>Sun, 15 Jun 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[

Natural Language Processing (NLP) has emerged as a transformative force across multiple domains, enhancing communication, automation, and decision-making. This review synthesizes recent advancements in NLP, with a particular focus on machine translation, bias detection, sentiment analysis, and AI-driven chatbots. The integration of artificial intelligence has significantly improved machine translation accuracy, yet challenges such as algorithmic bias and ethical considerations persist. Studies also highlight NLP’s role in cross-cultural communication, information retrieval, and big data analytics, particularly in developing economies. Furthermore, research on Large Language Models (LLMs) underscores both their potential in automating knowledge retrieval and their susceptibility to adversarial manipulation. Additionally, NLP applications in education, healthcare, and urban planning demonstrate their expanding influence in real-world scenarios. However, concerns regarding data privacy, transparency, and inclusivity remain pressing issues. By evaluating current methodologies, challenges, and future directions, this review underscores the need for ethical AI development and the continuous refinement of NLP models to foster responsible and inclusive digital transformation.
]]></description>
            <category>ARTICLE</category>
        </item>
        <item>
            <title><![CDATA[Machine learning-based classification of DNA sequences for diabetes mellitus type prediction]]></title>
            <link>https://sciendo.com/article/10.2478/jsiot-2025-0003</link>
            <guid>https://sciendo.com/article/10.2478/jsiot-2025-0003</guid>
            <pubDate>Sun, 15 Jun 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[

This study uses machine learning (ML) algorithms to classify DNA sequences and predict diabetes risk. Using the INS insulin dataset, the researchers explore multiple preprocessing strategies, including k-mer representations, ordinal encodings, oversampling, and min-max normalization of DNA sequences from diabetic and non-diabetic subjects. Model performance was enhanced by feature selection techniques such as F-regression and Mutual Information. Four bioinformatics classifiers, Random Forest, Decision Tree, Gaussian Naive Bayes, and Support Vector Machines (SVM), were evaluated on accuracy, precision, recall, and F1-score. Results demonstrated that Random Forest achieved the highest accuracy (0.89 with the F-regressor), followed by SVM and Decision Tree, while Gaussian Naive Bayes showed moderate performance. The findings highlight the effectiveness of machine learning in uncovering genetic patterns associated with diabetes and emphasize the potential of DNA-based predictive modeling in precision medicine. This work contributes to advancing computational genomics and provides a foundation for early diagnosis and personalized treatment strategies for diabetes mellitus.
]]></description>
            <category>ARTICLE</category>
        </item>
        <item>
            <title><![CDATA[Analyzing how MIS can optimize the distribution of energy in smart grids, focusing on data-driven decision-making processes]]></title>
            <link>https://sciendo.com/article/10.2478/jsiot-2025-0004</link>
            <guid>https://sciendo.com/article/10.2478/jsiot-2025-0004</guid>
            <pubDate>Sun, 15 Jun 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[

This paper evaluates how management information systems (MIS) advance the capabilities of smart grids by helping operators make optimal decisions in areas such as energy distribution and demand management, and by improving the mechanisms for monitoring smart grids. The use of MIS in energy distribution has changed the administration of the smart grid: an integrated power network that uses advanced technologies for the efficient distribution and consumption of resources and for sustainability. MIS offers the basis for automatically and dynamically developing new energy networks. A qualitative analysis of the gathered information was conducted to assess the contribution of MIS to enhancing the effectiveness of smart grids. Case studies built on international experience with smart grids and related empirical evidence are examined to assess their efficiency. Primary data sources include operational statistics, trends in energy use and other parameters, and the analytical models employed by MIS frameworks. Statistical modeling and machine learning algorithms are used to run performance analyses and make predictions under varying grid conditions. The results point directly to the potential of MIS for optimizing energy distribution in smart grids. Through competent data analytics and forecasting, MIS enables real-time decision-making and control over the energy grid, reducing downstream energy wastage while improving grid reliability. MIS is an important enabler of the shift towards sustainable energy systems, supporting optimal resource allocation and the incorporation of renewable resources. This research underscores the need for greater investment in MIS technologies and for integrated public-private collaboration among regulators, technologists, and energy providers to realize an enhanced intelligent grid.
]]></description>
            <category>ARTICLE</category>
        </item>
        <item>
            <title><![CDATA[Feasible Implementation of Explainable AI Empowered Secured Edge Based Health Care Systems]]></title>
            <link>https://sciendo.com/article/10.2478/jsiot-2024-0008</link>
            <guid>https://sciendo.com/article/10.2478/jsiot-2024-0008</guid>
            <pubDate>Tue, 25 Feb 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[

The infusion of Explainable Artificial Intelligence (XAI) into secured edge-based healthcare systems addresses the critical challenges of ensuring trust, transparency, and security in sensitive medical applications. Existing healthcare systems leveraging traditional AI methods often face issues such as lack of interpretability, data privacy risks, and inefficiencies in real-time decision-making. These limitations hinder user trust and the adoption of AI solutions in clinical and edge environments. To overcome these challenges, we propose an XAI-empowered secured edge-based healthcare framework utilizing deep learning (DL) models, specifically Long Short-Term Memory (LSTM) networks, for accurate and interpretable diagnosis. The system incorporates the UNSW dataset to train and validate the model for healthcare anomaly detection and prediction tasks. By embedding XAI methodologies, the proposed framework ensures that decision-making processes are transparent and understandable to healthcare professionals, fostering trust and enabling better clinical decision-making. Our implementation addresses the critical need for secure and real-time healthcare analytics at the edge while maintaining high accuracy and privacy. Through rigorous experimentation, the proposed system achieves a remarkable accuracy of 99%, demonstrating its potential to revolutionize edge-based healthcare solutions. This research highlights the synergy between XAI, edge computing, and DL techniques in advancing secured and interpretable healthcare systems for real-world applications.
]]></description>
            <category>ARTICLE</category>
        </item>
        <item>
            <title><![CDATA[Ensembled Combination of Q-Learning and Deep Extreme Learning Machine to Achieve High Performance and Low Latency in Handling Large IoT and Fog Nodes]]></title>
            <link>https://sciendo.com/article/10.2478/jsiot-2024-0015</link>
            <guid>https://sciendo.com/article/10.2478/jsiot-2024-0015</guid>
            <pubDate>Mon, 24 Feb 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[

The proliferation of IoT devices and the adoption of Fog computing architectures have transformed data processing and real-time decision-making across various domains. These advancements enable seamless connectivity and distributed computational power, fostering the development of more intelligent systems. However, managing large-scale IoT and Fog networks presents critical challenges, including high latency, inefficient resource utilization, and scalability limitations, which can undermine system performance. To address these challenges, this research proposes an innovative framework combining Q-Learning and Deep Extreme Learning Machine (DELM). Q-Learning optimizes resource allocation by intelligently learning and adapting to dynamic network conditions, ensuring efficient utilization of resources. It enhances decision-making processes by identifying optimal strategies to manage complex IoT and Fog environments. Meanwhile, DELM provides high-speed and accurate data processing capabilities, enabling it to handle the intensive computational demands of large-scale networks. By leveraging the complementary strengths of these methods, the framework aims to reduce latency and improve resource utilization and scalability in large-scale environments. Extensive experimental evaluations validate the framework’s effectiveness, demonstrating significant reductions in latency, improved computational efficiency, and enhanced throughput. Furthermore, the framework efficiently handles complex data processing tasks with minimal overhead, making it suitable for diverse real-time applications across IoT and Fog systems. This study highlights the transformative potential of the proposed approach, offering high performance and real-time efficiency for complex, large-scale IoT and Fog computing environments.
]]></description>
            <category>ARTICLE</category>
        </item>
        <item>
            <title><![CDATA[Enhancing Blockchain Framework Using Web3.0 for IoT Based Plant Disease Detection System]]></title>
            <link>https://sciendo.com/article/10.2478/jsiot-2024-0019</link>
            <guid>https://sciendo.com/article/10.2478/jsiot-2024-0019</guid>
            <pubDate>Mon, 24 Feb 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[

Plant diseases are a major bottleneck for farmers, who lack super-intelligent methods to monitor and diagnose them. With the advent of Artificial Intelligence and the Internet of Things (IoT), predicting plant diseases at early stages has given farmers real hope of boosting agricultural productivity, which in turn strengthens the country’s economy. Current IoT-AI-driven systems gather data from agricultural fields and integrate strong communication systems for the early prediction and diagnosis process. Although these intelligent systems have several advantages, they suffer from varied security challenges in the form of data breaches and privacy problems, accelerating the demand for cognitive security systems. In healthcare, the protection of patient data remains a significant concern due to the sensitivity and value of medical information, especially when transmitted over the Internet, and this has heightened the demand for secure systems that safeguard against data breaches and privacy issues. Similarly, in agriculture, security challenges have been addressed using Web 3.0 and Blockchain technologies, which are favored for their immutable and decentralized features. This research proposes an advanced Web 3.0 ensemble hybrid blockchain framework to enhance authentication security within the agricultural sector. To further strengthen the authentication process, chaotic maps are used to generate highly dynamic hashes during the creation of genesis blocks, ensuring that all data is securely stored under the recommended approach. The framework was tested on the Ethereum blockchain using Web 3.0, with Python 3.19 as the primary programming language for developing the various interfaces. The security strength of the framework was thoroughly assessed using the NIST standard tests, and its robustness was compared with other blockchain models. The results demonstrate that the proposed framework provides stronger defenses against various attacks and surpasses comparable approaches in terms of complexity and robustness.
]]></description>
            <category>ARTICLE</category>
        </item>
        <item>
            <title><![CDATA[Smart IoT-Based Healthcare Environment for Effective Information Sharing Using Resource-Constrained LLM Models]]></title>
            <link>https://sciendo.com/article/10.2478/jsiot-2024-0017</link>
            <guid>https://sciendo.com/article/10.2478/jsiot-2024-0017</guid>
            <pubDate>Mon, 24 Feb 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[

The integration of Internet of Things (IoT) technologies with healthcare systems (HS) has revolutionized patient monitoring, diagnosis, and treatment. However, effective information sharing in such systems is hindered by resource constraints, including limited computational capacity, energy consumption, and the need for real-time data processing. Existing systems often struggle to deploy large language models (LLMs) in resource-constrained environments, limiting their ability to analyse and communicate critical healthcare information effectively. To resolve these issues, this study recommends a smart IoT-based healthcare environment utilizing a lightweight BERT (Bidirectional Encoder Representations from Transformers) model for efficient and scalable information sharing. By optimizing BERT for resource-limited IoT devices, the proposed system ensures real-time processing of patient data while maintaining accuracy and efficiency. The system was evaluated using the MIMIC-III clinical dataset, focusing on real-time health monitoring and communication between connected devices. Results demonstrated improved computational efficiency, reduced latency, and enhanced accuracy in extracting and sharing critical healthcare information. This innovative approach bridges the gap between IoT and healthcare by providing a resource-efficient solution for intelligent information sharing.
]]></description>
            <category>ARTICLE</category>
        </item>
        <item>
            <title><![CDATA[Optimized Deep learning Frameworks for the Medical Image Transmission in IoMT Environment]]></title>
            <link>https://sciendo.com/article/10.2478/jsiot-2024-0018</link>
            <guid>https://sciendo.com/article/10.2478/jsiot-2024-0018</guid>
            <pubDate>Mon, 24 Feb 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[

The Internet of Medical Things (IoMT) is transforming healthcare by enabling interconnected medical devices and systems to facilitate efficient data collection, transmission, and analysis. While IoMT has significantly improved real-time monitoring and personalized care, the transmission of high-resolution medical images remains a challenge due to bandwidth constraints, latency issues, data loss, and computational overhead. Efficient and secure medical image transmission is critical to ensuring reliable diagnostics and timely patient care in this ecosystem. This research presents an optimized Deep Learning (DL) architecture developed to overcome the limitations of medical image transmission in IoMT environments. The proposed solution incorporates Convolutional Neural Networks (CNNs) for spatial feature extraction and dimensionality reduction while preserving diagnostic-critical information, and Long Short-Term Memory (LSTM) networks to manage sequential data and mitigate transmission issues such as packet loss and latency. The framework incorporates robust encryption mechanisms to ensure data security without significantly increasing computational overhead. Once predictions are made, the data is securely transferred to the cloud for further analysis and storage. Furthermore, Hippopotamus Optimization is utilized to enhance the model’s performance and fine-tune hyperparameters, improving both efficiency and accuracy. Performance evaluations were conducted using real-world medical image datasets under varying IoMT network conditions. The results demonstrate that the proposed CNN-LSTM framework delivers superior performance across key metrics, including Peak Signal-to-Noise Ratio (PSNR), accuracy, F1 score, specificity, and sensitivity. Additionally, the framework optimizes encryption and decryption times and reduces bandwidth consumption, ensuring efficient and secure data transmission. This approach showcases a significant advancement in IoMT-based medical imaging, paving the way for enhanced reliability and efficiency in healthcare delivery systems.
]]></description>
            <category>ARTICLE</category>
        </item>
        <item>
            <title><![CDATA[Design and Implementation of the Deep Reinforcement Energy Efficient Routing for the Fog-BAN-Cloud of Things using Smart Health care applications]]></title>
            <link>https://sciendo.com/article/10.2478/jsiot-2024-0011</link>
            <guid>https://sciendo.com/article/10.2478/jsiot-2024-0011</guid>
            <pubDate>Mon, 24 Feb 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[

The integration of Fog Computing, Body Area Networks (BANs), and the Cloud of Things (CoT) has revolutionized smart healthcare applications, enabling real-time data processing, seamless connectivity, and efficient resource management. However, the growing demands for energy-efficient operations and reliable data transmission in these systems present significant challenges. This study proposes a Deep Reinforcement Learning (DRL)-based energy-efficient routing algorithm tailored for Fog-BAN-Cloud architectures in healthcare applications. The proposed solution leverages DRL models to dynamically optimize routing paths and scheduling policies, minimizing energy consumption while maintaining high Quality of Service (QoS). The routing algorithm prioritizes low-energy paths in BAN and Fog networks. The paper specifically employs Proximal Policy Optimization (PPO), a reinforcement learning technique, to optimize routing decisions by considering factors including energy consumption, network congestion, and data traffic conditions. PPO is used to dynamically adjust the policy updates, ensuring stability while reducing power usage and improving data transmission efficiency. Extensive simulations highlight the performance of the proposed model, demonstrating potential improvements in energy efficiency, reduced latency, and enhanced data reliability compared to traditional methods. This work highlights the potential of intelligent algorithms to address the unique challenges of healthcare-driven IoT ecosystems, providing a scalable and sustainable solution for energy-efficient routing in Fog-BAN-Cloud environments. The proposed approach is a promising strategy for optimizing IoT-driven smart healthcare systems.
]]></description>
            <category>ARTICLE</category>
        </item>
        <item>
            <title><![CDATA[ChatGPT-Powered IoT Devices Using Data Regularization for Efficient Management Systems]]></title>
            <link>https://sciendo.com/article/10.2478/jsiot-2024-0020</link>
            <guid>https://sciendo.com/article/10.2478/jsiot-2024-0020</guid>
            <pubDate>Mon, 24 Feb 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[

Fetal Electrocardiogram (FECG) signals represent vital instruments for examining any irregularities in the heart’s functioning. Contemporary wearable technologies, including smartwatches and smartphones, now come with sophisticated sensors and computational systems designed to gather and process FECG signals from users. Lately, large language models such as T5 have attracted interest due to their capabilities in handling intricate patterns within data, positioning them as promising options for classifying morphological FECG signals and detecting arrhythmias. Nevertheless, diagnosing FECG signals on devices with limited resources presents considerable challenges owing to the complicated nature of the signals and the computational demands of implementing such algorithms on wearable tools. To tackle these difficulties, this paper suggests a strategy that leverages T5-based learning methodologies to attain two main goals: (i) reducing the complexity of learning models without sacrificing diagnostic precision and (ii) ensuring performance in resource-limited wearable devices for ongoing monitoring of FECG signals. The research further investigates the implementation of the suggested T5-based algorithm through software codesign techniques to improve resource efficiency, concentrating on factors like reduced latency, decreased hardware usage, and enhanced energy efficiency. Comprehensive experiments were conducted and validated using diverse FECG datasets. The proposed T5-based methodology demonstrated significant improvements in diagnosing FECG signals compared to other learning frameworks, showcasing its effectiveness in managing complex data patterns and achieving high diagnostic performance. While the experimental findings highlight the T5-based model’s potential for use in wearable devices, the study focused on the algorithm’s adaptability and performance in software environments, paving the way for future exploration into resource-efficient implementations suitable for wearable applications.
]]></description>
            <category>ARTICLE</category>
        </item>
        <item>
            <title><![CDATA[Health-FoTs: A Latency-Aware Fog-Based IoT Environment for Efficient Monitoring of the Body’s Vital Parameters in a Smart Healthcare Environment]]></title>
            <link>https://sciendo.com/article/10.2478/jsiot-2024-0010</link>
            <guid>https://sciendo.com/article/10.2478/jsiot-2024-0010</guid>
            <pubDate>Mon, 24 Feb 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[

The Internet of Things (IoT), integrated with other disruptive technologies, is gaining prominence and has extended its reach across domains such as automotive, healthcare, and automation. IoT connects billions of devices and people to bring substantial benefits to society. Because IoT devices operate with a centralized cloud environment, pervasive and continuous monitoring of user information can be facilitated. However, owing to the inherent characteristics of the cloud, such as high end-to-end latency and heavy bandwidth consumption, handling the large volume of data from IoT devices becomes a bottleneck when implementing IoT for a smart healthcare system that aids treatment and diagnosis. To address these issues, this research article proposes a powerful paradigm, Health-FoTs (Fog of Things), which incorporates fog devices that process and filter data near the IoT nodes, improving the quality of service. To further improve communication speed, distributed fogs are introduced between the IoT devices and the cloud to process healthcare data, providing an optimal solution to latency problems and bandwidth requirements. The complete experimentation is carried out using NodeMCU and Raspberry Pi 3 boards, with MQTT (Message Queuing Telemetry Transport) as the main communication protocol between the IoT and fog nodes. To evaluate the proposed model, performance metrics such as latency, throughput, and communication cost are measured and compared with traditional environments. Results demonstrate that the Health-FoTs environment delivers promising performance, with 23% lower latency, 32% higher throughput, and 25% less communication overhead than a traditional IoT infrastructure, confirming its place in high-speed healthcare environments.
]]></description>
            <category>ARTICLE</category>
        </item>
        <item>
            <title><![CDATA[To Design and Implement the QoS-Aware Energy-Efficient Routing Mechanism for BAN-IoT Networks in Smart Healthcare Applications]]></title>
            <link>https://sciendo.com/article/10.2478/jsiot-2024-0012</link>
            <guid>https://sciendo.com/article/10.2478/jsiot-2024-0012</guid>
            <pubDate>Mon, 24 Feb 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[

The integration of Internet of Things (IoT) and miniature implantable sensors has significantly advanced Body Area Networks (BAN) for real-time monitoring of patients' physiological vital signs, including Electrocardiogram (ECG), Electromyogram (EMG), and Electroencephalogram (EEG). However, the limited bandwidth of wearable nodes and frequent body movements result in recurrent topological changes, causing unreliable and delayed transmissions. To address these challenges, a reliable and low-latency routing mechanism is required to ensure lossless data transfer and support timely clinical treatments. This paper proposes a QoS-aware routing protocol that dynamically calculates optimal routing paths by integrating Chaotic Theory with the Honey Badger Optimization (HBO) algorithm. The proposed protocol selects latency-aware, energy-efficient paths using key metrics such as Link Quality Factor (LQF), Distance (D), Received Signal Strength Indicator (RSSI), and Number of Hops (NoH) to achieve reliable and efficient data transmission. Comprehensive experiments are conducted in the Python 3.9 environment, and performance metrics including Packet Delivery Ratio (PDR), End-to-End Delay, Throughput, Routing Load, and Control Packet Overhead are calculated and compared with other existing optimization models. The proposed model is also statistically validated against state-of-the-art protocols. Results demonstrate superior performance and stability, effectively overcoming existing bottlenecks and providing improved QoS for IoT-enabled BAN systems in smart healthcare applications.
]]></description>
            <category>ARTICLE</category>
        </item>
        <item>
            <title><![CDATA[A Smart Irrigation System Using the IoT and Advanced Machine Learning Model]]></title>
            <link>https://sciendo.com/article/10.2478/jsiot-2024-0009</link>
            <guid>https://sciendo.com/article/10.2478/jsiot-2024-0009</guid>
            <pubDate>Mon, 24 Feb 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[

The increasing global demand for efficient water management has underscored the importance of smart irrigation solutions in agriculture. This research introduces a Smart Irrigation System that integrates the Internet of Things (IoT) with an advanced Machine Learning (ML) framework to optimize water usage while ensuring sustainability. The proposed model employs an ensemble of Decision Tree Classifier (DTC) and Random Forest Classifier (RFC) algorithms to analyse critical environmental parameters, including soil moisture, temperature, pH value, and soil variants. Utilizing the meticulously processed Great Time dataset for training and evaluation, the model demonstrates exceptional applicability across diverse agricultural scenarios. Traditional irrigation models exhibit lower accuracy and limited adaptability to varying environmental conditions, creating a need for more robust and efficient approaches. Addressing this gap, the IoT-enabled system leverages real-time data from connected devices and advanced analytics to adapt to dynamic environmental changes. By offering precise irrigation scheduling, the proposed framework promotes resource-efficient water usage, contributing to sustainable farming practices. The ensemble model achieved an impressive accuracy of 98.7%, significantly outperforming conventional methods while maintaining computational efficiency. This study highlights the strength of combining IoT and ML to advance agricultural practices. Experimental outcomes emphasize the scalability, robustness, and reliability of the proposed model, presenting it as a viable solution to tackle water scarcity challenges and enhance crop productivity sustainably.
]]></description>
            <category>ARTICLE</category>
        </item>
        <item>
            <title><![CDATA[Detection of Influential nodes using Hybrid Deep learning methods in IIOT environment]]></title>
            <link>https://sciendo.com/article/10.2478/jsiot-2024-0016</link>
            <guid>https://sciendo.com/article/10.2478/jsiot-2024-0016</guid>
            <pubDate>Mon, 24 Feb 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[

This work detects influential nodes in an Industrial Internet of Things (IIoT) environment using the Beluga Whale Optimization (BWO) algorithm integrated with residual Long Short-Term Memory (LSTM) networks. In IIoT networks, identifying influential nodes is crucial for optimizing data transformation, minimizing latency, and improving overall network effectiveness. The proposed method leverages the exploration and exploitation capabilities of the BWO algorithm to optimize the parameters of a Recurrent Long Short-Term Memory (RLSTM) model, which is used to predict the behavior of nodes and identify key influencers within the network. The integration of BWO with RLSTM improves the accuracy of node predictions by dynamically adjusting the RLSTM’s hyperparameters based on the network’s evolving data. Extensive experiments conducted in a simulated IIoT environment highlight the performance of the proposed model in enhancing prediction accuracy, reducing computational overhead, and improving network efficiency compared to traditional methods. The results highlight the potential of this hybrid optimization technique for real-time applications in smart manufacturing, predictive maintenance, and other IIoT-driven sectors.
]]></description>
            <category>ARTICLE</category>
        </item>
        <item>
            <title><![CDATA[Reconceiving the Edge Intelligence Based IoT Devices for an effective Classification of ECG Systems]]></title>
            <link>https://sciendo.com/article/10.2478/jsiot-2024-0013</link>
            <guid>https://sciendo.com/article/10.2478/jsiot-2024-0013</guid>
            <pubDate>Mon, 24 Feb 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[

The rise of Internet of Things (IoT)-enabled healthcare systems has led to the development of real-time, intelligent solutions for medical data processing, especially in electrocardiogram (ECG) signal classification. As wearable IoT devices for health monitoring become more widespread, there is increasing demand for an effective method that can run on edge devices, ensuring low-latency processing and real-time decision-making. This research introduces a novel method that integrates Convolutional Neural Networks (CNN) with Long Short-Term Memory (LSTM) networks for precise ECG classification. The model utilizes CNNs for spatial feature extraction from raw ECG signals, identifying key patterns like P-waves, QRS complexes, and T-waves, which are essential for accurate classification but challenging to extract manually. To capture the sequential nature of ECG signals, the model incorporates LSTM layers, which are effective at retaining long-range dependencies and recognizing patterns indicative of cardiovascular conditions. The system is trained and validated using ECG data collected from IoT-enabled wearable sensors, ensuring real-world applicability in edge computing environments. The model is designed to handle the constraints of edge devices, such as limited computational power, while maintaining high classification accuracy. The hybrid CNN-LSTM model achieves a 99% accuracy rate, surpassing existing Machine Learning (ML) models in terms of sensitivity, specificity, and overall performance. This approach offers a promising direction for integrating AI-based analytics into IoT-driven healthcare systems, enabling real-time, accurate decision-making for early diagnosis and intervention. It enhances IoT healthcare systems' scalability and practicality, improving patient monitoring and cardiovascular health outcomes.
]]></description>
            <category>ARTICLE</category>
        </item>
        <item>
            <title><![CDATA[To Design and Develop the Hybrid Blockchain Enabled IoT System for Secured Industry 4.0 Systems]]></title>
            <link>https://sciendo.com/article/10.2478/jsiot-2024-0014</link>
            <guid>https://sciendo.com/article/10.2478/jsiot-2024-0014</guid>
            <pubDate>Mon, 24 Feb 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[

The rapid evolution of Industry 4.0 has catalyzed the integration of advanced technologies to ensure secure, efficient, and scalable industrial systems. This study introduces a Hybrid Blockchain-Enabled IoT System tailored for secured Industry 4.0 environments. By combining the decentralized and immutable nature of blockchain technology with the real-time data acquisition and analytics capabilities of the Internet of Things (IoT), the proposed framework addresses critical challenges such as data integrity, cybersecurity, and system scalability. The IoT devices serve as edge nodes, collecting and transmitting data securely through a private blockchain network to mitigate risks of tampering and unauthorized access. The system leverages a hybrid consensus mechanism to balance computational efficiency with security. For evaluation, real-time industrial data, including sensor readings, machine states, and environmental parameters, were utilized to assess the system's performance in various industrial scenarios. The results demonstrate robust security, low latency, and seamless interoperability across connected devices. Compared to traditional centralized models, the proposed system offers superior resilience against cyber threats while maintaining high throughput and reliability. This research underscores the potential of merging blockchain and IoT technologies to pioneer secure and efficient Industry 4.0 systems, paving the way for sustainable and trustworthy industrial automation.
]]></description>
            <category>ARTICLE</category>
        </item>
        <item>
            <title><![CDATA[Cognitive IoT Learning Models for Agro-Climatic Estimation Aiding Farmers in Decision Making]]></title>
            <link>https://sciendo.com/article/10.2478/jsiot-2024-0004</link>
            <guid>https://sciendo.com/article/10.2478/jsiot-2024-0004</guid>
            <pubDate>Sat, 15 Jun 2024 00:00:00 GMT</pubDate>
            <description><![CDATA[

As climate change continues to affect every nation’s agricultural system, forecasting it is regarded as one of the most significant economic factors. For farmers to survive the increasing frequency of extreme weather events that have a detrimental effect on agricultural production, climate data and services are essential. Weather forecasts are vital for agricultural resource management because they help farmers prepare ahead of time and safeguard their crops from natural calamities. Furthermore, global warming has fuelled climate volatility, resulting in unexpected hurricanes that have harmed agricultural production at its roots. These days, the daily forecasting of weather variables such as rainfall, maximum temperature, and humidity is primarily done using artificial intelligence, machine learning, and deep learning approaches. Current climate-condition models require more innovation in terms of performance and computational complexity. This study proposes a Harris Hawks-optimised deep learning network with an ensemble residual Long Short-Term Memory (R-LSTM) for climatic condition prediction that supports improved crop yields. Climate parameters are used to train the proposed model, which is then assessed against several state-of-the-art learning techniques using performance metrics such as accuracy, precision, recall, specificity, and F1-score. The results show that the suggested model attains a 97.3% accuracy rate, a 96.9% precision rate, a 96.6% recall rate, and a 97.4% F1-score, making it a very good choice for predicting climate conditions. By increasing crop productivity, it in turn contributes significantly to raising farmers’ standard of living.
]]></description>
            <category>ARTICLE</category>
        </item>
        <item>
            <title><![CDATA[Investigation of Deep Learning Models for Analysis of Heart Disorders in Smart Health Care based IoT Environment]]></title>
            <link>https://sciendo.com/article/10.2478/jsiot-2024-0001</link>
            <guid>https://sciendo.com/article/10.2478/jsiot-2024-0001</guid>
            <pubDate>Sat, 15 Jun 2024 00:00:00 GMT</pubDate>
            <description><![CDATA[

Heart disorders are a crucial global health issue, requiring effective and precise diagnostic mechanisms for early identification and timely intervention. Traditional healthcare systems face challenges such as delayed diagnosis, insufficient real-time monitoring, and difficulty in processing large volumes of sequential cardiovascular data. Existing machine learning models often struggle with capturing temporal dependencies in data and addressing issues like data noise and computational efficiency on resource-constrained IoT devices. To overcome these limitations, this research investigates the use of Gated Recurrent Units (GRU), a deep learning model known for its ability to handle sequential data effectively, for heart disorder analysis in a smart healthcare environment powered by the Internet of Things (IoT). IoT-enabled devices, such as wearable sensors, facilitate real-time data collection, which is then processed by the GRU model for accurate prediction of heart disorders. Experimental evaluations on datasets such as UCI, Framingham, Public Health, and real-time IoT data demonstrate that the proposed framework achieves superior performance with 99% prediction accuracy. By addressing challenges like data noise, energy efficiency, and privacy concerns, the framework offers a resilient, scalable, and real-time solution for heart disorder diagnosis, advancing personalized and proactive healthcare solutions.
]]></description>
            <category>ARTICLE</category>
        </item>
        <item>
            <title><![CDATA[Predictive IoT-AI Model for Cyber Threat Detection in Smart Healthcare Environments]]></title>
            <link>https://sciendo.com/article/10.2478/jsiot-2024-0002</link>
            <guid>https://sciendo.com/article/10.2478/jsiot-2024-0002</guid>
            <pubDate>Sat, 15 Jun 2024 00:00:00 GMT</pubDate>
            <description><![CDATA[

The rise of IoT-based smart healthcare environments has escalated the demand for robust and efficient cyber threat detection mechanisms, given the critical nature of these systems and their susceptibility to evolving cyber-attacks. This research presents the design of a predictive IoT-AI approach for cyber threat detection, leveraging the NSL-KDD dataset for comprehensive training and evaluation. Moreover, existing methods demonstrate excessive computational time due to their high complexity in feature extraction and sequence modeling. In this study, the proposed model combines Convolutional Neural Network (CNN) layers for spatial feature extraction with a Gated Recurrent Unit (GRU) in the intermediate layers to capture temporal patterns and evolving threat behaviours. The combination of CNN and GRU utilizes the benefits of both models: CNNs for precise feature representation and GRUs for sequence modeling, thereby enabling the identification of sophisticated and emerging cyber threats. This hybrid architecture is optimized to attain high accuracy while retaining computational efficiency, ensuring real-time applicability in IoT-enabled healthcare systems. Through meticulous design and rigorous testing, the proposed algorithm achieved an impressive accuracy of 98%, underlining its capability to effectively and reliably detect a broad spectrum of cyber threats. The experimental results not only validate the efficacy of the CNN-GRU hybrid model but also highlight its scalability and robustness in real-world healthcare IoT applications. This exceptional accuracy underscores the model’s suitability as a practical and dependable solution for safeguarding sensitive patient data and critical medical infrastructures against evolving cybersecurity challenges.
]]></description>
            <category>ARTICLE</category>
        </item>
    </channel>
</rss>