
ChatGPT-Powered IoT Devices Using Data Regularization for Efficient Management Systems

By: Shilpa Patil and T. Anne Ramya
Open Access | Feb 2025

Figures & Tables

Figure 1: Proposed Methodology

Figure 2: Sample Multi-Channel FECG Datasets Utilised for Training and Testing the Module

Figure 3: T5 Architecture

Figure 4: Performance Metrics Compared with Other Models

Figure 5: Comparative Assessment of Distinct Models

Mathematical Formulas for the Evaluation Metrics’ Computation

Sl. No | Evaluation Metric | Mathematical Expression
01 | Accuracy | (TP + TN) / (TP + TN + FP + FN)
02 | Sensitivity (Recall) | TP / (TP + FN) × 100
03 | Specificity | TN / (TN + FP)
04 | Precision | TP / (TP + FP)
05 | F1-Score | 2 × (Precision × Recall) / (Precision + Recall)
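For concreteness, these metrics can be computed directly from confusion-matrix counts. The Python sketch below is illustrative only (it is not the authors' code); the example counts are hypothetical, and Precision uses the standard TP / (TP + FP) form given above.

```python
def evaluation_metrics(tp, tn, fp, fn):
    """Compute the five evaluation metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    recall = tp / (tp + fn)               # sensitivity
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return {
        "Accuracy": accuracy,
        "Sensitivity (Recall)": recall * 100,  # the table reports recall as a percentage
        "Specificity": specificity,
        "Precision": precision,
        "F1-Score": f1,
    }

# Hypothetical counts, for illustration only
print(evaluation_metrics(tp=90, tn=85, fp=10, fn=15))
```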

Specification of the FECG Datasets

Sl. No | Recording Characterization | Specification
1 | Recording Period | 38 to 41 weeks of gestation
2 | Signals from Maternal Abdomen | 4
3 | Type of Electrodes | Ag-AgCl electrode
4 | Bandwidth | 1 Hz - 150 Hz
5 | Filtering Type | Digital filtering
6 | Sampling Rate | 1 kHz
7 | Resolution | 16 bits
8 | Total Number of Datasets | 5089
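The bandwidth, filtering, and sampling entries above describe a standard digital band-pass preprocessing stage. The sketch below shows one plausible implementation with SciPy, assuming a 1-150 Hz Butterworth band-pass at the stated 1 kHz sampling rate; the filter order, the zero-phase filtering choice, and the synthetic 4-channel input are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000                # sampling rate from the table: 1 kHz
LOW, HIGH = 1.0, 150.0   # pass band from the table: 1 Hz - 150 Hz

def bandpass_fecg(signal, fs=FS, low=LOW, high=HIGH, order=4):
    """Zero-phase Butterworth band-pass filter (order=4 is an assumption)."""
    b, a = butter(order, [low, high], btype="bandpass", fs=fs)
    return filtfilt(b, a, signal)  # filters along the last (time) axis

# Illustration: 10 s of synthetic noise on the 4 abdominal channels listed above
rng = np.random.default_rng(0)
raw = rng.standard_normal((4, 10 * FS))
filtered = bandpass_fecg(raw)
print(filtered.shape)  # (4, 10000)
```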

Parameters of T5 Model

Parameter | Description | Value
Model Size | T5 variant, which fixes the parameter count | T5-Small
Input Length | Maximum sequence length for input text | 512 tokens
Output Length | Maximum sequence length for output text | 128 tokens
Vocabulary Size | Size of the token vocabulary | 32,000
Number of Layers | Encoder and decoder layers in the model | 6 encoder, 6 decoder
Hidden Size | Size of the hidden representation in the encoder/decoder | 512
Feed-Forward Size | Size of the feed-forward network in each transformer block | 2048
Number of Attention Heads | Attention heads in the self-attention framework | 8
Dropout Rate | Dropout probability applied to attention weights and feed-forward layers | 0.1
Positional Embeddings | Fixed sinusoidal embeddings used for positional information | Yes
Optimizer | Algorithm used for optimization | Adafactor
Learning Rate | Initial learning rate for training | 0.001
Training Steps | Total number of training steps | ~10,000
Batch Size | Samples processed in one forward/backward pass | 32
Weight Initialization | Method for initializing model weights | Xavier initialization
Pre-trained Tasks | Text-to-text tasks the model was pre-trained on | Summarization, classification
Fine-tuning Tasks | Downstream tasks for fine-tuning | FHR classification, FECG signal processing
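For reference, the hyperparameters in this table map onto a Hugging Face Transformers T5Config roughly as shown below. This is a minimal sketch, not the authors' training script: the listed values are copied from the table, everything not listed falls back to library defaults, and the Adafactor flag settings are conventional choices for using a fixed learning rate, not details taken from the paper.

```python
from transformers import T5Config, T5ForConditionalGeneration
from transformers.optimization import Adafactor

# T5-Small-scale configuration assembled from the table above
config = T5Config(
    vocab_size=32000,       # vocabulary size
    d_model=512,            # hidden size
    d_ff=2048,              # feed-forward size
    num_layers=6,           # encoder layers
    num_decoder_layers=6,   # decoder layers
    num_heads=8,            # attention heads
    dropout_rate=0.1,       # dropout rate
)
model = T5ForConditionalGeneration(config)

# Fixed learning rate of 0.001 per the table; Adafactor requires the two
# flags below to be disabled when an explicit lr is supplied.
optimizer = Adafactor(model.parameters(), lr=0.001,
                      scale_parameter=False, relative_step=False)
```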
Language: English
Page range: 179–191
Submitted on: Oct 6, 2024
Accepted on: Nov 4, 2024
Published on: Feb 24, 2025
Published by: Future Sciences For Digital Publishing
In partnership with: Paradigm Publishing Services
Publication frequency: 2 issues per year

© 2025 Shilpa Patil, T. Anne Ramya, published by Future Sciences For Digital Publishing
This work is licensed under the Creative Commons Attribution 4.0 License.