The rapid evolution of the digital realm has transformed communication, business, and daily life, giving rise to increasingly complex cyber threats. These threats pose significant risks to network infrastructure and the protection of sensitive data. As digital ecosystems grow, instant threat detection and effective intrusion classification have become paramount. Conventional intrusion detection systems (IDS), which often rely on preset signatures or fixed access rules, struggle to match the dynamic nature of contemporary cyberattacks. To address these obstacles, modern machine learning (ML) models and deep learning (DL) methods have emerged as effective tools for enhancing the accuracy and efficiency of network security systems [1].
Cybersecurity encompasses the protection of systems, networks, and data from unauthorized access, disruption, or harm. IDS are specialized tools that monitor network traffic for suspicious or malicious activity. In this context, behavioral analysis involves identifying anomalies by examining patterns in user and system behavior that frequently indicate potential threats. Firewall systems, the primary defenses in network security, regulate traffic flow through security protocols. When optimized through behavioral analysis, these firewall systems, known as Dynamic Behavior Firewalls, can adjust to emerging cyber threats and improve real-time efficiency.
Advancements in IDS have been driven by the incorporation of DL technologies. Among the most promising models is an optimized neural network for cyber threat detection, which builds on ML techniques to recognize intricate, high-dimensional patterns in network traffic data. Despite these advances, challenges remain, including addressing class imbalances in training datasets, managing the computational requirements of large-scale implementations, and ensuring the adaptability of detection models [2]. To overcome these obstacles, there is a growing demand for comprehensive, multifaceted frameworks that enhance not only detection capabilities but also the scalability and robustness of models in practical, real-world scenarios.
This study proposes a comprehensive DL-based framework for real-time intrusion detection and categorization. The proposed framework, an Optimized Fully Connected Neural Network for Threat Detection, comprises several essential components that together maximize performance. Initially, network traffic data are inspected and prepared for effective analysis using data-preparation methods. Next, to enhance classification accuracy, the framework selects the most relevant features using an Agile Correlation-Based Filter (ACBF). To address class imbalance, the Artificial Minority Data Oversampling Method (AMDOSM) is applied to improve the model's performance on underrepresented classes in the dataset. Finally, Dynamic Behavior Firewalls, which provide real-time monitoring and prevent unwanted network access, are included in the framework. By detecting and classifying various types of network intrusions, the Optimized Fully Connected Neural Network for Threat Detection model performs exceptionally well, guaranteeing high precision and recall while reducing false positives, thereby improving the dependability and efficacy of IDS [3].
Figure 1 shows a network security system that uses DL and ML models to detect and respond to malicious activity in the network landscape. It begins by analyzing network traffic data, including IP addresses, port numbers, IP protocols, and payload content. The collected data are preprocessed through data wrangling, feature selection, and transformation before being analyzed by DL methods to identify abnormal patterns that indicate potential cyber threats [4]. The system can detect various malicious activities, such as unauthorized access attempts, malware, botnets, and anomalous traffic. DL uses supervised, unsupervised, and reinforcement learning methods to refine its ability to classify traffic and make decisions over time. When a cyber threat is detected, the system sends alerts to network or system administrators through email, system logs, or Security Information and Event Management (SIEM) systems. Administrators can then take measures such as blocking or allowing traffic, isolating infected systems, updating cybersecurity measures, or conducting further investigations. The system continues to monitor network traffic in real time, using a feedback loop to enhance the precision and efficacy of threat detection. The key benefits of this approach include high detection precision, automation of threat detection, and real-time analysis, enabling rapid responses to evolving cyber threats [5,6].

Figure 1. DL and ML for anomalous network traffic analysis. DL, deep learning; ML, machine learning.
Input:
R(t) - Raw Network Traffic Data
S(t) - Structured Traffic Data
h(t) = r(t) ∀ t ∈ [1, t_f] (Preprocessed Network Data)
Preprocessing:
Normalize(D_i): scale all feature values to the range (0, 1)
Apply Agile Correlation-Based Filter (ACBF) for Feature Selection
Apply Artificial Minority Data Oversampling Method (AMDOSM) to balance dataset
Training Phase:
Define Neural Network Model (Optimized Fully Connected Neural Network)
Initialize Model Parameters: Weight Matrices W, Biases b
Define Activation Function α_M
Set Learning Rate η and Batch Size B
Loop k = 1 to Number of Training Samples:
Forward Propagation:
g_M = W_M · h_{M−1} + b_M
h_M = α_M(g_M)
Compute Loss Function L
Backpropagation:
Compute Gradient of Loss w.r.t. Weights and Biases
Update Weights: W_M = W_M − η · ∂L/∂W_M
Update Biases: b_M = b_M − η · ∂L/∂b_M
End Training Loop
Real-Time Threat Detection:
While New Network Traffic Data Arrives:
Extract Relevant Features
Normalize Features
Pass Data Through Trained Neural Network
Predict Threat Category
If Threat Detected:
Alert Network Administrators (Email, Logs, SIEM)
Execute Adaptive Security Measures:
– Block Malicious Traffic
– Isolate Infected System
– Update Security Policies
– Log Incident for Further Analysis
– Update Model Using Feedback Loop
End While
Output:
Categorized Threats with High Detection Precision
Real-Time Monitoring & Mitigation Measures
End Algorithm
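To make the update rule above concrete, the following is a minimal NumPy sketch of one forward/backpropagation step. The two-layer sizing, sigmoid activation α, and squared-error loss L are illustrative assumptions; the actual optimized architecture is defined in the System Initialization pseudocode later in this paper.

```python
# Minimal sketch of the forward/backpropagation update in the pseudocode.
# Layer sizes, the sigmoid activation, and the squared-error loss are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 41, 16, 5            # assumed layer sizes for illustration
W1, b1 = rng.normal(0, 0.1, (n_hidden, n_in)), np.zeros(n_hidden)
W2, b2 = rng.normal(0, 0.1, (n_out, n_hidden)), np.zeros(n_out)
eta = 0.001                                   # learning rate η from the pseudocode

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_step(x, y):
    """One forward pass g_M = W_M · h_{M-1} + b_M, h_M = α_M(g_M), then an SGD update."""
    global W1, b1, W2, b2
    g1 = W1 @ x + b1; h1 = sigmoid(g1)        # hidden layer
    g2 = W2 @ h1 + b2; h2 = sigmoid(g2)       # output layer
    d2 = (h2 - y) * h2 * (1 - h2)             # ∂L/∂g2 for the squared-error loss
    d1 = (W2.T @ d2) * h1 * (1 - h1)          # ∂L/∂g1, backpropagated
    W2 -= eta * np.outer(d2, h1); b2 -= eta * d2
    W1 -= eta * np.outer(d1, x);  b1 -= eta * d1
    return 0.5 * np.sum((h2 - y) ** 2)        # loss L, for monitoring
```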
This study is organized as follows: Section 2 provides a comprehensive analysis of the existing literature on current IDS, highlighting key advancements and limitations. Section 3 examines the background of Real-Time Threat Identification and Categorization within network frameworks, highlighting the importance and evolution of these technologies [7]. In Section 4, a detailed description of the algorithm, dataset, and evaluation metrics used in this study is presented, thereby providing a comprehensive understanding of the research methodology. Section 5 introduces the proposed framework and highlights a novel approach and its key components. The implementation of the proposed framework is discussed in Section 6, along with practical considerations and challenges. Finally, Section 7 presents the conclusion and highlights the key findings and contributions of the study, and Section 8 provides suggestions for future research directions. By enhancing the security and resilience of digital infrastructures, the proposed Optimized Fully Connected Neural Network for Threat Detection framework significantly contributes to the advancement of society and economic growth.
Table 1 summarizes a comprehensive review of diverse DL approaches for real-time network traffic categorization, intrusion identification, and cyber threat detection. The analyzed studies revealed an increasing trend in utilizing advanced DL architectures, such as long short-term memory (LSTM), convolutional neural networks (CNNs), Autoencoders, and Artificial Neural Networks (ANN) to bolster network security measures [8,9]. A recurring theme of this research is to address the complexities of encrypted, hostile, and unbalanced network traffic data. Numerous studies have presented hybrid models, including CNN-LSTM combinations and cost-effective learning methods, to enhance classification accuracy. Furthermore, innovative models, such as cost-sensitive LSTM (CSLSTM) and cost-sensitive CNN (CSCNN), have demonstrated performance improvements of 4–16% compared to conventional methods [10]. Additionally, frameworks employing sophisticated neural embeddings and feature selection techniques have achieved remarkable accuracy rates, reaching up to 99.71% in certain instances. The meta-analysis suggests that real-time threat detection and anomaly identification are critical. Researchers frequently utilize datasets such as NSL-KDD, CICIDS2017, and UNSW_NB15 to evaluate the model effectiveness. Some approaches utilize interpretability techniques such as SHAP analysis, SIEM integration, and behavioral analysis to enhance decision-making processes [11]. Additionally, optimization strategies, such as transfer learning and model compression, are employed to address scalability and computational efficiency issues in large-scale data environments. Despite significant progress, certain obstacles have persisted. Some models lack real-time behavioral analysis capabilities, mainly focusing on traffic classification rather than on proactive threat mitigation. Although DL techniques enhance detection accuracy, challenges such as high computational requirements, interpretability issues, and adaptability to evolving attack patterns require further investigation. In conclusion, the reviewed literature emphasizes the potential of DL as a powerful tool for network security, providing promising solutions for real-time traffic classification, anomaly detection, and cybersecurity threat identification. Future research should prioritize model explainability, real-time behavioral analysis, and adaptive techniques to develop robust IDS that can operate effectively in dynamic cyber environments.
Meta-analysis
| Author(s) | Year | Key findings | Method used | Advantage | Disadvantage |
|---|---|---|---|---|---|
| Mestry and Rathi | 2022 | Real-time malicious network detection in IoT | CNN-LSTM, CICFlowMeter | High detection accuracy | Needs feature extraction tools |
| Ismard | 2022 | Malicious network traffic detection | DL | Enhances security, reduces economic loss | No categorization or behavioral analysis |
| Islam et al. | 2022 | Framework for secure traffic classification | 1D-CNN, Flow-time-based features | High accuracy in encrypted traffic | – |
| Thirimanne et al. | 2022 | Real-time IDS using DNN | DNN, NSL-KDD dataset | Effective feature extraction | Moderate accuracy |
| Gürfidan et al. | 2023 | ML/DL-based real-time anomaly detection | Blockchain + ML/DL | Enhances detection speed, security | Requires computational resources |
| Vallabhaneni and Vaddadi | 2023 | CNN-RNN-based cyberattack detection | CNN, RNN | Captures spatial and temporal dependencies | High computational cost |
| Hattak et al. | 2023 | IoT intrusion detection using visualized network data | DL | Converts raw data into images | Lacks real-time analysis |
| Dabi Dabouabi Dalo Alionsi | 2023 | AI-driven real-time threat detection in IT networks | ML, DL | Effective for complex networks | Requires continuous updating |
| Liu et al. | 2023 | Malicious traffic detection with FlowGAN | FlowGAN, DL | Enhances small sample detection | No threat categorization |
| Mei et al. | 2023 | DL-based anomaly detection | LeNet, LSTM | Robust, high-speed detection | Requires large datasets |
| Sharma et al. | 2023 | Autoencoder-based anomaly detection | Autoencoder | Learns complex patterns | No real-time classification |
| Alguliyev and Shikhaliyev | 2024 | Hybrid CNN-LSTM for network threat classification | CNN, LSTM | High classification accuracy | Requires large labeled datasets |
| Arjunan | 2024 | DL for anomaly detection in big data networks | CNN, LSTM, Transfer Learning | Handles large data volumes | Requires continuous training |
| Cadet et al. | 2024 | AI-powered surveillance threat detection | DL models | Applies to video feeds and sensors | No direct network traffic analysis |
| Faisal et al. | 2024 | DL for OTT traffic classification | CNN, LSTM, Bi-LSTM | Effective QoS management | No threat categorization |
| Zhao et al. | 2024 | CNN-Focal-based IDS for real-time traffic detection | CNN-Focal | Addresses IDS limitations | Needs SoftMax tuning |
CNN, convolutional neural network; DL, deep learning; DNN, deep neural network; IDS, intrusion detection systems; IoT, Internet of Things; LSTM, long short-term memory; ML, machine learning.
The rapid development of the digital world has significantly increased the complexity and frequency of cyber threats, resulting in a significant challenge for traditional IDS. These systems, which rely on signature matching and rule-based frameworks, often fail to address modern and sophisticated cyberattacks. To overcome these limitations, researchers have turned to ML and DL techniques to improve the accuracy, adaptability, and effectiveness of IDS. However, critical challenges persist in this regard. Class imbalance, characterized by a lack of representative attack data, limits the ability of models to effectively learn minority samples. The use of outdated or incomplete datasets further undermines the performance and generalization of IDS models. Real-time performance is another major issue because high computational requirements and scalability constraints hinder practical deployment. Furthermore, existing models cannot adapt to zero-day and dynamic attack patterns without extensive retraining, leaving systems vulnerable to new and evolving threats [12].
Modern cyberattacks pose a significant threat to digital infrastructure, highlighting the urgent need for innovative solutions. The integration of ML and DL techniques into IDS architectures has demonstrated immense potential for overcoming these difficulties. For example, Ashiku and Dagli (2021) proved the effectiveness of DL for enabling real-time anomaly detection in an IDS. Gu and Lu (2021) improved IDS performance through machine-learning techniques and behavior-based threat detection methods. Moreover, Kim et al. (2021) and Akgun et al. (2021) introduced CNN-LSTM and hybrid DNN-CNN-LSTM models, respectively, achieving significant detection capabilities. However, these studies also revealed persistent challenges such as false positives. These findings emphasize the importance of advancing IDS to safeguard digital assets against increasingly sophisticated and complex cyber threats [13].
The evaluation of seven DL models across 35 datasets demonstrated their potential to improve the detection rates while effectively addressing false alarms. Similarly, Liu et al., using hybrid ML and DL models, achieved an impressive 99.91% accuracy on widely used datasets such as NSL-KDD and CIC-IDS, thus highlighting the applicability of these models in real scenarios. In addition, a hybrid CNN-BiLSTM model that addresses DDoS and U2R threats in software-defined networks (SDNs) highlights its scalability and efficiency. These studies provide a solid foundation for using DL-based IDS to tackle cyber threats, despite ongoing challenges such as dataset dependency, computational overhead, and adaptability [14].
The proposed research aims to address the challenges faced by IDS through several key initiatives. First, it is anticipated that IDS accuracy can be increased by utilizing models that enhance threat detection rates while minimizing false positives and negatives. To address class imbalance, this study implements techniques such as synthetic data generation and augmentation, ensuring representative training datasets. To address dynamic and evolving threats, transfer learning and domain adaptation are employed to enable IDS models to efficiently detect zero-day attacks. Finally, the research focuses on proving the robustness and effectiveness of the proposed IDS models in real-world scenarios to ensure their practical application. These integrated goals will help develop resilient, efficient, and scalable IDS capable of meeting the demands of the ever-evolving cybersecurity landscape [15].
System Initialization
Algorithm Initialize_System():
Initialize empty feature selector
Initialize StandardScaler for data normalization
Initialize detection queue with maximum size 1000
Initialize neural network model with following architecture:
Input Layer (features_count nodes)
Dense Layer 1 (128 nodes, ReLU activation)
Batch Normalization
Dropout (0.3)
Dense Layer 2 (64 nodes, ReLU activation)
Batch Normalization
Dropout (0.2)
Dense Layer 3 (32 nodes, ReLU activation)
Batch Normalization
Dense Layer 4 (16 nodes, ReLU activation)
Output Layer (5 nodes, Softmax activation)
Configure model with:
Optimizer: Adam (learning_rate = 0.001)
Loss: Categorical Cross-entropy
Metrics: Accuracy
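As a hedged sketch of this initialization, the Keras code below builds the stated layer stack and compiles it with Adam (learning rate 0.001) and categorical cross-entropy. The use of tensorflow.keras is an assumed implementation choice, and features_count is a placeholder for the number of features retained after selection.

```python
# Sketch of Initialize_System's network in tensorflow.keras (an assumed
# implementation choice); features_count is a placeholder.
from tensorflow.keras import Sequential
from tensorflow.keras.layers import BatchNormalization, Dense, Dropout, Input
from tensorflow.keras.optimizers import Adam

def build_model(features_count: int) -> Sequential:
    model = Sequential([
        Input(shape=(features_count,)),
        Dense(128, activation="relu"), BatchNormalization(), Dropout(0.3),
        Dense(64, activation="relu"),  BatchNormalization(), Dropout(0.2),
        Dense(32, activation="relu"),  BatchNormalization(),
        Dense(16, activation="relu"),
        Dense(5, activation="softmax"),            # five traffic/threat categories
    ])
    model.compile(optimizer=Adam(learning_rate=0.001),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model

model = build_model(features_count=28)  # 28 features, matching the ACBF results below
```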
Feature Selection using Agile Correlation-Based Filter
Algorithm Agile_Correlation_Filter(X, y, threshold):
Initialize empty list selected_features
FOR each feature f in X:
Calculate correlation coefficient r between f and y
IF |r| > threshold:
Add feature index to selected_features
Return selected_features
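A compact Python sketch of this filter follows, assuming numeric features in a NumPy array; the criterion is the absolute Pearson correlation with the target, and the default threshold value is illustrative.

```python
# Sketch of the Agile Correlation-Based Filter step above.
import numpy as np

def agile_correlation_filter(X: np.ndarray, y: np.ndarray, threshold: float = 0.1):
    selected = []
    for f in range(X.shape[1]):
        r = np.corrcoef(X[:, f], y)[0, 1]   # Pearson correlation with the target
        if np.abs(r) > threshold:            # keep only sufficiently correlated features
            selected.append(f)
    return selected
```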
Class Balancing using Artificial Minority Oversampling
Algorithm Balance_Classes(X, y):
Find majority_class_count
Initialize balanced_X, balanced_y as empty lists
FOR each class in unique classes:
Get samples of current class
IF current_class_count < majority_class_count:
samples_needed = majority_class_count - current_class_count
FOR i = 1 to samples_needed:
Select random sample from current class
Add random noise to create synthetic sample
Add synthetic sample to balanced_X
Add class label to balanced_y
Return balanced_X, balanced_y
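The balancing step can be sketched as below. Because the pseudocode only says "add random noise," the Gaussian noise scale is an assumed hyperparameter.

```python
# Sketch of the noise-based oversampling in Balance_Classes.
import numpy as np

def balance_classes(X, y, noise_scale=0.01, seed=0):
    rng = np.random.default_rng(seed)
    X, y = np.asarray(X, float), np.asarray(y)
    classes, counts = np.unique(y, return_counts=True)
    majority = counts.max()
    bx, by = [X], [y]
    for cls, count in zip(classes, counts):
        rows = X[y == cls]
        for _ in range(majority - count):        # samples_needed for this class
            sample = rows[rng.integers(len(rows))]
            synthetic = sample + rng.normal(0, noise_scale, sample.shape)
            bx.append(synthetic[None, :]); by.append([cls])
    return np.vstack(bx), np.concatenate(by)
```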
Data Preprocessing Pipeline
Algorithm Preprocess_Data(X, y, is_training):
IF is_training:
selected_features = Agile_Correlation_Filter(X, y)
Store selected_features for future use
X = Select only selected_features from X
X = StandardScaler.fit_transform(X)
X, y = Balance_Classes(X, y)
Return X, y
ELSE:
X = Select only stored selected_features from X
X = StandardScaler.transform(X)
Return X
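Tying these steps together, the following sketch fits the filter, scaler, and balancer on training data and reuses the stored feature indices and scaler at inference, mirroring the is_training branch above; it assumes the agile_correlation_filter and balance_classes sketches given earlier.

```python
# Sketch of Preprocess_Data: fit on training data, reuse stored state at inference.
from sklearn.preprocessing import StandardScaler

class Preprocessor:
    def fit_transform(self, X, y):                 # is_training = True path
        self.features = agile_correlation_filter(X, y)
        self.scaler = StandardScaler()
        Xs = self.scaler.fit_transform(X[:, self.features])
        return balance_classes(Xs, y)

    def transform(self, X):                        # is_training = False path
        return self.scaler.transform(X[:, self.features])

preprocessor = Preprocessor()                      # shared instance for later sketches
```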
Real-Time Monitoring System
Algorithm Monitor_Network():
WHILE is_monitoring:
IF detection_queue is not empty:
packet = Get next packet from detection_queue
processed_packet = Preprocess_Data(packet)
threat_category = Predict_Threat(processed_packet)
Handle_Threat(threat_category, packet)
Wait for 1 ms to prevent CPU overload
Threat Detection and Classification
Algorithm Predict_Threat(network_data):
processed_data = Preprocess_Data(network_data)
predictions = neural_network.predict(processed_data)
threat_category = argmax(predictions)
Return threat_category
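A combined sketch of Monitor_Network and Predict_Threat is given below: a bounded queue feeds packet feature vectors to the trained model, the softmax output is reduced with argmax, and a 1 ms pause prevents CPU overload. The preprocessor (an instance of the Preprocessor sketch above), model, and handle_threat names are assumed from the surrounding sketches.

```python
# Combined sketch of Monitor_Network / Predict_Threat; queue items are assumed
# to be numeric feature vectors. `handle_threat` is defined in the next sketch.
import queue
import time
import numpy as np

detection_queue = queue.Queue(maxsize=1000)       # bounded, as in initialization

def predict_threat(network_data: np.ndarray) -> int:
    processed = preprocessor.transform(network_data.reshape(1, -1))
    predictions = model.predict(processed, verbose=0)
    return int(np.argmax(predictions, axis=1)[0]) # index of the threat category

def monitor_network(is_monitoring=lambda: True):
    while is_monitoring():
        if not detection_queue.empty():
            packet = detection_queue.get()
            handle_threat(predict_threat(packet), packet)
        time.sleep(0.001)                         # 1 ms wait to prevent CPU overload
```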
Threat Response System
Algorithm Handle_Threat(threat_category, packet_data):
Initialize threat_categories map:
0: "Normal Traffic"
1: "DDoS Attack"
2: "Port Scanning"
3: "Data Exfiltration"
4: "Malware Communication"
IF threat_category != 0:
Log_Threat(threat_categories[threat_category], packet_data)
Implement_Countermeasures(threat_category)
Countermeasures Implementation
Algorithm Implement_Countermeasures(threat_category):
SWITCH threat_category:
CASE 1: // DDoS Attack
Activate rate limiting
Update firewall rules
Scale resources if needed
CASE 2: // Port Scanning
Block source IP
Increase logging for source
Alert security team
CASE 3: // Data Exfiltration
Block outbound connection
Log data patterns
Alert data security team
CASE 4: // Malware Communication
Isolate affected system
Block C&C communications
Initiate malware scan
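The response logic above can be sketched as a category map plus a countermeasure dispatch table. The logging and action functions below are illustrative stand-ins for site-specific firewall, SIEM, and quarantine integrations, which the pseudocode leaves abstract.

```python
# Sketch of Handle_Threat / Implement_Countermeasures with placeholder actions.
THREAT_CATEGORIES = {
    0: "Normal Traffic",
    1: "DDoS Attack",
    2: "Port Scanning",
    3: "Data Exfiltration",
    4: "Malware Communication",
}

COUNTERMEASURES = {
    1: ["activate_rate_limiting", "update_firewall_rules", "scale_resources"],
    2: ["block_source_ip", "increase_logging", "alert_security_team"],
    3: ["block_outbound_connection", "log_data_patterns", "alert_data_team"],
    4: ["isolate_system", "block_c2_traffic", "initiate_malware_scan"],
}

def log_threat(category: str, packet_data) -> None:
    print(f"[ALERT] {category}: {packet_data}")   # stand-in for a SIEM/log sink

def dispatch(action: str, packet_data) -> None:
    print(f"[ACTION] {action}")                   # stand-in for real controls

def handle_threat(threat_category: int, packet_data) -> None:
    if threat_category != 0:                      # category 0 is normal traffic
        log_threat(THREAT_CATEGORIES[threat_category], packet_data)
        for action in COUNTERMEASURES[threat_category]:
            dispatch(action, packet_data)
```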
Training Process
Algorithm Train_Model(X_train, y_train):
X_processed, y_processed = Preprocess_Data(X_train, y_train, is_training=True)
Configure early_stopping:
monitor = validation_accuracy
patience = 10
restore_best_weights = True
Train model with parameters:
data = (X_processed, y_processed)
validation_split = 0.2
epochs = 100
batch_size = 32
callbacks = [early_stopping]
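A sketch of Train_Model with the stated settings (val_accuracy monitoring, patience of 10, best-weight restoration, 20% validation split, 100 epochs, batch size 32) follows; tensorflow.keras is again an assumed implementation choice, and labels are one-hot encoded to match the categorical cross-entropy loss.

```python
# Sketch of Train_Model; uses the shared `preprocessor` instance defined earlier.
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.utils import to_categorical

def train_model(model, X_train, y_train):
    X_proc, y_proc = preprocessor.fit_transform(X_train, y_train)
    early_stopping = EarlyStopping(monitor="val_accuracy", patience=10,
                                   restore_best_weights=True)
    return model.fit(X_proc, to_categorical(y_proc, num_classes=5),
                     validation_split=0.2, epochs=100, batch_size=32,
                     callbacks=[early_stopping])
```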
System Usage Flow
Initialize_System()
Train_Model(training_data)
Start Monitor_Network() in separate thread
FOR each incoming network packet:
Add packet to detection_queue
System automatically:
Processes packets
Detects threats
Implements countermeasures
Periodically retrain model with new data
The proposed DL algorithm for real-time threat identification and classification in network data utilizes innovative methods to improve the efficiency of IDS. The system begins with a multilayered neural network model that uses softmax in the output layer for classification and ReLU activation in the hidden layers [16,17]. To minimize the computational cost and ensure that the model concentrates on the most relevant attributes, agile correlation-based filtering is employed for feature selection. The Artificial Minority Data Oversampling Method (AMDOSM) addresses class imbalance by creating synthetic samples for underrepresented classes, ensuring balanced training. Through data standardization and class balancing, preprocessing allows the model to effectively manage a variety of network traffic. Real-time monitoring is implemented as a continuous process that preprocesses packets, utilizes the trained model to predict threats, and applies the appropriate countermeasures within milliseconds, ensuring low latency [18].
Efficiency and adaptability are the two main advantages of the algorithm. Threats such as DDoS attacks, port scanning, data exfiltration, and malware communication are mitigated by automated countermeasures driven by the DL model, enhancing real-time performance. Validation splits for performance assessment and early stopping to avoid overfitting are part of the training process [19]. The system also allows retraining on fresh data, guaranteeing that it remains sensitive to changing risks. With its strong feature selection, real-time adaptability, and actionable response mechanisms, this architecture guarantees thorough coverage of network security requirements and defends against advanced cyber threats.
The proposed method is structured into four distinct layers (Figure 2), which are essential for the threat detection process [20,21]. Layer 1, the Network Layer, provides the foundation for capturing and preprocessing raw network traffic data across various protocols. Layer 2, the ML and DL Layer, uses ML algorithms to detect patterns and anomalies, whereas DL models extract complex features to provide precise traffic classification. Layer 3, the Application Layer, utilizes an ACBF to prioritize the most relevant features, thereby enhancing the accuracy and efficiency of threat detection [22]. Finally, Layer 4, the User Support Layer, presents analysis results through an intuitive interface, which allows users to implement countermeasures and prevent future attacks. This multilayered design seamlessly integrates network traffic analysis, ML, DL, and feature selection to provide a comprehensive and effective real-time solution for threat detection and classification, ensuring robust protection against evolving cyber threats [17,18].

Figure 2. Layered architecture for real-time threat detection with ML and DL. DL, deep learning; ML, machine learning.
The Network Layer provides the data acquisition and preprocessing foundation designed to capture raw network traffic in real time. This layer uses network sensors strategically designed to monitor traffic across key protocols such as TCP, UDP, and HTTP. The preprocessing procedures include packet parsing, protocol dissection, and time stamping, ensuring that the raw traffic is properly formatted for subsequent analysis. The population for this layer encompasses all network packets traversing the monitored environment, including benign activities, malicious actors, and unknown sources, thereby capturing a wide range of network traffic. The primary outcomes include successful real-time capture of all incoming and outgoing traffic, and efficient preprocessing to minimize data loss while maintaining integrity [23]. The statistical approaches employed at this level involve descriptive statistics to analyze the volume, types, and frequency of captured traffic across various protocols. Moreover, time-series analysis is employed to identify trends or spikes in network activity that could indicate abnormal behavior or potential threats.
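As one possible realization of this capture-and-parse step, the sketch below uses Scapy (an assumed tooling choice; the layer itself only specifies taps/SPAN mirroring, packet parsing, protocol dissection, and time stamping) to extract per-packet fields and feed the detection queue defined earlier.

```python
# Hedged sketch of real-time capture with Scapy (assumed tooling; requires
# capture privileges). Parsed records would still pass through flow-feature
# extraction before reaching the numeric preprocessing pipeline.
from scapy.all import sniff, IP, TCP, UDP

def parse_packet(pkt):
    if IP not in pkt:
        return
    record = {
        "timestamp": float(pkt.time),             # time stamping
        "src": pkt[IP].src, "dst": pkt[IP].dst,
        "proto": pkt[IP].proto,                   # protocol dissection
        "length": len(pkt),
    }
    if TCP in pkt:
        record["sport"], record["dport"] = pkt[TCP].sport, pkt[TCP].dport
    elif UDP in pkt:
        record["sport"], record["dport"] = pkt[UDP].sport, pkt[UDP].dport
    detection_queue.put(record)                   # feed the monitoring queue

sniff(prn=parse_packet, store=False)              # capture loop on the mirror port
```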
The ML and DL Layer integrates advanced ML algorithms, such as Random Forest and Support Vector Machines, alongside DL architectures, such as CNNs and LSTMs, to analyze preprocessed network traffic data [24]. This layer focuses on identifying patterns indicative of threats and general network behavior. Training datasets, such as NSL-KDD or CIC-IDS, were utilized to develop predictive models, while real-time traffic data were used for inference and anomaly detection. The population comprised labeled datasets used for training (e.g., known attack signatures and benign traffic) and live traffic data for testing and real-time predictions, ensuring the system's ability to respond to both known and emerging threats. The key results of this layer include high classification accuracy for distinguishing between malicious and benign traffic and enabling the real-time detection of anomalies based on learned patterns. Statistical approaches include confusion matrix analysis to evaluate model performance metrics such as precision, recall, and F1 score, and ROC-AUC curves to evaluate the discrimination abilities of the models.
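The evaluation described here can be sketched with scikit-learn (an assumed implementation), given arrays of true labels, predicted labels, and per-class predicted probabilities:

```python
# Sketch of the confusion-matrix and ROC-AUC evaluation for the ML/DL layer.
from sklearn.metrics import (confusion_matrix,
                             precision_recall_fscore_support, roc_auc_score)

def evaluate(y_true, y_pred, y_proba):
    cm = confusion_matrix(y_true, y_pred)                    # per-class error structure
    precision, recall, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="macro")
    auc = roc_auc_score(y_true, y_proba, multi_class="ovr")  # one-vs-rest ROC-AUC
    return {"confusion_matrix": cm, "precision": precision,
            "recall": recall, "f1": f1, "roc_auc": auc}
```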
The Application Layer utilizes an ACBF to refine feature selection, focusing on identifying the most crucial features from network traffic data [25]. This design reduces dimensionality while enhancing the capabilities of predictive models. The features were chosen based on their statistical correlation with specific threats, prioritizing those with the highest level of relevance. The population in this layer encompasses all extracted features from the ML and DL models in the previous layer, each displaying measurable attributes, such as packet size, connection duration, or source IP. The main results include noise reduction in the dataset through effective feature selection, which improves the computational efficiency and highlights the most prominent features for accurate threat prediction [26]. The statistical approach involves calculating correlation coefficients to determine the relationships between features and target outcomes, ranking feature importance using techniques such as recursive feature elimination (RFE) or principal component analysis (PCA), and evaluating the impact of reduced feature sets on model performance metrics such as accuracy and latency.
The ACBF is an inclusive feature selection algorithm that enhances the feature set for cyber threat detection by balancing precision and dimensionality reduction. It begins by calculating pairwise correlations between features and the target variable using classical correlation coefficient methods [27]. Features with high redundancy are removed by comparing their significance scores relative to the target variable. The algorithm then employs RFE, training an ML model and eliminating the least prominent features until a predefined feature count is reached. Alternatively, PCA is employed to transform the features into principal components, selecting those that capture the maximum variance [28]. The performance of both methods is evaluated using metrics such as precision, latency, and feature reduction ratio, and the method with superior performance is preferred. The outcome is an optimized set of key features that can be utilized efficiently and effectively in a real-time cyber threat classification system.
Input:
Features set: Set of all extracted features from ML or DL methods
Target variable: Cyber Threat classification labels
Correlation_threshold: Minimum correlation coefficient threshold
Max_features: Maximum number of features to select
Output:
Selected_features: Optimal set of features for threat detection
// Initialize data structures
selected_features = empty_dataset
feature_scores = empty_map
correlation_matrix = empty_matrix
Function Calculate_Correlation_Matrix():
For each feature_i in features_set:
For each feature_j in features_set:
correlation_matrix[i][j] = Calculate_Pearson_Correlation(feature_i, feature_j)
// Calculate the correlation with the target.
feature_scores[feature_i] = Calculate_Pearson_Correlation(feature_i, target_variable)
Function Remove_Redundant_Features():
For each feature_i in features_set:
For each feature_j in features_set:
If i ≠ j AND correlation_matrix[i][j] > correlation_threshold:
If feature_scores[feature_i] > feature_scores[feature_j]:
Remove feature_j from features_set
Else:
Remove feature_i from features_set
Function Apply_RFE():
model = Initialize_ML_Model()
remaining_features = features_set
While size(remaining_features) > max_features:
Train model on remaining_features
importance_scores = Get_Feature_Importance(model)
least_important = FindFeatureWithLowestImportance(importance_scores)
Remove least_important from remaining_features
Return remaining_features
Function Apply_PCA():
standardized_data = Standardize_Features(features_set)
principal_components = Calculate_PCA(standardized_data)
explained_variance = Calculate_Explained_Variance(principal_components)
Return Select_Optimal_Components(principal_components, explained_variance)
// Main procedure
Procedure Select_Optimal_Features():
Calculate_Correlation_Matrix()
Remove_Redundant_Features()
rfe_features = Apply_RFE()
pca_components = Apply_PCA()
// Evaluate the performance metrics for both approaches
rfe_metrics = Evaluate_Performance(rfe_features)
pca_metrics = Evaluate_Performance(pca_components)
// Select the final feature set based on performance metrics
If rfe_metrics.score > pca_metrics.score:
selected_features = rfe_features
Else:
selected_features = pca_components
// Validate the final selection
performance_metrics = {
"accuracy": Calculate_Accuracy(selected_features),
"latency": Measure_Latency(selected_features),
"feature_reduction_ratio": len(selected_features) / len(features_set)
}
Return selected_features, performance_metrics
End Algorithm
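A condensed scikit-learn sketch of the RFE/PCA comparison in this procedure is shown below; the random forest scoring model, 5-fold cross-validation, and max_features value of 28 (matching the feature count reported in the results) are illustrative assumptions, and X is assumed to be already standardized.

```python
# Sketch of the RFE vs. PCA comparison inside the ACBF procedure.
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score

def select_optimal_features(X, y, max_features=28):
    scorer = RandomForestClassifier(n_estimators=100, random_state=0)
    rfe = RFE(scorer, n_features_to_select=max_features).fit(X, y)
    X_rfe = X[:, rfe.support_]                       # least-important features pruned
    X_pca = PCA(n_components=max_features).fit_transform(X)  # variance-ranked axes
    rfe_score = cross_val_score(scorer, X_rfe, y, cv=5).mean()
    pca_score = cross_val_score(scorer, X_pca, y, cv=5).mean()
    return (X_rfe, "rfe") if rfe_score > pca_score else (X_pca, "pca")
```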
The User Support Layer is intended to convert analytical outputs into effective insights for security personnel. This layer provides a graphical interface that displays detected threats, categorizes them by type, and prioritizes responses based on severity and potential impact. This layer's population includes security analysts, network administrators, and system operators who use it to monitor and mitigate threats. Key results include intuitive visualization of threats and related risks and rapid implementation of countermeasures, such as blocking malicious IPs or isolating compromised systems [29]. The statistical approach involves gathering user feedback through surveys to assess the usability and effectiveness of the interface, analyzing response time metrics to measure how quickly analysts can act on alerts, and tracking threat resolution rates to evaluate the efficacy of the recommended countermeasures.
The Integrated Approach utilizes four architectural layers to provide seamless data flow from raw capture to actionable threat intelligence. The study design assessed the performance of the system in handling live network traffic, detecting anomalies, and providing real-time solutions. The population includes simulated environments, utilizing datasets such as NSL-KDD and CIC-IDS, and live enterprise networks to ensure practical applicability. Key findings focus on the accuracy of the system in detecting and categorizing threats, its scalability and efficiency in managing high-speed and large-scale networks, and its ability to minimize false positives and negatives compared with baseline systems [30]. The statistical approach involves end-to-end validation through metrics such as throughput, latency, and detection rate, and comparative benchmarking against traditional IDS solutions. Statistical modeling identifies bottlenecks within the system, enabling optimization for improved performance [19].
The real-time threat-detection system utilizes a structured, phased approach to ensure efficiency and accuracy. Phase 1 focused on system setup and data collection. The physical and virtual environments were configured, and essential software tools for network traffic monitoring were implemented. A data-collection framework was developed to continuously gather data from various networks in accordance with privacy and security standards. Network traffic is captured through the configuration of taps or SPAN ports, which mirror traffic to packet capture systems that inspect and store packets. Real-time stream processing is used to parse and filter data, and key metrics such as flow statistics, protocol information, packet headers, and behavior patterns are extracted for further analysis.
Phase 2 emphasized data preprocessing. The data were cleaned by removing incomplete packets and imputing missing values through interpolation or default values. The timestamp data were normalized; categorical variables were transformed into numerical formats; and duplicate entries were eliminated. Feature engineering techniques, such as agile correlation-based filtering and feature transformation, are used to ensure that numerical features are scaled and categorical features are encoded. AMDOSM generates synthetic samples for underrepresented classes to ensure a balanced dataset.
Phase 3 focused on the implementation of the neural network. The model architecture comprises an input layer with 41 features, followed by several dense layers with ReLU activation, batch normalization, and dropout for regularization. The output layer uses softmax activation for multiclass classification. The model was trained with a learning rate of 0.001, a batch size of 32, and 100 epochs. Early stopping was applied to prevent overfitting. The data were split into training, validation, and test sets at a 60:20:20 ratio to ensure effective training and evaluation.
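As a small sketch of the 60:20:20 split, two successive calls to scikit-learn's train_test_split (an assumed utility; X and y stand for the preprocessed features and labels) first hold out 40% of the data and then halve it into validation and test sets:

```python
from sklearn.model_selection import train_test_split

# First split holds out 40% of the data; the second halves that 40% into
# validation and test sets, yielding the 60:20:20 ratio.
X_train, X_tmp, y_train, y_tmp = train_test_split(
    X, y, test_size=0.4, stratify=y, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=0.5, stratify=y_tmp, random_state=0)
```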
In Phase 4, the real-time detection system was deployed. A packet capture module is initialized to extract features for instant analysis, followed by a preprocessing module that normalizes and transforms features in real time. The prediction module processes data in batches and identifies threats using confidence scoring. The threat response system categorizes traffic as DDoS attacks, port scanning, data exfiltration, malware communication, or normal traffic, taking actions such as updating security rules, blocking malicious IP addresses, dispatching alerts, and logging incidents.
Phase 5 encompassed system integration and testing. The data pipeline, detection, and response systems were integrated to provide seamless communication. Performance metrics were defined, and system health checks were provided for monitoring. Unit testing, integration testing, and security testing were performed to verify the system's functionality, cohesion, and security, ensuring that it meets performance and security standards.
Phase 6 encompassed performance optimization. System performance is enhanced by optimizing CPU/GPU usage, memory management, and network throughput. The model was also improved by adjusting hyperparameters, applying model compression techniques, and increasing inference speed for faster predictions. Regular monitoring tracks system performance and resource utilization, and maintenance activities, such as updating models, applying patches, and cleaning the database, are performed.
Phase 7 encompassed documentation and training. Technical documentation describes the system architecture, API references, and configuration guides, while user manuals and troubleshooting guides were developed for end users. A training program teaches administrators system configuration and maintenance, while end users are instructed in operating the system and processing alerts, ensuring a proper response to detected threats.
The system's success metrics included a detection accuracy of 96.35%, a false positive rate of <1%, and a processing delay of under 100 ms. The system is expected to provide 99.9% uptime, with resource utilization below 80% and response times under 1 s [20].
A performance analysis of the implemented real-time threat detection system showed remarkable results across multiple evaluation criteria. The system achieved an overall detection accuracy of 96.35%, with high precision (95.82%), recall (94.73%), and F1 score (95.27%), and a remarkable area under the ROC curve (AUC) of 0.983. The performance breakdown by threat category revealed that DDoS attacks were detected with the highest precision (98.21%) and recall (97.54%), whereas malware communication and data exfiltration also demonstrated impressive performance, confirming the system's ability to accurately categorize various threats. The real-time processing capabilities of the system are efficient, processing each packet in an average of 47 ms, with a peak throughput of 21,276 packets/s. During peak load, memory utilization reached 1.8 GB and CPU utilization averaged 65%, indicating its effectiveness in high-demand environments.
The ACBF, a crucial feature selection technique, significantly improved the system. It reduced the feature set from 41 to 28 features, retaining 99.2% of the original information content and reducing processing time by 31.7%. This improvement also shortened model training time by 42.3%. Analysis of feature importance revealed that network flow statistics, protocol properties, and temporal patterns were the most influential factors in threat detection, contributing to the overall accuracy of the system.
The system utilizes the AMDOSM to balance class distributions. Prior to balancing, the normal traffic class dominated the dataset with 62.5% of the samples; after balancing, each class was equally represented, ensuring more accurate detection of minority-class threats such as malware and data exfiltration. A comparative analysis with existing solutions showed that the proposed system outperformed traditional ML-based IDS and conventional DL methods, achieving higher accuracy (96.35%) and faster processing (47 ms).
With regard to scalability and resource utilization, the system can manage up to 50,000 concurrent connections with linear scaling and graceful degradation beyond this limit. It provided a remarkable uptime of 99.98% over a 3-month period. The resource consumption of the system remained within acceptable limits, with CPU utilization reaching 83% and memory utilization reaching 2.3 GB under peak load, ensuring that it can effectively scale to meet enterprise-level security requirements.
The key findings from the implementation include outstanding detection accuracy, effective feature selection, balanced learning, and operational efficiency. The system not only reduced the false positive rate to 0.87% but also provided real-time processing capabilities with minimal latency. Its scalable architecture and effective resource utilization render it suitable for deployment in large-scale environments.
However, the system has some limitations, including its ability to detect zero-day attacks, its dependence on high-quality training data, and the resource-intensive nature of smaller deployments. Future enhancements may include integrating unsupervised learning for zero-day attack detection, federated learning for distributed training, developing light models for edge deployment, and enhancing automated response mechanisms.
The practical implications of this system are significant, particularly in terms of enterprise security, infrastructure resilience, and economic impact. The system provides enhanced protection against cyber threats, reduces response times to security incidents, and reduces the operational security costs. Furthermore, it enhances network infrastructure resilience by adapting to threats, and its successful deployment can result in reduced losses from cyberattacks, lower insurance premiums, and improved stakeholder confidence.
In conclusion, the proposed system satisfies the requirements of real-time threat detection with high accuracy. The combination of an optimized neural network architecture, effective feature selection, and class balancing techniques led to a robust and effective security solution. The ability of the system to process network traffic in real time while providing high-quality accuracy makes it a suitable solution for enterprise-level cybersecurity.
As shown in Figure 3, the implemented system achieved remarkable detection performance across various evaluation criteria. The overall accuracy of the system was 96.35%, with a precision of 95.82%, a recall of 94.73%, and an F1 score of 95.27%. Additionally, the system had an excellent AUC of 0.983, reflecting strong overall classification performance.

Figure 3. Real-time threat detection and categorization performance in the network. AUC, area under the ROC curve.
The system performed exceptionally well across all traffic categories. For DDoS attacks, the system achieved a precision of 98.21%, a recall of 97.54%, and an F1 score of 97.87%. Port scanning detection yielded a precision of 96.43%, a recall of 94.82%, and an F1 score of 95.62%. The system detected data exfiltration with a precision of 94.67%, a recall of 93.89%, and an F1 score of 94.28%. Malware communication was detected with a precision of 94.12%, a recall of 92.84%, and an F1 score of 93.47%. Finally, normal traffic was accurately categorized with a precision of 97.15%, a recall of 96.58%, and an F1 score of 96.86%. These results demonstrate the robustness of the system in distinguishing between different traffic types while providing high detection accuracy.
In Figure 4, the line graph provides a detailed breakdown of the threat-detection system across multiple threat categories. The system achieved an impressive overall accuracy of 96.35%, demonstrating its ability to identify threats with a high level of precision. In addition to accuracy, key metrics such as precision, recall, and F1 score were consistently high across all threats, demonstrating the system's efficient performance in detecting distinct types of network attacks.

Figure 4. Performance breakdown by threat category.
Precision measures the proportion of correctly identified threats among all cases flagged as threats by the system. High precision shows that the system is effective at reducing false positives, ensuring that most flagged threats are genuine.
Recall assesses a system’s ability to correctly identify all actual network threats. A high recall means that the system can detect most threats present, thereby reducing false negatives.
The F1 score is the harmonic mean of precision and recall, providing a single metric that combines both. A high F1 score indicates that the system balances accurately identifying threats with minimizing errors.
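For reference, these three metrics can be written in terms of true positives (TP), false positives (FP), and false negatives (FN):

```latex
\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad
\mathrm{Recall} = \frac{TP}{TP + FN}, \qquad
F_1 = 2 \cdot \frac{\mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}
```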
An AUC value of 0.983 further confirms the system's performance. The AUC measures the system's ability to distinguish between classes (i.e., to separate threat from non-threat traffic). An AUC of 0.983 is considered excellent, indicating that the system has exceptionally high discriminative capability and can effectively differentiate between normal and malicious traffic.
In summary, the high values across all performance metrics (accuracy, precision, recall, F1 score, and AUC) demonstrate that the threat detection system is robust, reliable, and effective in detecting and categorizing network threats in real time with minimal errors.
In Figure 5, the bar chart illustrates the relative importance of each stage in the real-time threat identification and classification process. Data preprocessing, accounting for 30% of the total effort, is the most pivotal stage, involving the cleaning, transformation, and preparation of unprocessed data for analysis. Behavioral analysis follows at 25% of the effort, emphasizing the analysis of user behavior patterns to detect potential cyber threats. Threat classification requires 20% of the effort to classify cyber threats based on their severity and type. Real-time actions, which account for 15%, involve immediate measures to mitigate identified cyber threats, while key performance metrics, requiring 10%, assess the efficacy of the overall process. The chart highlights the importance of data preprocessing and behavioral analysis, with real-time actions and performance metrics requiring comparatively less effort.

Figure 5. Stages of real-time threat identification and categorization.
This study proposes an innovative approach to identifying and classifying real-time threats in network traffic using DL behavioral analysis. By integrating an optimized neural network with advanced data preprocessing, feature selection techniques, and class imbalance handling, the proposed system addresses the challenges in network intrusion detection. The empirical results demonstrate the excellent performance of the system, achieving a 96.35% accuracy rate and surpassing state-of-the-art methods in terms of precision, recall, and minimal false negatives.
The system’s ability to adapt to evolving attack patterns and provide robust security ensures that it can detect numerous cyber threats while providing high performance, even in high-speed environments. This research is aligned with global objectives, which emphasize the importance of secure and sustainable technological advances in digital infrastructure.
Future work will focus on enhancing this framework by exploring hybrid DL models, integrating transfer learning for improved generalization, and incorporating explainable AI (XAI) techniques to ensure transparency and provide actionable insights for cybersecurity professionals. Moreover, efforts will be made to scale the system for deployment in real-time, large-scale networks, ensuring its relevance and effectiveness in addressing ever-increasing and sophisticated cyber threat situations.
In summary, the proposed DL-based framework offers a promising solution to the critical challenge of real-time network threat detection, safeguarding digital assets, and contributing to the creation of secure, reliable, and sustainable technological ecosystems.
The potential for enhancing the proposed DL-based network threat detection system includes several key areas of improvement. One of the main focuses is the development of hybrid DL models that combine CNNs and Recurrent Neural Networks, along with other advanced architectures, such as LSTM networks. This integration enhances the system's ability to detect spatial and temporal patterns in network traffic, thereby increasing its anomaly detection and overall threat identification accuracy. In addition, ensemble learning techniques will be used to combine multiple models, such as the EFNN-TD framework and hybrid DL models, to leverage the strengths of each architecture and provide more robust and adaptive detection capabilities. This approach reduces false positives and provides greater detection accuracy across multiple types of cyberattacks.
The integration of transfer learning and domain adaptation enables the model to learn from previous attacks and apply this knowledge to detect new, evolving threats, such as zero-day attacks. This enhances the model’s generalization and adaptability across network environments and datasets. In addition, scalability and real-time performance will be prioritized to ensure that the system can handle large-scale, high-speed networks while maintaining effective and timely threat detection. Optimizing the DL model for faster processing and reduced computational costs is essential to ensure the performance of the system in complex network environments.
The implementation of XAI techniques is another crucial step in enhancing the transparency and interpretation of the system’s decision-making process. By providing accurate information about the model’s predictions, XAI will help cybersecurity professionals understand the rationale behind the detected threats, making it easier to take appropriate mitigation measures. Additionally, future work will focus on developing adaptive and autonomous threat mitigation mechanisms that enable the system to respond in real time to identify threats by dynamically adjusting security measures or triggering predefined countermeasures based on the severity of the attack.
Efforts will also be aimed at detecting and mitigating advanced persistent threats (APTs), which are sophisticated long-term attacks intended to infiltrate and impede critical infrastructure. Enhancing the system's ability to respond to these sophisticated threats will involve the development of more advanced models capable of detecting subtle attack patterns over extended periods. Furthermore, ensuring data privacy and security during the detection process is crucial, particularly when dealing with sensitive network traffic data. Federated learning methods will be explored to preserve privacy while still benefiting from the collective intelligence of the network.
Implementing cloud and edge computing will enhance the flexibility and efficiency of the system. By utilizing cloud resources for model training and edge devices for real-time inference, the system can effectively scale in distributed network environments, enabling faster and more efficient threat detection. Finally, real-world deployment and continuous evaluation in live network environments will be crucial for testing the hands-on performance of the system. Extensive testing will assist in determining a model and ensuring its readiness to meet the evolving challenges of modern cybersecurity. Addressing these challenges will enable the proposed threat detection system to become a more resilient, scalable, and adaptive solution capable of addressing the increasingly complex and dynamic nature of cyber threats.
The proposed DL-based network threat detection system, while promising, faces several limitations. First, the reliance on large labeled datasets for training poses a challenge, particularly in detecting emerging or zero-day attacks. Although techniques such as oversampling address class imbalance, rare attack types may still degrade performance. Furthermore, the computational resources required for real-time detection in high-speed, large-scale networks may limit scalability, particularly in resource-constrained environments. The system also struggles to adapt to new, previously unseen attack techniques, which require retraining for effective zero-day attack detection.
Moreover, while the implementation of XAI is beneficial for transparency, the complexity of DL models may still make the system's decision-making process difficult for nonexperts to understand. The use of federated learning for privacy protection presents obstacles in terms of communication overhead and model convergence across decentralized sources. Finally, the cloud and edge computing infrastructure enhances scalability but introduces potential vulnerabilities and data privacy concerns, making deployment and maintenance more complex. To improve the robustness and applicability of the system in real-world situations, it is crucial to address these limitations.
