
Robustness Enhancement of a Dynamic Object Model against Adversarial Attacks

Open Access | Dec 2025

Abstract

The aim of this work is to find a compromise between the accuracy with which a model reproduces the behaviour of a nominal (Wiener-type) object, examined in a laboratory under noise-free conditions, and the model's robustness to intentional external attacks disrupting the input signal. By linearizing the model at the operating points and replacing the computationally expensive minimax optimization criterion with a simpler one, we develop a technique that leads to models robust to adversarial attacks of bounded intensity. Simulation experiments demonstrate the robustness of the obtained models against adversarial disruptions, highlighting the method's potential applications in fields requiring high resilience, such as control systems and safety-critical environments.
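The idea sketched in the abstract can be illustrated with a small, self-contained example: a Wiener system (a linear FIR block followed by a static nonlinearity) is identified from noise-free data, and the exact minimax criterion over bounded input perturbations is replaced by a first-order surrogate built from the model's input sensitivity at the operating points. The sketch below is not the authors' implementation; every concrete choice (the FIR coefficients, the tanh nonlinearity, the attack budget eps, the finite-difference sensitivity, and the random-search fit) is an assumption made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Laboratory" (noise-free, attack-free) Wiener object:
# a linear FIR block followed by a static nonlinearity.
g_true = np.array([1.0, 0.6, 0.3])          # assumed FIR coefficients

def wiener(u, g):
    v = np.convolve(u, g)[: len(u)]          # linear block output
    return np.tanh(v)                        # static nonlinearity

N, eps = 500, 0.2                            # sample size and attack budget (assumed)
u = rng.uniform(-1.0, 1.0, N)
y = wiener(u, g_true)                        # nominal training data

# Parametric model: FIR weights g plus an output gain a.
def model(theta, u):
    g, a = theta[:-1], theta[-1]
    return a * np.tanh(np.convolve(u, g)[: len(u)])

def nominal_loss(theta):
    return np.mean((y - model(theta, u)) ** 2)

def robust_loss(theta, h=1e-4):
    # Linearized surrogate of the minimax criterion: the inner maximization
    # over |delta| <= eps is approximated to first order using the model's
    # input sensitivity at the current operating points.
    base = model(theta, u)
    sens = (model(theta, u + h) - base) / h                 # finite-difference sensitivity
    worst = base - eps * np.sign(y - base) * np.abs(sens)   # first-order worst-case prediction
    return np.mean((y - worst) ** 2)

def fit(loss, iters=3000, step=0.05):
    # Crude random local search; keeps the sketch dependency-free.
    theta = rng.normal(size=4)
    best = loss(theta)
    for _ in range(iters):
        cand = theta + step * rng.normal(size=4)
        c = loss(cand)
        if c < best:
            theta, best = cand, c
    return theta

theta_nom = fit(nominal_loss)
theta_rob = fit(robust_loss)

# Compare both models when the object's input is attacked with a bounded signal.
delta = eps * np.sign(rng.normal(size=N))
y_att = wiener(u + delta, g_true)
for name, th in [("nominal", theta_nom), ("robust ", theta_rob)]:
    err = np.mean((y_att - model(th, u + delta)) ** 2)
    print(f"{name} model, MSE under attack: {err:.4f}")
```

In this surrogate, robustness is traded against nominal accuracy through the single scalar eps, mirroring the compromise described in the abstract, while the inner maximization is never solved explicitly.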

DOI: https://doi.org/10.61822/amcs-2025-0042 | Journal eISSN: 2083-8492 | Journal ISSN: 1641-876X
Language: English
Page range: 591 - 600
Submitted on: Dec 24, 2024 | Accepted on: Jul 4, 2025 | Published on: Dec 15, 2025
In partnership with: Paradigm Publishing Services
Publication frequency: 4 issues per year

© 2025 Wojciech Sopot, Paweł Wachel, Grzegorz Mzyk, published by University of Zielona Góra
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.