Abstract
The aim of this work is to find a compromise between the accuracy with which a model reproduces the behaviour of a nominal (Wiener-type) system identified in the laboratory under noise-free conditions and the model's robustness to intentional external attacks that disrupt the input signal. By linearizing the model at its operating points and replacing the computationally expensive minimax optimization criterion with a simpler one, we derive a technique that yields models robust to adversarial attacks of bounded intensity. Simulation experiments confirm the robustness of the resulting models against adversarial disruptions, highlighting the method's potential in fields requiring high resilience, such as control systems and safety-critical environments.