DefenseFea: An Input Transformation Feature Searching Algorithm Based Latent Space for Adversarial Defense

Open Access | Feb 2024

Abstract

Deep neural network based image classification systems can suffer from adversarial attack algorithms, which generate input examples by adding deliberately crafted yet imperceptible noise to original input images. These crafted examples can fool such systems and further threaten their security. In this paper, we propose to use the latent space to protect image classification. Specifically, we train a feature searching network to make up the difference between adversarial examples and clean examples with a label-guided loss function. We name the method DefenseFea (input transformation based defense with a label-guided loss function). Experimental results show that DefenseFea can recover adversarial examples, achieving a success rate of about 99% on a specific set of 5000 images from ILSVRC 2012. This study plays a positive role in the further investigation of the relationship between adversarial examples and clean examples.
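The abstract describes training a feature searching network that maps an adversarial example's latent representation back toward the clean one under a label-guided loss. The sketch below illustrates one plausible reading of that idea; it is not the authors' implementation, and all names (FeatureSearcher, label_guided_loss, encoder, head, the weighting alpha) are illustrative assumptions.

```python
# Hypothetical sketch of the idea in the abstract: a small feature-searching
# network refines the adversarial latent code, trained with a latent
# reconstruction term plus a label-guided classification term computed
# through a frozen classifier. Names and hyperparameters are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureSearcher(nn.Module):
    """Residual MLP that predicts a correction to a latent feature vector."""

    def __init__(self, dim: int = 2048, hidden: int = 1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, dim),
        )

    def forward(self, z_adv: torch.Tensor) -> torch.Tensor:
        # Add the predicted correction to the adversarial latent code.
        return z_adv + self.net(z_adv)


def label_guided_loss(z_hat, z_clean, logits, labels, alpha: float = 1.0):
    """Pull the searched code toward the clean code while keeping the
    frozen classifier on the true label."""
    recon = F.mse_loss(z_hat, z_clean)       # latent-space reconstruction term
    guide = F.cross_entropy(logits, labels)  # label-guided term
    return recon + alpha * guide


def train_step(searcher, encoder, head, optimizer, x_adv, x_clean, labels):
    """One training step; `encoder` and `head` are the frozen classifier's
    feature extractor and final classification layer (assumed split)."""
    with torch.no_grad():
        z_adv = encoder(x_adv)      # latent code of the adversarial input
        z_clean = encoder(x_clean)  # latent code of the paired clean input
    z_hat = searcher(z_adv)         # searched (corrected) latent code
    logits = head(z_hat)            # classify from the corrected code
    loss = label_guided_loss(z_hat, z_clean, logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under this reading, at test time the defended pipeline would be `head(searcher(encoder(x)))`, i.e. the searcher acts as an input transformation applied in latent space before classification.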

DOI: https://doi.org/10.2478/fcds-2024-0002 | Journal eISSN: 2300-3405 | Journal ISSN: 0867-6356
Language: English
Page range: 21 - 36
Submitted on: Jan 13, 2023
Accepted on: May 16, 2023
Published on: Feb 16, 2024
Published by: Poznan University of Technology
In partnership with: Paradigm Publishing Services
Publication frequency: 4 issues per year

© 2024 Zhang Pan, Cao Yangjie, Zhu Chenxi, Zhuang Yan, Wang Haobo, Li Jie, published by Poznan University of Technology
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License.