| References | Privacy issue (category) | Privacy-preserving approach | Private data concerned |
| --- | --- | --- | --- |
| Garcia and Jacobs (2010); Fontaine and Galand (2007); Gomathisankaran et al. (2013); Chattopadhyay and Boult (2007) | Private information leakage (public dataset privacy; user data privacy) | Cryptography | Private information (medical images, lifestyle, financial information, face, private location, biometric information, disease information); human activity (daily-life activity, movement) |
| Butler et al. (2015); Dai et al. (2015); Ryoo et al. (2017); Ren et al. (2018); Winkler et al. (2014); Speciale et al. (2019) | Private information leakage (public dataset privacy; user data privacy) | Anonymized videos | |
| Garcia Lopez et al. (2015) | Private information leakage from databases (public dataset privacy; user data privacy) | Local processing | |
| Liu (2019); Bun and Steinke (2016) | Information leakage from large-scale databases (public dataset privacy) | Differential privacy | |
| Bian et al. (2020) | Information leakage in visual recognition (public dataset privacy; training data privacy) | Secure inference via homomorphic encryption; secret sharing; homomorphic convolution | |
| Iwasawa et al. (2017); Ajakan et al. (2015); Edwards and Storkey (2016); Malekzadeh et al. (2018, 2019); Osia et al. (2020) | Unintended disclosure of user information through representations learned during deep learning (training data privacy) | Adversarial training | |
| Zhang et al. (2019) | Adversarial training is effective only for particular sensitive attributes (training data privacy) | Image style transformation | |
| Phan et al. (2016); Abadi et al. (2016); Papernot et al. (2017) | Information leakage during deep learning (training data privacy) | Differential privacy | |
| Tramèr et al. (2016); Wang and Gong (2018); Juuti et al. (2019); Kariyappa and Kariyappa (2019) | Information leakage during deep learning (model privacy) | Analysis of attacker queries; defenses against model-stealing attacks | |
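
Two rows of the table list differential privacy as the mitigation for leakage from large-scale databases. As a concrete illustration (a minimal sketch, not drawn from the cited papers), the snippet below applies the standard Laplace mechanism to a count query; the function name `laplace_count`, the toy records, and the chosen epsilon are illustrative assumptions.

```python
import numpy as np

def laplace_count(data, predicate, epsilon, rng=None):
    """Release a count query under epsilon-differential privacy.

    Adding or removing one record changes a count by at most 1, so the
    query's L1 sensitivity is 1 and Laplace noise with scale 1/epsilon
    suffices (standard Laplace mechanism). Names here are illustrative.
    """
    rng = rng or np.random.default_rng()
    true_count = sum(1 for record in data if predicate(record))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Example: privately count records carrying a sensitive attribute.
records = [{"has_condition": (i % 3 == 0)} for i in range(1000)]
noisy = laplace_count(records, lambda r: r["has_condition"], epsilon=0.5)
print(f"Noisy count: {noisy:.1f}")
```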
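
For the deep learning rows, Abadi et al. (2016) obtain differential privacy with DP-SGD, which clips each example's gradient and adds Gaussian noise before the parameter update. The minimal NumPy sketch below shows one such aggregation step under simplified assumptions; the array shapes, `noise_multiplier`, and function name are placeholders, not a reproduction of their implementation.

```python
import numpy as np

def dp_sgd_gradient(per_example_grads, clip_norm, noise_multiplier, rng=None):
    """One DP-SGD-style aggregation step (after Abadi et al., 2016):
    clip each example's gradient to a fixed L2 norm, sum, add Gaussian
    noise calibrated to that norm, then average.

    `per_example_grads` is an (n_examples, n_params) array; all names and
    shapes are illustrative rather than tied to a specific framework.
    """
    rng = rng or np.random.default_rng()
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale            # per-example L2 clipping
    summed = clipped.sum(axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    return (summed + noise) / len(per_example_grads)

# Example: one private update direction for a toy 10-parameter model.
grads = np.random.default_rng(0).normal(size=(32, 10))   # batch of 32 examples
update = dp_sgd_gradient(grads, clip_norm=1.0, noise_multiplier=1.1)
```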