A Lightweight Transformation Method for Protecting the Privacy of Image Training in Classification.


Abstract

As a rapidly growing research field over the past decade, deep learning relies on large amounts of data to achieve good performance. However, privacy concerns arise because sensitive information in the training data can leak. Recent studies have shown that deep learning models are susceptible to various privacy attacks that expose their training data. Such leakage during model training and use raises privacy concerns for data providers. Several methods, such as homomorphic encryption and knowledge distillation, can protect training datasets, but they typically require significant computational power and time. Given that most data providers lack access to powerful computational resources and ample time, we propose a lightweight and effective transformation-based method for protecting training-data privacy. Our experimental results show that deep learning models trained on images processed by the block mutation scrambling algorithm can still achieve acceptable accuracy. We also explore why this method is feasible and conduct a security and privacy analysis of the proposed approach, demonstrating that attackers can neither recover the original images from the transformed training set nor extract usable image information.
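The abstract does not spell out the block mutation scrambling algorithm itself, so the sketch below shows one plausible reading: a keyed permutation of fixed-size image blocks combined with a per-block pixel inversion as the "mutation" step. The function name block_scramble, the block_size parameter, and the assumption of uint8 images in [0, 255] are all illustrative, not taken from the paper.

```python
import numpy as np

def block_scramble(image: np.ndarray, block_size: int, key: int) -> np.ndarray:
    """Scramble an H x W x C uint8 image by permuting fixed-size blocks.

    Illustrative sketch only; the paper's exact block mutation scrambling
    algorithm is not described in this abstract.
    """
    h, w, c = image.shape
    assert h % block_size == 0 and w % block_size == 0, "image must tile evenly"
    bh, bw = h // block_size, w // block_size

    # Cut the image into a flat array of block_size x block_size x c tiles.
    blocks = (
        image.reshape(bh, block_size, bw, block_size, c)
             .transpose(0, 2, 1, 3, 4)
             .reshape(bh * bw, block_size, block_size, c)
    )

    # Keyed permutation: only a holder of `key` can invert the shuffle.
    rng = np.random.default_rng(key)
    perm = rng.permutation(bh * bw)
    scrambled = blocks[perm]

    # Per-block "mutation" (assumed here): invert pixel intensities in a
    # key-dependent subset of blocks.
    flip = rng.random(bh * bw) < 0.5
    scrambled[flip] = 255 - scrambled[flip]

    # Reassemble the tiles into an image of the original shape.
    return (
        scrambled.reshape(bh, bw, block_size, block_size, c)
                 .transpose(0, 2, 1, 3, 4)
                 .reshape(h, w, c)
    )
```

Under this reading, training proceeds directly on the scrambled images, and because both the block permutation and the mutation pattern depend on the secret key, an attacker who obtains only the transformed training set cannot trivially reassemble or invert the originals.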
