Details

Backdoor Attacks against Learning-Based Algorithms

Wireless Networks

by: Shaofeng Li, Haojin Zhu, Wen Wu, Xuemin (Sherman) Shen

160,49 €

Publisher: Springer
Format: PDF
Published: 29.05.2024
ISBN/EAN: 9783031573897
Language: English

This eBook contains a watermark.

Description

This book introduces a new type of data poisoning attack, dubbed the backdoor attack. In a backdoor attack, an attacker trains a model on poisoned data so that the resulting model performs well on normal inputs but misbehaves on inputs containing crafted triggers. Backdoor attacks can occur in many scenarios where the training process is not entirely controlled, such as using third-party datasets, training on third-party platforms, or directly calling models provided by third parties. Because of the enormous threat that backdoor attacks pose to model supply chain security, they have received widespread attention from academia and industry. This book focuses on backdoor attacks in three types of DNN applications: image classification, natural language processing, and federated learning.

Based on the observation that DNN models are vulnerable to small perturbations, this book demonstrates that steganography and regularization can be adopted to enhance the invisibility of backdoor triggers. Building on image similarity measurement, it presents two metrics to quantitatively measure the invisibility of backdoor triggers. The invisible trigger design scheme introduced in this book balances the invisibility and the effectiveness of backdoor attacks. In the natural language processing domain, it is difficult to design and insert a general backdoor in a manner imperceptible to humans: any corruption of the textual data (e.g., misspelled words or randomly inserted trigger words or sentences) must remain context-aware and readable to human inspectors. This book introduces two novel hidden backdoor attacks, targeting three major natural language processing tasks (toxic comment detection, neural machine translation, and question answering), depending on whether the targeted NLP platform accepts raw Unicode characters.

Federated learning, an emerging distributed training framework, has advantages in preserving users' privacy and has been widely used in electronic medical applications; however, it also faces threats from backdoor attacks. This book presents a novel backdoor detection framework for FL-based e-Health systems. We hope this book provides insight into backdoor attacks on different types of learning-based algorithms, including computer vision, natural language processing, and federated learning. The systematic principles in this book also offer valuable guidance for defending future learning-based algorithms against backdoor attacks.
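To make the poisoning mechanism concrete, the following is a minimal sketch (not the scheme from the book) of a classic dirty-label trigger attack on an image-classification training set: a small pixel patch is stamped onto a fraction of the images and their labels are rewritten to an attacker-chosen target class. The function name, trigger patch, poison rate, and all other values here are illustrative assumptions.

```python
# Illustrative dirty-label backdoor poisoning sketch (hypothetical, not the book's method).
import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.05, patch_value=1.0):
    """Return a copy of (images, labels) with a square trigger patch stamped onto a
    random subset of images and those labels rewritten to `target_class`."""
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = np.random.choice(len(images), size=n_poison, replace=False)
    # Stamp a 3x3 trigger patch into the bottom-right corner of each chosen image.
    images[idx, -3:, -3:] = patch_value
    labels[idx] = target_class
    return images, labels

# Toy usage: 100 grayscale 28x28 images, 10 classes, target class 7.
x = np.random.rand(100, 28, 28).astype(np.float32)
y = np.random.randint(0, 10, size=100)
x_poisoned, y_poisoned = poison_dataset(x, y, target_class=7)
# A model trained on (x_poisoned, y_poisoned) behaves normally on clean inputs
# but tends to predict class 7 whenever the trigger patch is present.
```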
Introduction.- Literature Review of Backdoor Attacks.- Invisible Backdoor Attacks in Image Classification Based Network Services.- Hidden Backdoor Attacks in NLP Based Network Services.- Backdoor Attacks and Defense in FL.- Summary and Future Directions.
Shaofeng Li received the B.E. degree in Software Engineering from Hunan University, China, and the M.E. degree in Computer Science from Northeastern University, China, in 2014 and 2017, respectively. He received the Ph.D. degree in Computer Science from Shanghai Jiao Tong University, China, in 2022. Since 2022, he has worked as a post-doctoral fellow with the Department of Mathematics and Theory, Peng Cheng Laboratory. He focuses primarily on machine learning and security, specifically exploring the robustness of machine learning models against various adversarial attacks. His work has received the ACM CCS Best Paper Award Runner-Up.

Haojin Zhu is a Professor with the Department of Computer Science and Engineering, Shanghai Jiao Tong University, China. He received his B.Sc. degree (2002) from Wuhan University, China, and his M.Sc. degree (2005) from Shanghai Jiao Tong University, China, both in computer science, and the Ph.D. degree in Electrical and Computer Engineering from the University of Waterloo, Canada, in 2009. He has published more than 60 journal papers, including in JSAC, TDSC, TPDS, TMC, TIFS, TWC, and TVT, and more than 90 international conference papers, including at IEEE S&P, ACM CCS, USENIX Security, ACM MOBICOM, NDSS, ACM MOBIHOC, IEEE INFOCOM, and IEEE ICDCS. He was elevated to IEEE Fellow (2023), named an IEEE VTS Distinguished Lecturer (2022), and received the IEEE ComSoc Asia-Pacific Outstanding Young Researcher Award (2014) for contributions to wireless network security and privacy, recognition among the Top 100 Most Cited Chinese Papers Published in International Journals of 2014, the Supervisor of Shanghai Excellent Master Thesis award, best paper awards at IEEE ICC 2007 and Chinacom 2008, and best paper award runner-up at Globecom 2014, WASA 2017, and ACM CCS 2021. He leads the Network Security and Privacy Protection (NSEC) Lab.

Wen Wu received the B.E. degree in Information Engineering from South China University of Technology, Guangzhou, China, and the M.E. degree in Electrical Engineering from the University of Science and Technology of China, Hefei, China, in 2012 and 2015, respectively. He received the Ph.D. degree in Electrical and Computer Engineering from the University of Waterloo, Waterloo, ON, Canada, in 2019. From 2019, he worked as a post-doctoral fellow with the Department of Electrical and Computer Engineering, University of Waterloo. Currently, he is an associate professor with the Department of Mathematics and Theory, Peng Cheng Laboratory. His research interests include millimeter-wave networks and AI-empowered wireless networks.

Xuemin (Sherman) Shen received the Ph.D. degree in electrical engineering from Rutgers University, New Brunswick, NJ, USA, in 1990. He is a University Professor with the Department of Electrical and Computer Engineering, University of Waterloo, Canada. His research focuses on network resource management, wireless network security, Internet of Things, 5G and beyond, and vehicular ad hoc and sensor networks. Dr. Shen is a registered Professional Engineer of Ontario, Canada, an Engineering Institute of Canada Fellow, a Canadian Academy of Engineering Fellow, a Royal Society of Canada Fellow, a Chinese Academy of Engineering Foreign Member, and a Distinguished Lecturer of the IEEE Vehicular Technology Society and Communications Society. Dr. Shen received the Canadian Award for Telecommunications Research from the Canadian Society of Information Theory (CSIT) in 2021, the R.A. Fessenden Award in 2019 from IEEE Canada, the Award of Merit from the Federation of Chinese Canadian Professionals (Ontario) in 2019, the James Evans Avant Garde Award in 2018 from the IEEE Vehicular Technology Society, the Joseph LoCicero Award in 2015 and the Education Award in 2017 from the IEEE Communications Society, and the Technical Recognition Award from the Wireless Communications Technical Committee (2019) and the AHSN Technical Committee (2013). Dr. Shen is the President of the IEEE Communications Society. He has served as Vice President for Technical & Educational Activities, Vice President for Publications, Member-at-Large on the Board of Governors, Chair of the Distinguished Lecturer Selection Committee, and Member of the IEEE Fellow Selection Committee of ComSoc. Dr. Shen served as the Editor-in-Chief of the IEEE IoT Journal, IEEE Network, and IET Communications.
Thorough review of backdoor attacks and their potential mitigations in learning-based algorithms
Focuses on challenges such as the design of invisible backdoor triggers and attacks on natural language processing systems
Provides the fundamental principles for backdoor detection in federated learning

You may also be interested in these products:

From Grids To Service and Pervasive Computing
by: Thierry Priol, Marco Vanneschi
PDF ebook
96,29 €
Grid Computing
by: Sergei Gorlatch, Paraskevi Fragopoulou, Thierry Priol
PDF ebook
149,79 €
Autonomic Communication
by: Athanasios V. Vasilakos, Manish Parashar, Stamatis Karnouskos, Witold Pedrycz
PDF ebook
149,79 €