Cite this article
  • Peng Tingting, Wang Zhu, Yu Aimin. Model Extraction Oriented Side-Channel Attacks and Defenses: A Survey[J]. Journal of Cyber Security, Accepted.


Model Extraction Oriented Side-Channel Attacks and Defenses: A Survey
Peng Tingting, Wang Zhu, Yu Aimin
(Institute of Information Engineering, Chinese Academy of Sciences)
Abstract:
With the advancement of Artificial Intelligence (AI) chip technology and the trend toward model lightweighting, machine learning models, particularly deep learning models, are now deployed and implemented on diverse embedded devices and are widely used in specific industrial intelligence tasks such as intelligent classification and fault diagnosis, in addition to their extensive application in image classification, object detection, natural language processing, and other AI tasks. High-performance, accurate models require substantial training investment and are protected as intellectual property. This makes model extraction, which aims to recover a model's architecture and parameters, a highly attractive endeavor for attackers. Model extraction can not only reverse-engineer sensitive data related to the model itself but may also lead to leakage of the original training data. Among the various extraction approaches, side-channel attacks have proven particularly effective for reverse engineering embedded AI models: unlike other model extraction methods, they can circumvent conventional firmware reverse-engineering protections and are unaffected by limits on the number of model queries, which has made them a focus of recent research. This paper surveys side-channel attacks aimed at model extraction. We first present the basic leakage principles of embedded AI models, and then classify existing attacks along several dimensions: the type of target hardware platform, the extent of the attacker's knowledge of the target model, the type of side information exploited, and the extraction strategy employed. Guided by this classification and the reported attack effectiveness, we review and synthesize representative results from recent research and summarize the corresponding side-channel defenses. Finally, we outline potential future research directions to support the secure design, analysis, and evaluation of models, thereby promoting stronger security practices in AI deployments.
Key words:  model extraction  side-channel attacks  defenses  AI security
DOI:
Received: 2024-06-03    Revised: 2024-09-05
Funding: National Key Research and Development Program of China (No. 2022YFB3103800)