news | New technologies

PICTURE present at the 7th IEEE World Forum on Internet of Things


The PICTURE consortium is pleased to participate in the 7th IEEE World Forum on Internet of Things with two review papers that highlight physical threats against neural network models embedded in software: side-channel analysis targeting model confidentiality and laser fault injection targeting model integrity. These two works are a joint research action between CEA LETI and Mines Saint-Etienne.

Published on 3 May 2021

When dealing with embedded neural network models, the attack surface cannot be limited to algorithmic flaws but must also encompass physical threats. In these two reviews, we focus on the link between side-channel analysis and confidentiality, and between laser fault injection and integrity-based threats.

Mathieu Dumont, Pierre-Alain Moellic, Raphael Viera, Jean-Max Dutertre, Rémi Bernhard, An Overview of Laser Injection against Embedded Neural Network Models, to appear in the 7th IEEE World Forum on Internet of Things (WF-IoT), 2021.

arXiv version

Abstract: For many IoT domains, Machine Learning and more particularly Deep Learning bring very efficient solutions to handle complex data and perform challenging, often critical, tasks. However, the deployment of models in a large variety of devices faces several obstacles related to trust and security. The latter is particularly critical given the demonstrations of severe flaws impacting the integrity, confidentiality and accessibility of neural network models. However, the attack surface of such embedded systems cannot be reduced to abstract flaws but must encompass the physical threats related to the implementation of these models within hardware platforms (e.g., 32-bit microcontrollers). Among physical attacks, Fault Injection Analysis (FIA) is known to be very powerful, with a large spectrum of attack vectors. Most importantly, highly focused FIA techniques such as laser beam injection enable very accurate evaluation of the vulnerabilities as well as the robustness of embedded systems. Here, we discuss how laser injection with state-of-the-art equipment, combined with theoretical evidence from Adversarial Machine Learning, highlights worrying threats against the integrity of deep learning inference, and we argue that joint efforts from the theoretical AI and Physical Security communities are an urgent need.
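
For illustration only, here is a minimal, hypothetical sketch (Python/NumPy, not taken from the paper) of the kind of integrity threat discussed above: a single-bit fault injected into one stored weight of a tiny quantized classifier, of the sort a precise laser shot can induce, can be enough to change the inference result. All values, shapes and the toy model are assumptions made for the example.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-class linear layer with int8 weights, as typically found on
# 32-bit microcontrollers after post-training quantization.
weights = rng.integers(-20, 20, size=(2, 8), dtype=np.int8)
x = rng.integers(0, 10, size=8, dtype=np.int8)

def predict(w, v):
    # Integer inference: argmax over the two class scores.
    return int(np.argmax(w.astype(np.int32) @ v.astype(np.int32)))

print("clean prediction:", predict(weights, x))

# Simulate a single-bit fault on the sign bit of one stored weight, the kind
# of precise corruption a laser shot on SRAM or flash can produce; flipping
# this bit can be enough to change the model's decision.
faulty = weights.copy()
faulty[0, 3] ^= np.int8(-128)  # flip the most significant bit

print("faulty prediction:", predict(faulty, x))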


Raphael Joud, Pierre-Alain Moellic, Rémi Bernhard, Jean-Baptiste Rigaud, A Review of Confidentiality Threats Against Embedded Neural Network Models, to appear in the 7th IEEE World Forum on Internet of Things (WF-IoT), 2021.

arXiv version

Abstract: The use of Machine Learning (ML) algorithms, especially Deep Neural Network (DNN) models, is becoming a widely accepted standard in many domains, more particularly IoT-based systems. DNN models reach impressive performance in several sensitive fields such as medical diagnosis, smart transport or security threat detection, and represent a valuable piece of Intellectual Property. Over the last few years, a major trend has been the large-scale deployment of models in a wide variety of devices. However, this migration to embedded systems is slowed down by the broad spectrum of attacks threatening the integrity, confidentiality and availability of embedded models. In this review, we cover the landscape of attacks targeting the confidentiality of embedded DNN models that may have a major impact on critical IoT systems, with a particular focus on model extraction and data leakage. We highlight the fact that Side-Channel Analysis (SCA) is a relatively unexplored avenue by which a model's confidentiality can be compromised. Input data, architecture or parameters of a model can be extracted from power or electromagnetic observations, testifying to a real need for protection from a security point of view.
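
As a purely illustrative companion to this abstract, here is a minimal, hypothetical sketch (Python/NumPy, not from the paper) of a correlation-based side-channel attack on simulated power traces: the attacker tries to recover a single secret 8-bit weight by correlating a Hamming-weight leakage model with the measurements. The leakage model, noise level and weight value are all assumptions made for the example.

import numpy as np

rng = np.random.default_rng(1)

SECRET_WEIGHT = 93                    # value the attacker tries to recover
inputs = rng.integers(0, 256, 2000)   # known inputs sent to the device

def hamming_weight(values):
    # Number of set bits of each value, a classic leakage model for SCA.
    return np.array([bin(int(v)).count("1") for v in values])

# Simulated measurements: power consumption correlates with the Hamming
# weight of the multiply result, plus Gaussian noise.
traces = hamming_weight((inputs * SECRET_WEIGHT) & 0xFFFF) + rng.normal(0, 2.0, inputs.size)

# Attacker side: correlate the measured traces with predictions for every
# candidate weight value and keep the best match.
best_guess, best_corr = None, -1.0
for guess in range(1, 256):  # guess 0 gives a constant prediction, skip it
    predictions = hamming_weight((inputs * guess) & 0xFFFF)
    corr = abs(np.corrcoef(predictions, traces)[0, 1])
    if corr > best_corr:
        best_guess, best_corr = guess, corr

print(f"best candidate: {best_guess} (correlation {best_corr:.2f})")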
