
PICTURE Project

PRESENTATION

Published on 1 February 2021

Physical and Intrinsic Security of Embedded Neural Networks



PICTURE is a 42-month French-funded project selected by the ANR (French National Research Agency) through the AAPG 2020 call (ANR-20-CE39-0013). The project started its research activities dedicated to the security of embedded artificial intelligence on 1 February 2021.


The project is coordinated by CEA-LETI (Pierre-Alain MOELLIC) and brings together Mines Saint-Etienne, STMicroelectronics and IDEMIA.



CONTEXT

Machine Learning models are expected to be embedded in a vast number of devices, exposing an extensive overall attack surface: critical flaws stem as much from the Machine Learning algorithms themselves as from their implementation in physically accessible devices.

The security of embedded ML models can thus be seen as two sides of the same coin. On one side, an impressive body of publications raises algorithmic threats that could have disastrous impacts on the development and durability of ML models by targeting their integrity, confidentiality or availability. On the other side, physical attacks against embedded ML models (in particular side-channel analysis, SCA, and fault injection analysis, FIA) form a relatively new topic, with a handful of works that have the strong merit of paving the way towards extensive experimentation on more realistic and complex frameworks.

As widely acknowledged by the Machine Learning and Security communities, these two sides are still handled too separately. The main hypothesis of PICTURE is that an adversary can exploit a joint attack surface, i.e. combine algorithmic and physical attacks. Studying this joint surface is essential both to properly analyze upcoming threats targeting critical embedded ML systems and to design efficient defense schemes.



OUR OBJECTIVES

Our first objective is to demonstrate the criticality of combined algorithmic and physical attacks against realistic embedded neural network models. 

Bridging the two attack surfaces will be one of the most important results of PICTURE. More particularly, FIA could take advantage of optimized perturbations crafted by advanced algorithmic integrity-based attacks (i.e. adversarial examples), while confidentiality-based attacks (e.g. model extraction or membership inference) may be combined with SCA to exploit critical leakages about both the model and the training data.
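To make the notion of "optimized perturbations" concrete, below is a minimal, illustrative sketch of the classical FGSM adversarial-example technique in PyTorch. It is not the project's attack pipeline: the model, tensors and function name are hypothetical placeholders, and a real combined attack would couple such a perturbation with physical fault injection.

    import torch
    import torch.nn.functional as F

    def fgsm_example(model, x, y, epsilon=0.03):
        """Illustrative FGSM sketch: x is an input in [0, 1], y its true label."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # One gradient-sign step that increases the loss (integrity-based attack)
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.detach().clamp(0.0, 1.0)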

Our second objective is to propose sound protections by evaluating the efficiency of physical countermeasures combined with state-of-the-art defenses against algorithmic attacks.

Last but not least, considering the widespread deployment of ML models across a large variety of domains and devices, we aim to disseminate good practices for embedded ML practitioners that could form the basis of future certification schemes.


PICTURE has been selected by the French SGDSN (Secretariat-General for National Defence and Security) and will be followed by ANSSI (French National Cybersecurity Agency).