Physical and Intrinsic Security of Embedded Neural Networks

PICTURE project (01/2021 – 07/2024)

The deployment of Machine Learning models, predominantly deep neural networks embedded for inference purposes, is an essential evolution of Artificial Intelligence. Security is one of the main obstacles to this large-scale deployment.

An important body of work raises major threats that could have a disastrous impact on the development of ML-based systems, such as adversarial examples, poisoning attacks, model extraction or membership inference. However, the attack surface of embedded models cannot be limited to these algorithmic vectors and must encompass the specific features of the hardware platforms, with physical threats such as Side-Channel Analysis (SCA) or Fault Injection Analysis (FIA).

By bridging the algorithmic and physical worlds into an overall attack surface, PICTURE aims to analyze the threats targeting the confidentiality, integrity and availability of software-embedded neural network models, and to develop robust protections and evaluation tools.


PICTURE is a French ANR-funded project selected in 2020 (ANR AAPG 2020). It is coordinated by Pierre-Alain Moëllic (CEA-LETI). The consortium gathers CEA-LETI, Mines Saint-Etienne, STMicroelectronics and IDEMIA.

PICTURE started on 5 February 2021 for a duration of 42 months.

PICTURE features:

Funding: partially funded by the ANR through the AAPG 2020 call.

ANR reference: ANR-20-CE39-0013.

Coordinator: Pierre-Alain Moëllic (CEA-LETI)

Consortium: CEA-LETI, Mines Saint-Etienne, STMicroelectronics and IDEMIA.

Duration: 42 months

Scientific start (T0): 05 Feb 2021