Paper 2022/1247

Peek into the Black-Box: Interpretable Neural Network using SAT Equations in Side-Channel Analysis

Trevor Yap, Nanyang Technological University
Adrien Benamira, Nanyang Technological University
Shivam Bhasin, Nanyang Technological University
Thomas Peyrin, Nanyang Technological University
Abstract

Deep neural networks (DNNs) have become a significant threat to the security of cryptographic implementations with regard to side-channel analysis (SCA), as they automatically combine the leakages without any preprocessing needed, leading to a more efficient attack. However, these DNNs for SCA remain mostly black-box algorithms that are very difficult to interpret. Benamira et al. recently proposed an interpretable neural network called Truth Table Deep Convolutional Neural Network (TT-DCNN), which is both expressive and easier to interpret. In particular, a TT-DCNN has a transparent inner structure that can entirely be transformed into SAT equations after training. In this work, we analyze the SAT equations extracted from a TT-DCNN when applied in the SCA context, eventually obtaining the rules and decisions that the neural network learned when retrieving the secret key from the cryptographic primitive (i.e., an exact formula). As a result, we can pinpoint the critical rules that the neural network uses to locate the exact Points of Interest (PoIs). We validate our approach first on simulated traces for higher-order masking. However, applying TT-DCNN on real traces is not straightforward. We propose a method to adapt TT-DCNN for application on real SCA traces containing thousands of sample points. Experimental validation is performed on the software-based ASCADv1 and hardware-based AES_HD_ext datasets. In addition, TT-DCNN is shown to be able to learn the exact countermeasure in a best-case setting.
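To make the "truth table to SAT equations" idea concrete, the short Python sketch below enumerates a small binary-input, binary-output function into a DNF formula (a disjunction of AND-clauses over literals), which is the kind of exact Boolean rule the abstract refers to. It is a minimal illustration only: the toy_filter function, its 3-bit input size, and the helper name truth_table_to_dnf are hypothetical stand-ins and do not reproduce the authors' TT-DCNN extraction pipeline.

    # Hypothetical sketch: reading a small binarized filter out as a truth
    # table and writing its ON-set as a DNF "SAT equation".
    from itertools import product

    def toy_filter(bits):
        # Stand-in for one learned binary filter over 3 inputs (illustrative only):
        # XOR of the first two bits, gated by the third.
        return (bits[0] ^ bits[1]) & bits[2]

    def truth_table_to_dnf(fn, n_inputs):
        # Enumerate all 2^n binary inputs and keep the satisfying rows as DNF terms.
        terms = []
        for bits in product((0, 1), repeat=n_inputs):
            if fn(bits):
                literals = [f"x{i}" if b else f"~x{i}" for i, b in enumerate(bits)]
                terms.append("(" + " & ".join(literals) + ")")
        return " | ".join(terms)

    print(truth_table_to_dnf(toy_filter, 3))
    # Output: (~x0 & x1 & x2) | (x0 & ~x1 & x2)

In a real TT-DCNN the enumerated functions are the trained convolutional filters over binarized inputs, and the resulting clauses can then be inspected to see which sample points (PoIs) the network's decision rules actually depend on.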

Metadata
Available format(s)
PDF
Category
Implementation
Publication info
Published by the IACR in TCHES 2023
Keywords
Side-channel, Neural Network, Deep Learning, Profiling attack, Interpretability, SAT equations
Contact author(s)
trevor yap @ ntu edu sg
adrien002 @ e ntu edu sg
sbhasin @ ntu edu sg
thomas peyrin @ ntu edu sg
History
2023-01-16: last of 2 revisions
2022-09-20: received
Short URL
https://ia.cr/2022/1247
License
Creative Commons Attribution
CC BY

BibTeX

@misc{cryptoeprint:2022/1247,
      author = {Trevor Yap and Adrien Benamira and Shivam Bhasin and Thomas Peyrin},
      title = {Peek into the Black-Box: Interpretable Neural Network using {SAT} Equations in Side-Channel Analysis},
      howpublished = {Cryptology {ePrint} Archive, Paper 2022/1247},
      year = {2022},
      url = {https://eprint.iacr.org/2022/1247}
}