TIS’19: Proceedings of the ACM Workshop on Theory of Implementation Security
SESSION: Session 1
Computing platforms sometimes provide hardware support for enclaves or trusted execution environments. On such platforms, security-critical code can execute in an enclave or secure world, isolated from all other software on the platform.
Over the past few years, however, several successful side-channel attacks have been developed that break, or at least significantly weaken, the isolation that these mechanisms offer. Particularly dangerous are software-based side channels, as they can be launched even without physical access to the computing platform. These attacks often exploit architectural or micro-architectural features of the platform, such as caches, paging, speculative execution, or interrupts. Extending a platform with new features thus brings a risk of introducing new side-channel attacks of this kind.
In this talk, we will discuss an approach to formally prove the security of platform extensions against these side-channel attacks, and instantiate that approach in a simple setting. More specifically, we will consider the concrete case of extending a microprocessor that supports enclaved execution with support for securely interrupting these enclaves. The base platform we start from is the Sancus system. We show how making enclaves interruptible can lead to new side-channel attacks that weaken the isolation properties of enclaves. We then discuss how to formalize security against such attacks, and present the design and implementation of an interrupt-handling mechanism for Sancus that is provably secure in the sense that it introduces no new side-channel attacks.
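As an illustration of the kind of interrupt-based side channel the talk refers to, the sketch below simulates a Nemesis-style latency attack: when an interrupt fires mid-instruction, the handler only runs once the instruction retires, so the measured latency leaks the cycle count of the interrupted instruction. All instruction costs and the secret-dependent branch are hypothetical, not taken from Sancus.

```python
# Hypothetical simulation of an interrupt-latency side channel on an
# interruptible enclave: the delay between raising an interrupt and
# the handler running reveals how long the interrupted instruction
# takes, which here depends on a secret branch.

def enclave_step(secret_bit):
    """Per-instruction cycle costs of a secret-dependent branch (assumed)."""
    # Assumed costs: two 1-cycle ops, then a 3-cycle op on one branch.
    return [1, 1, 3] if secret_bit else [1, 1, 1]

def interrupt_latency(cycle_costs, irq_cycle):
    """Cycles until the current instruction retires after the IRQ fires."""
    t = 0
    for cost in cycle_costs:
        if t + cost > irq_cycle:          # IRQ arrives mid-instruction:
            return t + cost - irq_cycle   # handler waits for retirement
        t += cost
    return 0

# Raising the IRQ at cycle 2 distinguishes the two secret values:
lat0 = interrupt_latency(enclave_step(0), irq_cycle=2)
lat1 = interrupt_latency(enclave_step(1), irq_cycle=2)
```

A provably secure interrupt mechanism must make such latencies independent of enclave-internal secrets, e.g. by padding the time to the handler.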
SESSION: Session 2
While error correcting codes (ECC) have the potential to significantly reduce the failure probability of post-quantum schemes, they add an extra ECC decoding step to the algorithm. Even though this additional step does not compute directly on the secret key, it is susceptible to side-channel attacks. We show that if no precaution is taken, it is possible to use timing information to distinguish between ciphertexts that result in an error before decoding and ciphertexts that do not contain errors, due to the variable execution time of the ECC decoding algorithm. We demonstrate that this information can be used to break the IND-CCA security of post-quantum secure schemes by presenting an attack on two round 1 candidates to the NIST Post-Quantum Standardization Process: the Ring-LWE scheme LAC and the Mersenne prime scheme Ramstake. This attack recovers the full secret key using a limited number of timed decryption queries and is implemented on the reference and the optimized implementations of both submissions. It is able to retrieve LAC’s secret key for all security levels in under 2 minutes using less than $2^{16}$ decryption queries and Ramstake’s secret key in under 2 minutes using approximately $2400$ decryption queries. The attack generalizes to other lattice-based schemes with ECC in which any side-channel information about the presence of errors is leaked during decoding.
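The core timing leak can be sketched abstractly: a decoder that returns early on error-free input takes measurably fewer cycles than one that runs its correction routine, and a simple threshold on the decryption time then tells the attacker whether an error occurred. The function names and cycle costs below are illustrative, not from LAC or Ramstake.

```python
# Toy model of the timing leak: a variable-time ECC decode whose
# running time depends on whether the received word contains errors.

def ecc_decode_time(has_error, base=100, correction_cost=250):
    """Abstract cycle count of a variable-time ECC decoding step (assumed costs)."""
    return base + (correction_cost if has_error else 0)

def classify(timing, threshold=200):
    """Attacker's decision rule: slow decryption => an error occurred."""
    return timing > threshold

# Per crafted ciphertext, the attacker learns one bit -- whether
# decoding hit an error -- which is what breaks IND-CCA security here.
leak_no_err = classify(ecc_decode_time(False))
leak_err    = classify(ecc_decode_time(True))
```

The countermeasure implied by the abstract is to make the decoding step constant-time, so that `classify` carries no information.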
Masking is the best-researched countermeasure against side-channel analysis attacks. Even though masking was introduced almost 20 years ago, its efficient implementation continues to be an active research topic. Many of the existing works focus on the reduction of randomness requirements since the production of fresh random bits with high entropy is very costly in practice. Most of these works rely on the assumption that only so-called online randomness results in additional costs. In practice, however, it turns out that the distinction between the randomness costs to produce the initial masking and the randomness to maintain security during computation (online) is not meaningful. In this work, we thus study the question of minimum randomness requirements for first-order Boolean masking when taking the costs for initial randomness into account. We demonstrate that first-order masking can in theory always be performed by using just two fresh random bits and without requiring online randomness. We first show that two random bits are enough to mask linear transformations and then discuss prerequisites under which nonlinear transformations are securely performed likewise. Subsequently, we introduce a new masked AND gate that fulfills these requirements and which forms the basis for our synthesis tool that automatically transforms an unmasked implementation into a first-order secure masked implementation. We demonstrate the feasibility of this approach by implementing AES in software with only two bits of randomness, including the initial masking. Finally, we use these results to discuss the gap between theory and practice and the need for more accurate adversary models.
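For context, a conventional first-order masked AND gate in the classic ISW style is sketched below; it consumes one fresh random bit per gate, which is exactly the online randomness this paper's new gate is designed to avoid. The sketch shows the baseline, not the paper's construction.

```python
# First-order Boolean masking with two shares, and an ISW-style
# masked AND that needs one fresh random bit r per gate.

import secrets

def share(x):
    """Split a bit into two Boolean shares whose XOR equals x."""
    m = secrets.randbits(1)
    return (x ^ m, m)

def unshare(c):
    return c[0] ^ c[1]

def masked_and(a, b, r):
    """2-share AND: recombining the output shares yields a AND b."""
    a0, a1 = a
    b0, b1 = b
    c0 = (a0 & b0) ^ r
    # Parenthesization matters: never combine a0&b1 with a1&b0 directly,
    # or an intermediate value would depend on both unmasked inputs.
    c1 = (a1 & b1) ^ (((a0 & b1) ^ r) ^ (a1 & b0))
    return (c0, c1)

# Correctness check over all inputs and both values of the fresh bit:
ok = all(
    unshare(masked_and(share(a), share(b), r)) == (a & b)
    for a in (0, 1) for b in (0, 1) for r in (0, 1) for _ in range(4)
)
```

Counting such per-gate bits is what makes online randomness expensive; the paper's result that the two initial masking bits suffice removes this cost entirely.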
In this note we study non-completeness, the key property of Threshold Implementations (TIs). TIs have proved to be a popular method for mitigating side-channel leakage of sensitive information in hardware implementations of cryptographic algorithms.
In particular, we provide a reformulation of non-completeness in terms of set coverings with constraints, and describe a strategy for constructing small such coverings for given parameters.
Our obtained results enable the second-order secure hardware implementation of algorithms with cubic functions as components, such as AES and inversion in GF(2^4), without the need for further decompositions.
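First-order non-completeness can be phrased concretely as a covering condition on share indices: each output component function may read only a proper subset of the input share indices, i.e. it must miss at least one. The sketch below checks this for the index sets of the standard 3-share sharing of an AND gate; it illustrates the notion only, not the paper's constrained-covering construction.

```python
# Non-completeness as a covering condition: every output component
# must omit at least one input share index (first-order case).

NUM_SHARES = 3

# Share indices each component function reads; in the standard
# 3-share AND sharing, component i misses index i.
component_inputs = {
    0: {1, 2},
    1: {0, 2},
    2: {0, 1},
}

def first_order_non_complete(components, n):
    """True iff every component depends on fewer than n share indices."""
    return all(len(used) < n for used in components.values())
```

Higher-order non-completeness tightens the condition: every pair (or larger tuple) of components must jointly miss a share index, which is where the set-covering view with constraints becomes useful.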
SESSION: Session 3
The Computer Security Division at the National Institute of Standards and Technology (NIST) is taking steps towards the standardization of threshold schemes for cryptographic primitives. These schemes, applicable to single-device and multi-party implementations, are designed with multiple components in a way to enable essential security properties even when up to a certain threshold number of the components are compromised. This offers a path to mitigate attacks on the implementations and operations of cryptographic primitives, and therefore enhance their security. Properties of interest include secrecy of keys (including resistance against side-channel attacks that exploit leakage from real implementations), integrity of computed values, and availability of operations.
The first part of the talk will overview two introductory elements of the Threshold Cryptography project: the NIST Internal Report on “Threshold Schemes for Cryptographic Primitives – Challenges and Opportunities in Standardization and Validation of Threshold Cryptography” (NISTIR 8214), which advanced a characterization of threshold schemes and positioned several representative questions relevant to the standardization process; and the “NIST Threshold Cryptography Workshop” (NTCW) 2019, which brought together stakeholders to share perspectives from industry, academia and government. These two steps were helpful to reflect and gain insights for developing a roadmap.
The talk will then present a perspective on the challenges and opportunities in driving an effort to standardize threshold schemes. The roadmap ahead involves selecting focus areas and items, and will span several phases. Correspondingly, this talk will overview several considerations about criteria to develop for calls for contributions, the balance of flexibility and granularity of possible elements to standardize, and possible ways of collaborating with stakeholders in an open and transparent process towards new standards.
SESSION: Session 4
Threshold Implementations (TI) are provably secure algorithmic countermeasures against side-channel attacks in the form of differential power analysis. The strength of TI lies in its minimal algorithmic requirements. These requirements have been studied for more than 10 years and many efficient implementations of symmetric primitives have been proposed. Thus, over the years the practice of protecting implementations has matured; the theory behind threshold implementations, however, has remained the same. In this work, we revise this theory by looking at the properties of correctness, non-completeness, and uniformity as a composable security model. We prove that this model provides first-order and higher-order univariate security in the glitch-robust probing model, which lets us expand the theoretical framework of TI. We first provide a link between uniformity and the notion of non-interference, a known composable security notion built on the probing model. We then relax the notion of non-completeness, which helps the design of secure expansion and compression functions. Lastly, we provide generalisations of the threshold notions to allow for general secret sharing schemes and provide examples of how different sharing schemes affect the security and efficiency of the countermeasure.
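Of the three TI properties, uniformity is the one that plays the composability role: for every unmasked input, the output sharing must be uniformly distributed over all valid sharings of the output. The sketch below checks this by exhaustive enumeration for the share-wise XOR, a standard uniform 3-share sharing of a linear function; it illustrates the definition, not the paper's non-interference argument.

```python
# Exhaustive uniformity check for a 3-share sharing of XOR:
# for each unmasked input (x, y), every valid output sharing of
# x ^ y must occur equally often over all input sharings.

from itertools import product
from collections import Counter

def sharings(x, n=3):
    """All n-share Boolean sharings of bit x (shares XOR to x)."""
    for head in product((0, 1), repeat=n - 1):
        last = x
        for s in head:
            last ^= s
        yield head + (last,)

def shared_xor(xs, ys):
    """Share-wise XOR: z_i = x_i ^ y_i, a correct sharing of x ^ y."""
    return tuple(a ^ b for a, b in zip(xs, ys))

def is_uniform(shared_f, x, y):
    counts = Counter()
    for xs in sharings(x):
        for ys in sharings(y):
            counts[shared_f(xs, ys)] += 1
    # Uniform iff every reachable output sharing occurs equally often.
    return len(set(counts.values())) == 1

uniform = all(is_uniform(shared_xor, x, y) for x in (0, 1) for y in (0, 1))
```

Nonlinear sharings often fail this test without refreshing, which is why linking uniformity to non-interference, as the paper does, matters for composing gadgets.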