WAHC’19: Proceedings of the 7th ACM Workshop on Encrypted Computing & Applied Homomorphic Cryptography
SESSION: Session 1: Standardization of Homomorphic Encryption
On the Feasibility and Impact of Standardising Sparse-secret LWE Parameter Sets for Homomorphic Encryption
In November 2018, the HomomorphicEncryption.org consortium published the Homomorphic Encryption Security Standard. The Standard recommends several sets of Learning with Errors (LWE) parameters that can be selected by application developers to achieve a target security level \( \lambda \in \{128, 192, 256\} \). These parameter sets all involve a power-of-two dimension \( n \leq 2^{15} \), an error distribution of standard deviation \( \sigma \approx 3.19 \), and a secret whose coefficients are either chosen uniformly in \( \mathbb{Z}_q \), chosen according to the error distribution, or chosen uniformly in \( \{-1, 0, 1\} \). These parameter sets do not necessarily reflect implementation choices in the most commonly used homomorphic encryption libraries. For example, several libraries support dimensions that are not a power of two. Moreover, all known implementations of bootstrapping for the CKKS, BFV and BGV schemes use a sparse secret and a large ring dimension such as \( n \in \{2^{16}, 2^{17}\} \), and advanced applications such as logistic regression have used equally large dimensions. This motivates the community to consider widening the recommended parameter sets, and the purpose of this paper is to investigate such possible extensions. We explore the security of possible sparse-secret LWE parameter sets, taking into account hybrid attacks, which are often the most competitive in the sparse-secret regime. We present a conservative analysis of the hybrid decoding and hybrid dual attacks for parameter sets of varying sparsity, with the goal of balancing security requirements with bootstrapping efficiency. We also show how the methodology in the Standard can easily be adapted to support parameter sets with power-of-two dimension \( n \geq 2^{16} \). We conclude with a number of discussion points to motivate future improvements to the Standard.
In the practical use of Learning with Errors (LWE) based cryptosystems, it is quite common to choose an extremely small secret: one popular choice is a ternary coefficient vector with entries in \( \{-1, 0, 1\} \), and some schemes further use a ternary vector with only a small number of nonzero coefficients, called a \emph{sparse and ternary} vector. Such small secrets, however, also benefit attack algorithms against LWE, and current LWE-based cryptosystems, including homomorphic encryption (HE) schemes, set their parameters according to the complexity of these improved attacks.
In this work, we revisit the well-known hybrid attack of Howgrave-Graham, which was originally designed to solve the NTRU problem, in the setting of sparse and ternary secret LWE, and refine the previous analysis of the hybrid attack to match the LWE setting. Based on our analysis, we estimate the complexity of the hybrid attack for several LWE parameter sets. As a result, we argue that currently used HE parameters should be raised to maintain the same security level under the hybrid attack; for example, the parameter set (n, log q, σ) = (65536, 1240, 3.2) with secret-key Hamming weight h = 64, which was estimated to provide ≥ 128-bit security against previously considered attacks, is newly estimated to provide only 113-bit security against the hybrid attack.
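The secrets targeted by this attack have a specific shape: dimension \( n \), ternary entries, and exactly \( h \) nonzero coefficients. A minimal sketch of sampling such a secret is below; the function name is illustrative, and a real implementation would draw randomness from a CSPRNG rather than Python's `random` module.

```python
import random

def sample_sparse_ternary(n, h, seed=None):
    """Sample a sparse ternary LWE secret of dimension n:
    exactly h nonzero entries, each uniform in {-1, +1}.
    Illustrative only -- use a CSPRNG in practice."""
    rng = random.Random(seed)
    s = [0] * n
    support = rng.sample(range(n), h)   # h distinct nonzero positions
    for i in support:
        s[i] = rng.choice((-1, 1))
    return s

# The parameter set discussed above: n = 65536, Hamming weight h = 64.
secret = sample_sparse_ternary(n=65536, h=64, seed=0)
```

The sparsity is exactly what the hybrid attack exploits: guessing the few nonzero coordinates of the secret is far cheaper than in the dense case.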
SESSION: Session 2: Multiparty Computation
Developing machine learning models from federated training data, containing many independent samples, is an important task that can significantly enhance the potential applicability and prediction power of learned models. Since single users, like hospitals or individual labs, typically collect datasets that do not support accurate learning with high confidence, it is desirable to combine data from several users without compromising data privacy. In this paper, we develop a privacy-preserving solution for learning a linear regression model from data collectively contributed by several parties (“data owners”). Our protocol is based on the protocol of Giacomelli et al. (ACNS 2018), which utilized two non-colluding servers and Linearly Homomorphic Encryption (LHE) to learn regularized linear regression models. Our methods use a different LHE scheme that allows us to significantly reduce both the number and runtime of homomorphic operations, as well as the total runtime complexity. Another advantage of our protocol is that the underlying LHE scheme is based on a different (and post-quantum secure) security assumption than that of Giacomelli et al. Our approach leverages the Chinese Remainder Theorem and Single Instruction Multiple Data representations to obtain our improved performance. For a 1000 x 40 linear regression task we can learn a model in a total of 3 seconds for the homomorphic operations, compared to more than 100 seconds reported in the literature. Our approach also scales up to larger feature spaces: we implemented a system that can handle a 1000 x 100 linear regression task, investing minutes of server computing time after a more significant offline pre-processing by the data owners. We intend to incorporate our protocol and implementations into a comprehensive system that can handle secure federated learning at larger scales.
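The CRT-based SIMD idea mentioned above can be illustrated in plaintext, outside any LHE scheme: several small values are packed into one large integer, one per pairwise-coprime modulus, so that a single homomorphic addition acts slot-wise on all of them. The function names and moduli below are illustrative, not the authors' actual encoding.

```python
from math import prod

def crt_pack(values, moduli):
    """Pack one value per pairwise-coprime modulus into a single
    integer x with x mod m_i == v_i (Chinese Remainder Theorem).
    Packing is additively homomorphic: adding packed integers
    mod M = prod(moduli) adds the values slot-wise."""
    M = prod(moduli)
    x = 0
    for v, m in zip(values, moduli):
        Mi = M // m
        x += v * Mi * pow(Mi, -1, m)   # pow(., -1, m): modular inverse
    return x % M

def crt_unpack(x, moduli):
    return [x % m for m in moduli]

moduli = [1_000_003, 1_000_033, 1_000_037]   # pairwise coprime
a = crt_pack([1, 2, 3], moduli)
b = crt_pack([10, 20, 30], moduli)
summed = crt_unpack((a + b) % prod(moduli), moduli)   # [11, 22, 33]
```

Under an LHE scheme whose plaintext space is \( \mathbb{Z}_M \), one ciphertext addition then processes all packed slots at once, which is one way such schemes amortize the cost of homomorphic operations.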
We present modifications to the MPC system SCALE-MAMBA to enable the evaluation of garbled circuit (GC) based MPC functionalities and Linear Secret Sharing (LSSS) based MPC functionalities alongside each other. This allows the user to switch between different MPC paradigms to achieve the best performance. To do this we present modifications to the GC-based MPC protocol of Hazay et al. (Asiacrypt 2017) (to enable it to support reactive computation), and combine different aspects of their pre-processing phase with those of Wang et al. (CCS 2017), in order to optimize our pre-processing protocols. We also give a more efficient method for producing daBits (double authenticated Bits) than that presented in the work of Rotaru and Wood (ePrint 2019). Finally, we examine how the functionality can be integrated within the existing MPC framework SCALE-MAMBA.
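A daBit is the object that lets values cross between the two paradigms: a single random bit secret-shared both mod 2 (for the GC/binary world) and mod p (for the LSSS/arithmetic world). The trusted-dealer simulation below only shows what a daBit is; it is not the secure generation protocol of Rotaru and Wood or of this paper, and the names are illustrative.

```python
import random

def make_dabit(p, rng):
    """Simulate a two-party daBit: one random bit b, additively
    secret-shared both mod 2 and mod p, with the same value b
    hidden in both sharings. (Trusted-dealer simulation only.)"""
    b = rng.randrange(2)
    s2_0 = rng.randrange(2)        # party 0's share mod 2
    s2_1 = (b - s2_0) % 2          # party 1's share mod 2
    sp_0 = rng.randrange(p)        # party 0's share mod p
    sp_1 = (b - sp_0) % p          # party 1's share mod p
    return (s2_0, s2_1), (sp_0, sp_1)

p = 2**61 - 1
(x0, x1), (y0, y1) = make_dabit(p, random.Random(0))
bit_binary = (x0 + x1) % 2         # reconstruct in the binary domain
bit_arith = (y0 + y1) % p          # reconstruct in the arithmetic domain
```

Because the same bit is reconstructible in both domains, a shared value can be masked with daBits in one world and unmasked in the other, which is what enables switching between GC and LSSS computations.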
SESSION: Session 3: Applications of Homomorphic Encryption
In previous work, Boemer et al. introduced nGraph-HE, an extension to the Intel nGraph deep learning (DL) compiler, which enables data scientists to deploy models with popular frameworks such as TensorFlow and PyTorch with minimal code changes. However, the class of supported models was limited to relatively shallow networks with polynomial activations. Here, we introduce nGraph-HE2, which extends nGraph-HE to enable privacy-preserving inference on standard, pre-trained models using their native activation functions and number fields (typically real numbers). The proposed framework leverages the CKKS scheme, whose support for real numbers is friendly to data science, and a client-aided model using a two-party approach to compute activation functions. We first present CKKS-specific optimizations, enabling a 3x-88x runtime speedup for scalar encoding, and doubling the throughput through a novel use of CKKS plaintext packing into complex numbers. Second, we optimize ciphertext-plaintext addition and multiplication, yielding 2.6x-4.2x runtime speedup. Third, we exploit two graph-level optimizations: lazy rescaling and depth-aware encoding, which allow us to significantly improve performance. Together, these optimizations enable state-of-the-art throughput of 1,998 images/s on the CryptoNets network. Using the client-aided model, we also present homomorphic evaluation of (to our knowledge) the largest network to date, namely, pre-trained MobileNetV2 models on the ImageNet dataset, with 60.4%/82.7% top-1/top-5 accuracy and an amortized runtime of 381 ms/image.
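The complex-packing idea behind the throughput doubling can be sketched in plain Python: CKKS natively encodes vectors of complex numbers, so two real values can ride in one slot as real and imaginary parts. The helper names below are illustrative, and the actual nGraph-HE2 packing interacts with the full CKKS encoder; note also that this trick composes with additions and plaintext scalar multiplications, but not with slot-wise complex multiplication, where real and imaginary parts would mix.

```python
def pack_complex(reals):
    """Pack 2k real values into k complex slots (first half as real
    parts, second half as imaginary parts), halving the slot count
    a real-valued workload needs."""
    assert len(reals) % 2 == 0
    half = len(reals) // 2
    return [complex(reals[i], reals[half + i]) for i in range(half)]

def unpack_complex(slots):
    """Recover the original real vector from the complex slots."""
    return [z.real for z in slots] + [z.imag for z in slots]

x = [1.0, 2.0, 3.0, 4.0]
packed = pack_complex(x)        # two complex slots instead of four real ones
restored = unpack_complex(packed)
```

In a CKKS pipeline, halving the number of occupied slots per real vector means each ciphertext carries twice as many inference inputs, which is where the throughput doubling comes from.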
Homomorphic Encryption (HE) is an emerging technology that enables computing on data while the data is encrypted. A major challenge with homomorphic encryption is that it takes extensive expert knowledge to design meaningful and useful programs that are constructed from atomic HE operations.
We present RAMPARTS to address this challenge. RAMPARTS provides an environment for developing HE applications in Julia, a high-level language, the same way as “cleartext” applications are typically written in Julia. RAMPARTS makes the following three contributions. First, we use symbolic execution to automate the construction of an optimized computation circuit where both the circuit size and multiplicative depth are chosen by the compiler. Second, RAMPARTS automatically selects the HE parameters for the generated circuit, which is typically done manually by an HE expert. Third, RAMPARTS automatically selects the plaintext encoding for input values, and performs input and output data transformations. These three operations are not easily performed by programmers who are not HE experts. Thus, RAMPARTS makes HE more widely available and usable by the population of programmers.
We compare our approach with Cingulata, the only previously known system that automatically generates circuits for HE computations. The HE circuits generated by RAMPARTS are significantly more efficient than the circuits compiled by Cingulata. For instance, our runtimes for key generation/circuit compilation and all online operations are more than one order of magnitude lower for a sample image processing application used for performance evaluation in our study.