SSR ’19: Proceedings of the 5th ACM Workshop on Security Standardisation Research
SESSION: Session 1
Bluetooth Low Energy is the dominant wireless technology empowering the Internet-of-Things. It has recently been amended with Bluetooth Mesh, which promises secure low energy multi-hop wireless connectivity. Even older Bluetooth devices can become part of the network with a software-only upgrade. Bluetooth Mesh claims to be suitable for building large-scale multi-hop sensor networks with thousands of devices and up to 127 hops. In particular, it introduces the friendship concept, allowing low power Internet-of-Things devices to save energy by going into sleep mode, while their associated friend node caches their packets.
In this paper, we show that the security model underlying the friendship concept makes a number of simplifying assumptions that can be exploited against a Bluetooth Mesh network. We demonstrate three fundamental vulnerabilities in the security model that lead to denial-of-service and impersonation attacks. Furthermore, we experimentally show that our denial-of-service attack drastically shortens the battery life of power-constrained Internet-of-Things devices, from a normal duration of about two years to just a few days.
In addition, we introduce btlemesh, an open-source tool to analyze Bluetooth Mesh and perform the aforementioned security tests in practice. Finally, we discuss possible countermeasures to mitigate these vulnerabilities.
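To give a sense of the scale of impact the abstract describes, the back-of-the-envelope sketch below shows how keeping a low-power node's radio continuously awake collapses battery life from years to days. All figures (battery capacity, current draws) are illustrative assumptions of ours, not measurements from the paper.

```python
# Illustrative battery-life arithmetic for a sleepy IoT node vs. one kept
# awake by a denial-of-service attack. Every constant here is an assumed,
# ballpark value, not data from the paper.

BATTERY_MAH = 220.0        # assumed coin-cell capacity (CR2032 class)
SLEEP_AVG_MA = 0.012       # assumed average draw with normal sleep duty cycle
ATTACKED_AVG_MA = 2.0      # assumed average draw with the radio kept active

def battery_life_days(avg_current_ma: float,
                      capacity_mah: float = BATTERY_MAH) -> float:
    """Battery life in days for a constant average current draw."""
    return capacity_mah / avg_current_ma / 24.0

normal_days = battery_life_days(SLEEP_AVG_MA)      # on the order of two years
attacked_days = battery_life_days(ATTACKED_AVG_MA)  # on the order of days

print(f"normal:   {normal_days:7.1f} days (~{normal_days / 365:.1f} years)")
print(f"attacked: {attacked_days:7.1f} days")
```

With these assumed numbers the node lasts roughly two years normally but under five days when prevented from sleeping, matching the order of magnitude the abstract reports.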
SESSION: Session 2
Post-Quantum Variants of ISO/IEC Standards: Compact Chosen Ciphertext Secure Key Encapsulation Mechanism from Isogeny
ISO/IEC 18033-2 standardizes several chosen-ciphertext secure key encapsulation mechanism (KEM) schemes. However, none of these ISO/IEC KEM schemes is quantum resistant. In this paper, we introduce new isogeny-based KEM schemes (CSIDH-ECIES-KEM and CSIDH-PSEC-KEM) obtained by modifying the Diffie-Hellman-based KEM schemes in the ISO/IEC standards. The main advantage of our schemes is compactness: the key size and the ciphertext overhead of our schemes are about five times smaller than those of SIKE-KEM, which has been submitted to NIST’s post-quantum cryptography standardization process.
The RSA Probabilistic Signature Scheme (RSA-PSS) due to Bellare and Rogaway (EUROCRYPT 1996) is a widely deployed signature scheme. In particular, it is a suggested replacement for the deterministic RSA Full Domain Hash (RSA-FDH) by Bellare and Rogaway (ACM CCS 1993) and for PKCS#1 v1.5 (RFC 2313), as it can provide stronger security guarantees. It has since been shown by Kakvi and Kiltz (EUROCRYPT 2012, Journal of Cryptology 2018) that RSA-FDH provides security similar to that of RSA-PSS, even in the case when RSA-PSS is not randomized. Recently, Jager, Kakvi and May (ACM CCS 2018) showed that PKCS#1 v1.5 gives comparable security to both RSA-FDH and RSA-PSS. However, all of these proofs consider each signature scheme in isolation, whereas in practice this is not the case. The most prominent example is TLS 1.3, where PKCS#1 v1.5 signatures are still included for reasons of backwards compatibility, meaning both RSA-PSS and PKCS#1 v1.5 signatures are implemented. To save space, the key material is shared between the two schemes, which means the aforementioned security proofs no longer apply. We investigate the security of this joint usage of key material in the context of Sibling Signatures, which were introduced by Camenisch, Drijvers, and Dubovitskaya (ACM CCS 2017). It must be noted that we consider the standardised version of RSA-PSS (IEEE Standard P1363-2000), which deviates from the original scheme considered in all previous papers. We show that this joint usage is indeed secure, achieving a security level that closely matches that of PKCS#1 v1.5 signatures, and that both schemes can be safely used together, provided the output lengths of the hash functions are chosen appropriately.
Millions of users routinely use Google to log in to websites supporting the standardised protocols OAuth 2.0 and OpenID Connect; the security of OAuth 2.0 and OpenID Connect is therefore of critical importance. As revealed in previous studies, relying parties (RPs) in practice often implement OAuth 2.0 incorrectly, and so many real-world OAuth 2.0 and OpenID Connect systems are vulnerable to attack. However, users of such flawed systems are typically unaware of these issues, and so are at risk of attacks that could result in unauthorised access to the victim user’s account at an RP. To address this threat, we have developed OAuthGuard, an OAuth 2.0 and OpenID Connect vulnerability scanner and protector, which works with RPs using Google’s OAuth 2.0 and OpenID Connect services. It protects user security and privacy even when RPs do not implement OAuth 2.0 or OpenID Connect correctly. We used OAuthGuard to survey the 1000 top-ranked websites supporting Google sign-in for the possible presence of five OAuth 2.0 or OpenID Connect security and privacy vulnerabilities, one of which has not previously been described in the literature. Of the 137 sites in our study that employ Google sign-in, 69 were found to suffer from at least one serious vulnerability. OAuthGuard was able to protect user security and privacy for 56 of these 69 RPs, and for the other 13 it was able to warn users that they were using an insecure implementation.
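As a concrete illustration of the kind of check a scanner in this space might perform (this is our own sketch, not OAuthGuard's actual implementation or its vulnerability list), one well-known OAuth 2.0 weakness is an authorization request that omits the CSRF-protecting `state` parameter:

```python
# Sketch of one plausible scanner check: does a Google OAuth 2.0
# authorization request omit the `state` parameter? A missing `state`
# leaves the flow open to CSRF-style attacks (RFC 6749, section 10.12).
# This is an illustrative example, not OAuthGuard's real logic.

from urllib.parse import urlparse, parse_qs

def missing_state_param(authz_url: str) -> bool:
    """Return True if this looks like a Google OAuth 2.0 authorization
    request that lacks the CSRF-protecting `state` parameter."""
    parts = urlparse(authz_url)
    if "accounts.google.com" not in parts.netloc:
        return False  # not a Google authorization endpoint
    params = parse_qs(parts.query)
    is_authz_request = "client_id" in params and "response_type" in params
    return is_authz_request and "state" not in params

vulnerable = ("https://accounts.google.com/o/oauth2/v2/auth"
              "?client_id=example&response_type=code"
              "&redirect_uri=https://rp.example/cb")
print(missing_state_param(vulnerable))  # True: no `state` parameter present
```

A real scanner would intercept the RP's live authorization requests in the browser rather than inspect static URLs, but the flagged condition is the same.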
SESSION: Session 3
3GPP is currently studying AKMA (Authentication and Key Management for Applications): a mobile network service intended to support authentication and key management for third-party applications and 3GPP services, based on 3GPP credentials in the 5G system. AKMA extends and evolves two earlier services that 3GPP specified for previous generations of mobile systems: GBA (Generic Bootstrapping Architecture) and BEST (Battery Efficient Security for very low Throughput Machine Type Communication (MTC) devices). In this paper, we first analyze potential AKMA requirements in the 3GPP study in comparison with those of GBA and BEST. We identify two new privacy requirements that could be useful to protect the privacy of a user’s transactions with an AKMA application function against, e.g., an insider attacker in that user’s home network. Second, we develop a privacy mode for AKMA that fulfils those new requirements.
The importance of secure development of new technologies is unquestioned, yet the best methods to achieve this goal are far from certain. A key issue is that while significant effort is given to evaluating the outcomes of development (e.g., the security of a given project), it is far more difficult to determine which organizational practices result in secure projects. In this paper, we quantitatively examine efforts to improve the consideration of security in Requests for Comments (RFCs), the design documents for the Internet and many related systems, through the mandates and guidelines issued to RFC authors. We begin by identifying six metrics that quantify the quantity and quality of security-informative content. We then apply these metrics longitudinally over 8,437 documents spanning 49 years of development to determine whether guidance to RFC authors changed these security metrics in later documents. We find that even a simply worded, but effectively enforced, mandate to explicitly consider security significantly increased the discussion and topic coverage of security content, both in and outside of the mandated security considerations section. We also find that later guidelines with more detailed advice on security improve both the volume and quality of security-informative content in RFCs. Our work demonstrates that even modest amounts of guidance can correlate with significant improvements in security focus in RFCs, indicating a promising approach for other network standards bodies.
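To make the notion of a quantitative security metric over document text concrete, here is a minimal sketch of one volume-style measure: the density of security-related terms per thousand words. The keyword list and scoring are our own illustrative assumptions, not the paper's actual six metrics.

```python
# A toy "quantity of security-informative content" metric: mentions of
# security-related terms per 1000 words of an RFC's text. Keyword list
# and normalization are illustrative choices, not the study's metrics.

import re

SECURITY_TERMS = ("security", "attack", "authentication",
                  "confidentiality", "integrity", "encryption")

def security_term_density(rfc_text: str) -> float:
    """Security-term mentions per 1000 words; a crude volume metric."""
    words = re.findall(r"[A-Za-z']+", rfc_text.lower())
    if not words:
        return 0.0
    hits = sum(words.count(term) for term in SECURITY_TERMS)
    return 1000.0 * hits / len(words)

sample = ("The protocol MUST use encryption. "
          "Security considerations: replay attack risks.")
print(security_term_density(sample))  # 3 hits in 10 words -> 300.0
```

Applied longitudinally, one would compute such scores per RFC and compare distributions before and after a given mandate or guideline took effect.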
SESSION: Session 4
While designers of cryptographic algorithms are rarely considered potential adversaries, past examples, such as the standardization of the Dual EC PRNG, highlight that the story may be more complicated.
To prevent the existence of backdoors, the concept of rigidity was introduced in the specific context of curve generation. The idea is to first state a strict scope statement for the properties that the curve needs to have, and then to pick, for example, the candidate with the smallest parameters. The aim is to ensure that the designers did not have degrees of freedom that would allow the insertion of a trapdoor.
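The rigid selection process described above can be sketched as a deterministic search: fix the required properties in advance, then take the smallest candidate satisfying them, leaving the designer no hidden choices. The toy property below (smallest prime congruent to 3 mod 4 above a floor) is a stand-in of ours, not any real standard's curve criterion.

```python
# Toy sketch of "rigid" parameter selection: the properties are fixed up
# front and the smallest satisfying candidate is taken deterministically,
# so anyone can re-run the search and verify the choice. The property
# itself (prime, congruent to 3 mod 4) is purely illustrative.

def is_prime(n: int) -> bool:
    """Trial-division primality test, adequate for small toy parameters."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def rigid_pick(floor: int) -> int:
    """Smallest n >= floor with n prime and n % 4 == 3."""
    n = floor
    while not (is_prime(n) and n % 4 == 3):
        n += 1
    return n

print(rigid_pick(1000))  # -> 1019; any verifier re-running this gets the same value
```

The point is reproducibility: because the selection rule is exhaustive and public, a designer cannot quietly steer the search toward a parameter with a hidden trapdoor.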
In this paper, we apply this approach to symmetric algorithms. The task is challenging because the corresponding primitives are more complex: they consist of several sub-components of different types, and the properties required by these sub-components to achieve the desired security level are not as clearly defined. Furthermore, security often comes in this case from the interplay between these components rather than from their individual properties.
In this paper, we argue that it is nevertheless necessary to demand that symmetric algorithms have a similar but, due to their different nature, more complex property which we call “unswervingness”. We motivate this need via a study of the literature on symmetric “kleptography” and via the study of some real-world standards. We then suggest some guidelines that could be used to leverage the unswervingness of a symmetric algorithm to standardize a highly trusted and equally safe variant of it.