Papers updated in last 31 days (301 results)
Pacmann: Efficient Private Approximate Nearest Neighbor Search
We propose a new private Approximate Nearest Neighbor (ANN) search scheme named Pacmann that allows a client to perform ANN search in a vector database without revealing the query vector to the server. Unlike prior constructions that run encrypted search on the server side, Pacmann carefully offloads limited computation and storage to the client, no longer requiring computationally-intensive cryptographic techniques. Specifically, clients run a graph-based ANN search, where in each hop on the graph, the client privately retrieves local graph information from the server. To make this efficient, we combine two ideas: (1) we adapt a leading graph-based ANN search algorithm to be compatible with private information retrieval (PIR) for subgraph retrieval; (2) we use a recent class of PIR schemes that trade offline preprocessing for online computational efficiency. Pacmann achieves significantly better search quality than the state-of-the-art private ANN search schemes, showing up to 2.5$\times$ better search accuracy on real-world datasets than prior work and reaching 90\% quality of a state-of-the-art non-private ANN algorithm. Moreover, on large datasets with up to 100 million vectors, Pacmann shows better scalability than prior private ANN schemes
with up to $63\%$ reduction in computation time and $24\%$ reduction in overall latency.
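To make the client-driven search loop concrete, here is a minimal Python sketch of a greedy graph walk in which every neighbor-list access goes through a stand-in `pir_fetch` oracle. The function name and data layout are hypothetical; a real deployment would replace the stub with an actual preprocessing-based PIR query so the server never learns which node is visited.

```python
import numpy as np

def pir_fetch(server_graph, server_vectors, node_id):
    """Stand-in for a preprocessing-based PIR query; a real client would learn
    this node's neighbor list and vector without revealing node_id."""
    return server_graph[node_id], server_vectors[node_id]

def private_greedy_search(query, entry, server_graph, server_vectors, max_hops=50):
    current = entry
    neighbors, vec = pir_fetch(server_graph, server_vectors, current)
    best_dist = np.linalg.norm(query - vec)
    for _ in range(max_hops):
        improved = False
        for nb in neighbors:
            _, nb_vec = pir_fetch(server_graph, server_vectors, nb)
            d = np.linalg.norm(query - nb_vec)
            if d < best_dist:
                best_dist, current, improved = d, nb, True
        if not improved:
            break                       # greedy local minimum reached
        neighbors, _ = pir_fetch(server_graph, server_vectors, current)
    return current, best_dist

# toy database: 4 vectors, a small proximity graph, query near vector 2
vecs = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
graph = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(private_greedy_search(np.array([0.9, 0.9]), 0, graph, vecs))  # node 2
```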
On the complexity of the problem of solving systems of tropical polynomial equations of degree two
In this paper, we investigate the computational complexity of the problem of solving a one-sided system of equations of degree two of a special form over the max-plus algebra. We also consider the asymptotic density of solvable systems of this form. Such systems have appeared during the analysis of some recently suggested tropical cryptography protocols. We show how this problem is related to the integer linear programming problem and prove that it is NP-complete. We show that the asymptotic density of solvable systems of this form, with some restrictions on the coefficients, the number of variables, and the number of equations, is 0. As a corollary, we prove that this problem (with some restrictions on the coefficients, the number of variables, and the number of equations) is decidable generically in polynomial time.
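For intuition, the toy sketch below shows what a degree-two max-plus equation looks like and solves a tiny system by exhaustion. Only the generic shape of tropical degree-two monomials is shown; the paper's special form and its ILP connection are not reproduced here.

```python
# Max-plus (tropical) degree two: a monomial a + x_i + x_j plays the role of
# a*x_i*x_j, and "sum" becomes max.
from itertools import product

def trop_poly(monomials, x):
    # each monomial is (coeff, i, j); the tropical product is ordinary addition
    return max(c + x[i] + x[j] for (c, i, j) in monomials)

def brute_force_solve(equations, n_vars, value_range):
    """Find x with trop_poly(lhs, x) == rhs for every equation, by exhaustion."""
    for x in product(value_range, repeat=n_vars):
        if all(trop_poly(lhs, x) == rhs for (lhs, rhs) in equations):
            return x
    return None

# max(0 + x0 + x0, 1 + x0 + x1) = 3  and  max(0 + x1 + x1) = 2
eqs = [([(0, 0, 0), (1, 0, 1)], 3), ([(0, 1, 1)], 2)]
print(brute_force_solve(eqs, 2, range(-2, 3)))  # (1, 1): max(2, 3) = 3 and 2 = 2
```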
Weightwise Almost Perfectly Balanced Functions, Construction From A Permutation Group Action View
The construction of Boolean functions with good cryptographic properties over subsets of vectors with fixed Hamming weight is significant for lightweight stream ciphers like FLIP. In this article, we propose a general method to construct a class of Weightwise Almost Perfectly Balanced (WAPB) Boolean functions using the action of a cyclic permutation group on $\mathbb{F}_2^n$. This class generalizes the Weightwise Perfectly Balanced (WPB) $2^m$-variable Boolean function construction by Liu and Mesnager to any $n$. We show how to bound the nonlinearity and weightwise nonlinearities of functions from this construction. Additionally, we explore two significant permutation groups, $\langle \psi \rangle$ and $\langle \sigma \rangle$, where $\psi$ is a binary-cycle permutation and $\sigma$ is a rotation. We theoretically analyze the cryptographic properties of the WAPB functions derived from these permutations and experimentally evaluate their nonlinearity parameters for $n$ between 4 and 10.
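As a concrete companion to the definitions, here is a small Python checker (an assumed shape, not from the paper) that computes the weightwise profile of a Boolean function, i.e. how many inputs of each Hamming weight map to 1. WPB/WAPB functions keep every slice (almost) perfectly balanced.

```python
from itertools import combinations

def weightwise_profile(f, n):
    """For each Hamming weight k, return (#ones of f on the slice, slice size)."""
    profile = {}
    for k in range(n + 1):
        ones = slice_size = 0
        for support in combinations(range(n), k):
            x = [0] * n
            for i in support:
                x[i] = 1
            slice_size += 1
            ones += f(x)
        profile[k] = (ones, slice_size)
    return profile

# Example with a toy function (not claimed to be WAPB): XOR of all bits.
# Parity is constant on each weight-k slice, so it is maximally unbalanced there.
print(weightwise_profile(lambda x: sum(x) % 2, 4))
```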
Bypassing the characteristic bound in logUp
In this informal note, we describe how to bypass the characteristic bound in logUp [eprint 2022/1530] by abstracting the notion of (pole) multiplicity. The method applies as well to the GKR-variant from Papini and Haböck [eprint 2023/1284], and it moreover unlocks fractional decomposition lookups over binary fields.
Exact Template Attacks with Spectral Computation
The so-called Gaussian template attack (TA) is one of the optimal Side-Channel Analyses (SCA) when the measurements are captured with normal (Gaussian) noise.
In the SCA literature, several optimizations of its implementation have been introduced, such as coalescence and spectral computation. Coalescence consists of averaging traces corresponding to the same plaintext value, thereby coalescing (i.e., compacting) the dataset. Spectral computation consists of sharing the computational workload when estimating likelihoods across key hypotheses.
State-of-the-art coalescence leverages the Law of Large Numbers (LLN) to compute the mean of equivalent traces.
This approach has a drawback: the LLN is only an asymptotic approximation, so it does not lead to an exact Template Attack, especially when only a few traces are available.
In this paper, we introduce a way of calculating the TA exactly and with the same computational complexity (using the spectral approach), without using the LLN, regardless of the number of messages.
For the experimental validation of this approach, we use the ANSSI SCA Database (ASCAD), with different numbers of messages and different numbers of samples per trace.
Recall that this dataset concerns a software implementation of AES-128 running on an ATMega8515 microcontroller.
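For readers unfamiliar with coalescence, the following numpy sketch shows the LLN-based preprocessing the paper improves upon: traces are grouped by plaintext byte and averaged. With few traces per value, these empirical means are poor approximations, which is exactly the regime where the exact computation matters.

```python
import numpy as np

def coalesce(traces, plaintexts):
    """Average traces per plaintext byte value.
    traces: (num_traces, samples_per_trace); plaintexts: one byte per trace."""
    means = np.zeros((256, traces.shape[1]))
    counts = np.zeros(256, dtype=int)
    for t, p in zip(traces, plaintexts):
        means[p] += t
        counts[p] += 1
    nonzero = counts > 0
    means[nonzero] /= counts[nonzero, None]
    return means, counts  # per-value mean traces and how many traces back each mean

rng = np.random.default_rng(0)
traces = rng.normal(size=(1000, 700))          # synthetic stand-in for ASCAD traces
plaintexts = rng.integers(0, 256, size=1000)
means, counts = coalesce(traces, plaintexts)   # ~4 traces per value: weak LLN regime
```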
COCO: Coconuts and Oblivious Computations for Orthogonal Authentication
Authentication often bridges real-world individuals and their virtual public identities, like usernames, user IDs and e-mails, exposing vulnerabilities that threaten user privacy. This research introduces COCO (Coconuts and Oblivious Computations for Orthogonal Authentication), a framework that segregates roles among Verifiers, Authenticators, and Clients to achieve privacy-preserving authentication.
COCO eliminates the need for Authenticators to directly access virtual public identifiers or real-world identifiers for authentication. Instead, the framework leverages Oblivious Pseudorandom Functions (OPRFs) and an extended Coconut Credential Scheme to ensure privacy by introducing separate, unlinkable, orthogonal authentication identifiers and a full-consensus mechanism to perform zero-knowledge authentications whose proofs are unlinkable across multiple sessions. The authentication process becomes self-contained, preventing definitive reverse tracing of virtual public identifiers to real-world identifiers.
Greco: Fast Zero-Knowledge Proofs for Valid FHE RLWE Ciphertexts Formation
Fully homomorphic encryption (FHE) allows for evaluating arbitrary functions over encrypted data. In multi-party FHE applications, different parties encrypt their secret data and submit ciphertexts to a server, which, according to the application logic, performs homomorphic operations on them. For example, in a secret voting application, the tally is computed by summing up the ciphertexts encoding the votes. Valid encrypted votes are of the form $E(0)$ and $E(1)$. A malicious voter could send an invalid encrypted vote such as $E(145127835)$, which could corrupt the whole election. Because of that, users must prove that the ciphertext they submitted is a valid Ring-Learning with Errors (RLWE) ciphertext and that the plaintext message they encrypted is a valid vote (for example, either a 1 or a 0). Greco uses a zero-knowledge proof to let a user prove that their RLWE ciphertext is well-formed or, in other words, that the encryption operation was performed correctly. The resulting proof can therefore be composed with additional application-specific logic and subject to public verification in a non-interactive setting. Considering the secret voting application, one can prove further properties of the message being encrypted or even properties about the voter, allowing the application to support anonymous voting as well. The prover has been implemented using Halo2-lib as a proving system, and the benchmarks have shown that Greco can already be integrated into user-facing applications without creating excessive friction for the user. The implementation is available at https://github.com/privacy-scaling-explorations/greco
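To make the well-formedness statement concrete, here is a toy RLWE encryption of a vote bit in Python. The parameters are illustrative only (this is neither Greco's instantiation nor its proof system): an honest ciphertext is exactly one built from small randomness $r, e_1, e_2$ and a message in $\{0,1\}$, which is the relation a validity proof must certify.

```python
import numpy as np

n, q = 16, 3329                               # toy ring Z_q[X]/(X^n + 1)
rng = np.random.default_rng(0)

def polymul(a, b):
    """Negacyclic convolution: multiplication modulo X^n + 1 and q."""
    c = np.zeros(n, dtype=np.int64)
    for i in range(n):
        for j in range(n):
            if i + j < n:
                c[i + j] += a[i] * b[j]
            else:
                c[i + j - n] -= a[i] * b[j]
    return c % q

s = rng.integers(-1, 2, n)                    # small secret
e = rng.integers(-1, 2, n)                    # small error
a = rng.integers(0, q, n)
b = (polymul(a, s) + e) % q                   # public key (a, b)

def encrypt(m_bit):
    """A well-formed RLWE encryption of one bit: small r, e1, e2, m in {0, 1}."""
    r, e1, e2 = (rng.integers(-1, 2, n) for _ in range(3))
    msg = np.zeros(n, dtype=np.int64)
    msg[0] = m_bit
    u = (polymul(a, r) + e1) % q
    v = (polymul(b, r) + e2 + (q // 2) * msg) % q
    return u, v

u, v = encrypt(1)
noisy = (v - polymul(u, s)) % q               # = e*r + e2 - e1*s + (q//2)*m
decoded = 1 if abs(int(noisy[0]) - q // 2) < q // 4 else 0
print(decoded)                                # 1: constant coefficient sits near q/2
```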
Partial Exposure Attacks Against a Family of RSA-like Cryptosystems
An RSA generalization using complex integers was introduced by Elkamchouchi, Elshenawy, and Shaban in 2002. This scheme was further extended by Cotan and Teșeleanu to Galois fields of order $n \geq 1$. In this generalized framework, the key equation is $ed - k (p^n-1)(q^n-1) = 1$, where $p$ and $q$ are prime numbers. Note that the classical RSA and the Elkamchouchi \emph{et al.} key equations are the special cases $n=1$ and $n=2$, respectively. In addition to introducing this generic family, Cotan and Teșeleanu describe a continued fractions attack capable of recovering the secret key $d$ if $d < N^{0.25n}$. This bound was later improved by Teșeleanu using a lattice based method. In this paper, we explore other lattice attacks that could lead to factoring the modulus $N = pq$. Namely, we propose a series of partial exposure attacks that can aid an adversary in breaking this family of cryptosystems if certain conditions hold.
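A quick sanity check of the key equation with toy parameters ($n = 2$, illustrative primes only):

```python
from math import gcd

p, q, n = 11, 13, 2
phi = (p**n - 1) * (q**n - 1)        # the (p^n - 1)(q^n - 1) term from the key equation
e = 17
assert gcd(e, phi) == 1
d = pow(e, -1, phi)                  # private exponent: e*d = 1 (mod phi)
k = (e * d - 1) // phi
assert e * d - k * phi == 1          # the key equation e*d - k*(p^n - 1)*(q^n - 1) = 1
print(f"d = {d}, k = {k}")
```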
Security Analysis of a Color Image Encryption Scheme Based on a Fractional‑Order Hyperchaotic System
In 2022, Hosny et al. introduced an image encryption scheme that employs a fractional-order chaotic system. Their approach uses the hyper-chaotic system to generate the system's main parameter, namely a secret permutation that depends on the size and the sum of the pixels of the source image. According to the authors, their scheme offers adequate security (i.e. $498$ bits) for transmitting color images over unsecured channels. Nevertheless, in this paper we show that the scheme's security is independent of the secret parameters used to initialize the hyper-chaotic system. More precisely, we provide a brute-force attack whose complexity is $\mathcal O(2^{10.57}(WH)^3)$ and which needs $2^{9.57}WH$ oracle queries, where $W$ and $H$ are the width and the height of the encrypted image. For example, for an image of size $4000 \times 3000$ (a $12$-megapixel image) we obtain a security margin of $81.11$ bits, which is six times lower than the claimed bound. To achieve this result, we present two cryptanalytic attacks, namely a chosen plaintext attack and a chosen ciphertext attack.
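The abstract's security margin for the 12-megapixel example can be reproduced with one line of arithmetic:

```python
# Expressing the attack complexity O(2^10.57 * (W*H)^3) in bits.
from math import log2

W, H = 4000, 3000
security_bits = 10.57 + 3 * log2(W * H)
print(round(security_bits, 2))   # ~81.12 bits, matching the abstract's 81.11 up to rounding
```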
(Deep) Learning about Elliptic Curve Cryptography
Motivated by the interest in elliptic curves both from a theoretical (algebraic geometry) and applied (cryptography) perspective, we conduct a preliminary study of the mathematical structure underlying these objects.
Concretely, this paper focuses on investigating artificial intelligence techniques to enhance the efficiency of Schoof's point-counting algorithm across various elliptic curve distributions, achieving varying levels of success.
The Number of the Beast: Reducing Additions in Fast Matrix Multiplication Algorithms for Dimensions up to 666
While a naive algorithm for multiplying two 2 × 2 matrices requires
eight multiplications and four additions, Strassen showed how to compute the same matrix product using seven multiplications and 18 additions. Winograd reduced the number of additions to 15, which was assumed to be optimal. However, by introducing a change of basis, Karstadt and Schwartz showed how to lower the number of additions to 12, which they further showed to be optimal within this generalized Karstadt-Schwartz (KS) framework. Since the cost of the change of basis is smaller than one addition (per recursive level), it is disregarded in this cost metric.
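For reference, the sketch below spells out Strassen's 2 × 2 scheme with its 7 multiplications and 18 additions/subtractions, the baseline that Winograd (15 additions) and Karstadt-Schwartz (12, after a change of basis) improve upon:

```python
def strassen_2x2(A, B):
    """Strassen's scheme: 7 products, 18 additions/subtractions in total."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) * (b11 + b22)   # 10 additions on the inputs ...
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    c11 = m1 + m4 - m5 + m7          # ... and 8 more on the outputs: 18 total
    c12 = m3 + m5
    c21 = m2 + m4
    c22 = m1 - m2 + m3 + m6
    return [[c11, c12], [c21, c22]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```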
In this work we present improved methods for classical optimization of the number of additions in Strassen-type matrix multiplication schemes, but for larger matrix sizes, and without including any change of basis.
We find specific examples where our methods beat the best instances found within the KS framework, the most impressive one being Laderman's algorithm for multiplying 3 × 3 matrices, which we reduce from the naive 98 additions to 62, compared to 74 in the KS framework. We indicate that our approach outperforms previous work within the KS framework as the matrix dimensions increase.
We additionally apply our algorithms to another reference set of algorithms due to Moosbauer and Kauers for which we do not have results in the KS framework as a comparison. For parameters (n, m, k), when multiplying an (n × m) matrix with an (m × k) matrix, the schoolbook algorithm uses nk(m − 1) additions. Using the reference set of algorithms we find that our algorithm leads to an optimized number of additions of roughly cnk(m − 1), where c is a small constant which is independent of the dimensions.
We further provide concrete and practical implementations of our methods that are very efficient for dimensions including (and surpassing) the 666 limit, i.e. (n, m, k) = (6, 6, 6), in our reference set of fast multiplication algorithms.
Security Analysis of SFrame
Increasing privacy consciousness has popularized the use of end-to-end encryption (E2EE). In this paper, we discuss the security of SFrame, an E2EE mechanism proposed to the Internet Engineering Task Force for video/audio group communications over the Internet. Despite being a quite recent project, SFrame has been deployed in several real-world applications. We evaluate the original specification of SFrame and find critical issues that allow a malicious group member to mount impersonation (forgery) attacks with practical complexity. Further investigation reveals that these issues are present in several publicly available SFrame implementations. We therefore provide countermeasures against all the presented attacks, along with performance and security considerations for implementing them.
Two Halves Make a Whole: How to Reconcile Soundness and Robustness in Watermarking for Large Language Models
Watermarking techniques have been used to safeguard AI-generated content. In this paper, we study publicly detectable watermarking schemes (Fairoze et al.) and report several research findings.
First, we observe that two important security properties, robustness and soundness, may conflict with each other. We then formally investigate these two properties in the presence of an arguably more realistic adversary that we call the editing adversary, and prove an impossibility result: robustness and soundness cannot both be achieved by a single publicly-detectable watermarking scheme.
Second, we demonstrate our main result: we introduce, for the first time, the concept of a publicly-detectable dual watermarking scheme for AI-generated content. We provide a novel construction using two publicly-detectable watermarking schemes, each achieving "half" of the two required properties: one achieves robustness, the other soundness. We then combine the two halves into a whole, achieving robustness and soundness at the same time. Our construction has been implemented and evaluated.
Programming Equation Systems of Arithmetization-Oriented Primitives with Constraints
Arithmetization-Oriented (AO) cryptographic algorithms operate on large finite fields. The most threatening attack on such designs is the Gröbner basis attack, which solves the equation system encoded from the cryptanalysis problem. However, encoding a primitive as a system of equations is not unique, and finding the optimal one with low solving complexity is a challenge. This paper introduces an automatic tool that converts the CICO problem into a Mixed-Integer Quadratic Constraint Programming (MIQCP) model, using integer variables and constraints to track degree propagation and determine variable introduction points. The optimal MIQCP solution provides the lowest solving complexity. We build models for the Griffin, Anemoi, and Ciminion permutations to cover their modules comprehensively. Experiments show reduced Gröbner basis attack complexity, lower than the designers' bounds for small numbers of rounds, e.g. up to 8 rounds for Griffin. This tool can be used for security evaluation against Gröbner basis attacks in new designs.
Miller Inversion is Easy for the Reduced Tate Pairing of Embedding Degree Greater than one
We present algorithms for Miller inversion for the reduced Tate pairing with embedding degree k > 1. Let q be the number of elements of the field of definition of an elliptic curve. For even k, our algorithm runs deterministically with O((k log q)^3) bit operations. For odd k, our algorithm runs probabilistically with O(k^6 (log q)^3) bit operations on average.
PEReDi: Privacy-Enhanced, Regulated and Distributed Central Bank Digital Currencies
Central Bank Digital Currencies (CBDCs) aspire to offer a digital replacement for physical cash and as such need to tackle two fundamental requirements that are in conflict. On the one hand, it is desired that they be private, so that a financial "panopticon" is avoided, while on the other, they should be regulation friendly in the sense of facilitating any threshold-limiting, tracing, and counterparty auditing functionality that is necessary to comply with regulations such as Know Your Customer (KYC), Anti-Money Laundering (AML) and Combating the Financing of Terrorism (CFT), as well as financial stability considerations.
In this work, we put forth a new asynchronous model for CBDCs and an efficient construction that, for the first time, fully addresses these issues simultaneously. Moreover, recognizing the importance of avoiding a single point of failure, our construction is distributed so that all its properties can withstand a suitably bounded number of entities getting corrupted by an adversary. Achieving all the above properties efficiently is technically involved; among others, our construction uses suitable cryptographic tools to thwart man-in-the-middle attacks, it showcases a novel traceability mechanism with significant performance gains compared to previously known techniques and, perhaps surprisingly, shows how to obviate Byzantine agreement or broadcast from the optimistic execution path of a payment, something that results in an essentially optimal communication pattern and communication overhead. We demonstrate the efficiency of our payment system by presenting detailed computation and communication costs. Going beyond "simple" payments, we also discuss how our scheme can facilitate one-off large transfers complying with Know Your Transaction (KYT) disclosure requirements. Our CBDC concept is expressed and realized in the Universal Composition (UC) framework, providing in this way a modular and secure way to embed it within a larger financial ecosystem.
Minimizing the Use of the Honest Majority in YOSO MPC with Guaranteed Output Delivery
Cleve (STOC 86) shows that an honest majority is necessary for MPC with guaranteed output delivery. In this paper, we show that while an honest majority is indeed necessary, its involvement can be minimal. We demonstrate an MPC protocol with guaranteed output delivery, the majority of which is executed by a sequence of committees with dishonest majority; we leverage one committee with an honest majority, each member of which does work independent of the circuit size. Our protocol has the desirable property that every participant speaks only once (YOSO, Crypto 2021).
As a building block of independent interest, we introduce public computation, which is essentially privacy-free MPC with guaranteed output delivery (akin to smart contracts realized on blockchains). We instantiate public computation on a public bulletin board in three different ways (with different assumption / round / space utilization trade-offs).
A Heuristic Proof of P $\neq$ NP
The question of whether the complexity class P equals NP is a major unsolved problem in theoretical computer science. In this paper, we introduce a new language, the Add/XNOR problem, which has the simplest structure and perfect randomness, obtained by extending the subset sum problem. We prove that P $\neq$ NP by showing that square-root complexity is necessary to solve the Add/XNOR problem. That is, problems that are verifiable in polynomial time are not necessarily solvable in polynomial time.
Analysis of four protocols based on tropical circulant matrices
Several key exchange protocols based on tropical circulant matrices were proposed in the last two years. In this paper, we show that protocols offered by M. Durcheva [M. I. Durcheva. TrES: Tropical Encryption Scheme Based on Double Key Exchange. In: Eur. J. Inf. Tech. Comp. Sci. 2.4 (2022), pp. 11–17], by B. Amutha and R. Perumal [B. Amutha and R. Perumal. Public key exchange protocols based on tropical lower circulant and anti-circulant matrices. In: AIMS Math. 8.7 (2023), pp. 17307–17334.], and by H. Huang, C. Li, and L. Deng [H. Huang, C. Li, and L. Deng. Public-Key Cryptography Based on Tropical Circular Matrices. In: Appl. Sci. 12.15 (2022), p. 7401] are insecure.
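For background, the sketch below shows the max-plus arithmetic these protocols rely on: tropical "addition" is max, tropical "multiplication" is ordinary addition, and tropical circulant matrices commute under this product, which is the algebraic structure such key exchanges build on (and that the attacks exploit).

```python
def circulant(row):
    """A (tropical) circulant matrix is determined by its first row."""
    n = len(row)
    return [[row[(j - i) % n] for j in range(n)] for i in range(n)]

def trop_matmul(A, B):
    """Tropical matrix product: entry (i, j) is max_k (A[i][k] + B[k][j])."""
    n = len(A)
    return [[max(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = circulant([0, 3, 1])
B = circulant([2, 0, 5])
# Circulants are polynomials in the cyclic-shift matrix, so their tropical
# products commute, regardless of the entries.
print(trop_matmul(A, B) == trop_matmul(B, A))  # True
```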
"These results must be false": A usability evaluation of constant-time analysis tools
Cryptography secures our online interactions, transactions, and trust. To achieve this goal, not only do the cryptographic primitives and protocols need to be secure in theory, they also need to be securely implemented by cryptographic library developers in practice.
However, implementing cryptographic algorithms securely is challenging, even for skilled professionals, which can lead to vulnerable implementations, especially to side-channel attacks. For timing attacks, a severe class of side-channel attacks, there exists a multitude of tools that are supposed to help cryptographic library developers assess whether their code is vulnerable to timing attacks. Previous work has established that despite an interest in writing constant-time code, cryptographic library developers do not routinely use these tools due to their general lack of usability. However, the precise factors affecting the usability of these tools remain unexplored. While many of the tools are developed in an academic context, we believe that it is worth exploring the factors that contribute to or hinder their effective use by cryptographic library developers.
To assess what contributes to and detracts from the usability of tools that verify constant-timeness (CT), we conducted a two-part usability study with 24 (post)graduate student participants on 6 tools across diverse tasks that approximate real-world use cases for cryptographic library developers.
We find that all studied tools are affected by similar usability issues to varying degrees, with no tool excelling in usability, and usability issues preventing their effective use.
Based on our results, we recommend that effective tools for verifying CT need usable documentation, simple installation, easy-to-adapt examples, clear output corresponding to CT violations, and minimal noninvasive code markup. We contribute first steps towards achieving these goals with limited academic resources, providing our documentation, examples, and installation scripts.
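As a miniature of the property these tools check, the snippet below contrasts an early-exit byte comparison, whose running time leaks the position of the first mismatch, with the constant-time comparison from Python's standard library. Python is used here only for illustration; the studied tools target C-level cryptographic code.

```python
import hmac

def leaky_equals(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:          # exits at the first difference: a timing leak
            return False
    return True

def constant_time_equals(a: bytes, b: bytes) -> bool:
    return hmac.compare_digest(a, b)   # scans all bytes regardless of content
```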
Logstar: Efficient Linear* Time Secure Merge
Secure merge considers the problem of combining two sorted lists into a single sorted secret-shared list. Merge is a fundamental building block for many real-world applications. For example, secure merge can implement a large number of SQL-like database joins, which are essential for almost any data processing task such as privacy-preserving fraud detection, ad conversion rates, data deduplication, and many more.
We present two constructions with communication bandwidth and rounds tradeoff. Logstar, our bandwidth-optimized construction, takes inspiration from Falk and Ostrovsky (ITC, 2021) and runs in $O(n\log^*n)$ time and communication with $O(\log n)$ rounds. In particular, for all conceivable $n$, the $\log^*n$ factor will be equal to the constant $2$, and therefore we achieve a near-linear running time. Median, our rounds-optimized construction, builds on the classic parallel medians-based insecure merge approach of Valiant (SIAM J. Comput., 1975), later explored in the secure setting by Blunk et al. (2022), and requires $O(n \log^c n)$, $c \approx 1.71$, communication with $O(\log \log n)$ rounds.
We introduce two additional constructions that merge input lists of different sizes. SquareRootMerge merges lists of sizes $n^{\frac{1}{2}}$ and $n$ and runs in $O(n)$ time and communication with $O(\log n)$ rounds. CubeRootMerge is closely inspired by Blunk et al.'s (2022) construction and merges lists of sizes $n^{\frac{1}{3}}$ and $n$. It runs in $O(n)$ time and communication with $O(1)$ rounds.
We optimize our constructions for concrete efficiency. Today, concretely efficient secure merge protocols rely on standard techniques such as Batcher's merging network or generic sorting. These approaches require an $O(n \log n)$ size circuit of $O(\log n)$ depth. Despite significant research effort, no work has been able to reduce their concrete costs. Our constructions are the first to be more efficient, improving the asymptotics while maintaining small constants. We analytically benchmark against these constructions and show that Logstar reduces bandwidth costs $\approx2.0\times$ and Median reduces rounds $\approx1.5\times$.
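For reference, here is the Batcher odd-even merging network mentioned above, in a standard generator formulation (power-of-two sizes assumed). Applying the yielded compare-exchange pairs merges two sorted halves with $O(n \log n)$ data-independent comparators, which is what makes the circuit-based approach compatible with secure computation.

```python
def oddeven_merge(lo, hi, r=1):
    """Yield comparator pairs merging two sorted halves of a[lo..hi] (inclusive)."""
    step = r * 2
    if step < hi - lo:
        yield from oddeven_merge(lo, hi, step)
        yield from oddeven_merge(lo + r, hi, step)
        yield from ((i, i + r) for i in range(lo + r, hi - r, step))
    else:
        yield (lo, lo + r)

def merge_sorted_halves(a):
    comparators = list(oddeven_merge(0, len(a) - 1))
    for i, j in comparators:
        if a[i] > a[j]:
            a[i], a[j] = a[j], a[i]   # data-independent compare-exchange
    return a, len(comparators)

print(merge_sorted_halves([1, 4, 6, 9, 2, 3, 7, 8]))
# ([1, 2, 3, 4, 6, 7, 8, 9], 9) -- 9 comparators for n = 8
```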
Anonymous credentials from ECDSA
Anonymous digital credentials allow a user to prove possession of an attribute that has been asserted by an identity issuer without revealing any extra information about themselves. For example, a user who has received a digital passport credential can prove their “age is $>18$” without revealing any other attributes such as their name or date of birth.
Despite inherent value for privacy-preserving authentication, anonymous credential schemes have been difficult to deploy at scale. Part of the difficulty arises because schemes in the literature, such as BBS, use new cryptographic assumptions that require system-wide changes to existing issuer infrastructure. In addition, issuers often require digital identity credentials to be *device-bound* by incorporating the device's secure element into the presentation flow. As a result, schemes like BBS require updates to the hardware secure elements and OS on every user's device.
In this paper, we propose a new anonymous credential scheme for the popular and legacy-deployed Elliptic Curve Digital Signature Algorithm (ECDSA) signature scheme. By adding efficient zk arguments for statements about SHA256 and document parsing for ISO-standardized identity formats, our anonymous credential scheme is the first one that can be deployed *without* changing any issuer processes, *without* requiring changes to mobile devices, and *without* requiring non-standard cryptographic assumptions.
Producing ZK proofs about ECDSA signatures has been a bottleneck for other ZK proof systems because standardized curves such as P256 use finite fields which do not support efficient number theoretic transforms. We overcome this bottleneck by designing a ZK proof system around sumcheck and the Ligero argument system, by designing efficient methods for Reed-Solomon encoding over the required fields, and by designing specialized circuits for ECDSA.
Our proofs for ECDSA can be generated in 60ms. When incorporated into a fully standardized identity protocol such as the ISO MDOC standard, we can generate a zero-knowledge proof for the MDOC presentation flow in 1.2 seconds on mobile devices depending on the credential size. These advantages make our scheme a promising candidate for privacy-preserving digital identity applications.
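For orientation, the relation that such a ZK argument attests to is ordinary ECDSA verification over a SHA-256 digest, shown here in the clear with the `cryptography` Python package (the credential contents and variable names are hypothetical). The scheme in the paper proves knowledge of a valid signature without revealing it.

```python
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes

issuer_key = ec.generate_private_key(ec.SECP256R1())   # P-256, as in the paper
credential = b"mdoc: age_over_18 = true"               # hypothetical payload
signature = issuer_key.sign(credential, ec.ECDSA(hashes.SHA256()))

# Verification succeeds silently and raises InvalidSignature otherwise; the ZK
# circuit encodes exactly this check over the P-256 base field.
issuer_key.public_key().verify(signature, credential, ec.ECDSA(hashes.SHA256()))
```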
Bounded CCA Secure Proxy Re-encryption Based on Kyber
Proxy re-encryption (PRE) allows a semi-honest party (called a proxy) to convert a ciphertext under a public key into a ciphertext under another public key. This functionality enables various applications such as encrypted email forwarding, key escrow, and securing distributed file systems. Meanwhile, post-quantum cryptography (PQC) is one of the most important research areas because the development of quantum computers has advanced rapidly in recent years. In particular, there is a large body of research on the public key encryption (PKE) algorithms selected/submitted in the NIST (National Institute of Standards and Technology) PQC standardization. However, although many (post-quantum) PRE schemes have been proposed, there is no post-quantum PRE scheme secure against adaptive chosen ciphertext attacks (CCA security). In this paper, we propose a bounded CCA secure PRE scheme based on CRYSTALS-Kyber, a selected algorithm in the NIST PQC competition. To this end, we present generic constructions of bounded CCA secure PRE. Our generic constructions start from PRE secure against chosen plaintext attacks (CPA security). In order to instantiate our generic constructions, we present a CPA secure PRE scheme based on CRYSTALS-Kyber.
Blue fish, red fish, live fish, dead fish
We show that the DAG-based consensus protocol Tusk [DKSS22] does not achieve liveness, at least under certain reasonable assumptions on the implementation that are consistent with its specification. In addition, we give a simple 2-round variation of Tusk with lower latency and strong liveness properties, but with suboptimal resilience. We also show that another 2-round protocol, GradedDAG [DZX+24], which has optimal resilience, has liveness problems analogous to those of Tusk.
Learning with Errors from Nonassociative Algebras
We construct a provably-secure structured variant of Learning with Errors (LWE) using nonassociative cyclic division algebras, assuming the hardness of worst-case structured lattice problems, for which we are able to give a full search-to-decision reduction, improving upon the construction of Grover et al. named `Cyclic Learning with Errors' (CLWE). We are thus able to create structured LWE over cyclic algebras without any restriction on the size of secret spaces, which was required for CLWE as a result of its restricted security proof. We reduce the shortest independent vectors problem in ideal lattices, obtained from ideals in orders of such algebras, to the decision variant of LWE defined for nonassociative CDAs. We believe this variant has greater security and greater freedom with parameter choices than CLWE, and greater asymptotic efficiency of multiplication than module LWE. Our reduction requires new results in the ideal theory of such nonassociative algebras, which may be of independent interest. We then adapt an LPR-like PKE scheme to hold for nonassociative spaces, and discuss the efficiency and security of our construction, showing that it is immune to certain subfield attacks. Finally, we give example parameters to construct algebras for cryptographic use.
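As background, here is plain (unstructured) LWE sampling in numpy with toy parameters; the structured variants discussed above replace the uniformly random matrix $A$ with algebraic structure, cyclotomic rings for ring/module LWE and nonassociative cyclic division algebras here.

```python
import numpy as np

n, m, q = 32, 64, 3329                 # toy dimensions and modulus
rng = np.random.default_rng(1)

A = rng.integers(0, q, size=(m, n))    # public, uniformly random
s = rng.integers(0, q, size=n)         # secret
e = rng.integers(-2, 3, size=m)        # small error
b = (A @ s + e) % q                    # LWE samples (A, b)

# Decision-LWE: distinguish (A, b) from (A, uniform); search-LWE: recover s.
# The paper's contribution includes a search-to-decision reduction for its
# nonassociative variant.
```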
Leveraging remote attestation APIs for secure image sharing in messaging apps
Sensitive pictures such as passport photos and nudes are commonly shared through mobile chat applications. One popular strategy for protecting the privacy of this material is to use ephemeral messaging features, such as the view once snaps in Snapchat. However, design limitations and implementation bugs in messaging apps may allow attackers to bypass the restrictions imposed by those features on the received material. One way attackers may accomplish this is by tampering with the software stack on their own devices. In this paper, we propose and test a protection strategy based on a multiplatform system that encrypts and decrypts sensitive pictures on top of messaging apps and performs remote attestation with available app integrity APIs to safeguard its security. Our analysis and experiments show that, compared to previous proposals for image encryption in a middleware, remote attestation offers increased security, adds privacy benefits, simplifies integration, and improves usability by not requiring users to exchange key material a priori. In our experiments, it incurs an added average latency of 3.8 and 4.5 seconds when sending and receiving private pictures, respectively.
NTWE: A Natural Combination of NTRU and LWE
Lattice-based cryptosystems are some of the primary post-quantum secure alternatives to the asymmetric cryptography that is used today. These lattice-based cryptosystems typically rely on the hardness of some version of either the NTRU or the LWE problem. In this paper, we present the NTWE problem, a natural combination of the NTRU and LWE problems, and construct a new lattice-based cryptosystem based on the hardness of the NTWE problem.
As with the NTRU and LWE problems, the NTWE problem naturally corresponds to a problem in a $q$-ary lattice. This allows the hardness of the NTWE problem to be estimated in the same way as it is estimated for the LWE and NTRU problems. We parametrize our cryptosystem from such a hardness estimate and the resulting scheme has performance that is competitive with that of typical lattice-based schemes.
In some sense, our NTWE-based cryptosystem can be seen as a less structured and more compact version of a cryptosystem based on the module-NTRU problem. Thus, parameters for our cryptosystem can be selected with the flexibility of a module-LWE-based scheme, while other properties of our system are more similar to those in an NTRU-based system.
Basic Lattice Cryptography: The concepts behind Kyber (ML-KEM) and Dilithium (ML-DSA)
This tutorial focuses on describing the fundamental mathematical concepts and design decisions used in the two ``main'' lattice schemes standardized by NIST and included in the CNSA 2.0 algorithmic suite. They are the KEM / encryption scheme CRYSTALS-Kyber (ML-KEM) and the signature scheme CRYSTALS-Dilithium (ML-DSA). In addition, we will also give the main ideas behind other lattice-based KEMs like Frodo and NTRU.
Zeroed Out: Cryptanalysis of Weak PRFs in Alternating Moduli
The growing adoption of secure multi-party computation (MPC) has driven the development of efficient symmetric-key primitives tailored for MPC. Recent advancements, such as the alternating moduli paradigm, have shown promise but leave room for cryptographic and practical improvements. In this paper, we analyze a family of weak pseudorandom functions (wPRF) proposed at Crypto 2024, focusing on the One-to-One parameter sets. We demonstrate that these configurations fail to achieve their intended one-to-one mappings and exploit this observation to develop an efficient key recovery attack. The attacks reveal significant vulnerabilities, reducing the complexity of key recovery to $O(2^{\lambda/2} \log_2 \lambda)$ for the Standard One-to-One wPRF and $O(2^{0.84\lambda})$ for the Reversed Moduli variant, both substantially below their claimed $\lambda$-bit security. We validate our findings through experimental evaluations, confirming alignment between predicted and observed attack complexities.
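For readers new to the paradigm, the sketch below shows the alternating-moduli flavor in miniature, in the style of the original weak PRF candidate of Boneh et al. (TCC 2018): a secret linear map over $\mathbb{F}_2$ followed by a map over $\mathbb{F}_3$. This illustrates the design idea only, not the Crypto 2024 One-to-One parameter sets attacked in the paper.

```python
import numpy as np

lam = 16                                   # toy security parameter
rng = np.random.default_rng(2)
A = rng.integers(0, 2, size=(lam, lam))    # secret mod-2 matrix (the key)
B = rng.integers(0, 3, size=(lam, lam))    # mod-3 matrix

def wprf(x_bits):
    u = (A @ x_bits) % 2                   # mix the input over F_2
    return (B @ u) % 3                     # re-read the 0/1 vector over F_3

print(wprf(rng.integers(0, 2, size=lam)))  # pseudorandom-looking mod-3 vector
```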
Greedy Algorithm for Representative Sets: Applications to IVLBC and GIFT-64 in Impossible Differential Attack
The impossible differential (ID) attack is crucial for analyzing the strength of block ciphers. The critical aspect of this technique is to identify IDs, and several methods have been introduced to detect them. Recently, the mixed-integer linear programming (MILP) approach was extended by partitioning the input and output differences to identify IDs, along with techniques to determine the representative set and partition table of a set over any nonlinear function. In this paper, we introduce a deterministic algorithm using the greedy approach [Cormen et al., 2022] for finding the representative set and partition table of a set over any nonlinear function. This algorithm iteratively selects the set that covers the maximum number of uncovered elements of the target set. It performs better than the existing algorithms in terms of time complexity. We use this algorithm to compute the representative sets and partition tables of two block ciphers, IVLBC and GIFT-64, and identify 6-round IDs for them. Our research contributes a deterministic algorithm to compute the representative set and partition table of a set over any nonlinear function with at most 16-bit input size.
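The greedy selection rule described above is the textbook set-cover heuristic; a minimal version (illustrative, not the paper's exact algorithm over nonlinear functions) looks like this:

```python
def greedy_cover(target, candidates):
    """Repeatedly pick the candidate set covering the most uncovered elements."""
    uncovered = set(target)
    chosen = []
    while uncovered:
        best = max(candidates, key=lambda s: len(uncovered & s))
        if not uncovered & best:
            return None                   # the target is not coverable
        chosen.append(best)
        uncovered -= best
    return chosen

print(greedy_cover({1, 2, 3, 4, 5},
                   [{1, 2, 3}, {2, 4}, {4, 5}, {3, 5}]))
# [{1, 2, 3}, {4, 5}]
```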
Optimally Secure TBC Based Accordion Mode
The design of tweakable wide block ciphers has advanced significantly over the past two decades. This evolution began with the approach of designing a wide block cipher by Naor and Reingold. Since then, numerous tweakable wide block ciphers have been proposed, many of which build on existing block ciphers and are secure up to the birthday bound for the total number of blocks queried. Although there has been a slowdown in the development of tweakable wide block cipher modes in the last couple of years, the latest NIST proposal for accordion modes has reignited interest and momentum in the design and analysis of these ciphers. While new designs have emerged, their security often falls short of optimal (i.e., $n$-bit) security, where $n$ is the output size of the primitive. In this direction, designing an efficient tweakable wide block cipher with $n$-bit security seems to be an interesting research problem. An optimally secure tweakable wide block cipher mode can easily be turned into a misuse-resistant RUP secure authenticated encryption scheme with optimal security. This paper proposes $\textsf{HCTR+}$, which turns an $n$-bit tweakable block cipher (TBC) with $n$-bit tweak into a variable input length tweakable block cipher. Unlike tweakable $\textsf{HCTR}$, $\textsf{HCTR+}$ ensures $n$-bit security regardless of tweak repetitions. We also propose two TBC-based almost-xor-universal hash functions, named $\textsf{PHASH+}$ and $\textsf{ZHASH+}$, and use them as the underlying hash functions in the $\textsf{HCTR+}$ construction to create two TBC-based $n$-bit secure tweakable wide block cipher modes, $\textsf{PHCTR+}$ and $\textsf{ZHCTR+}$. Experimental results show that both $\textsf{PHCTR+}$ and $\textsf{ZHCTR+}$ exhibit excellent software performance when their underlying TBC is instantiated with $\textsf{Deoxys-BC-128-128}$.
Ring Signatures for Deniable AKEM: Gandalf's Fellowship
Ring signatures, a cryptographic primitive introduced by Rivest, Shamir and Tauman (ASIACRYPT 2001), offer signer anonymity within dynamically formed user groups. Recent advancements have focused on lattice-based constructions to improve efficiency, particularly for large signing rings. However, current state-of-the-art solutions suffer from significant overhead, especially for smaller rings.
In this work, we present a novel NTRU-based ring signature scheme, Gandalf, tailored towards small rings. Our post-quantum scheme achieves a 50% reduction in signature sizes compared to the linear ring signature scheme Raptor (ACNS 2019). For rings of size two, our signatures are approximately a quarter the size of DualRing (CRYPTO 2021), another linear scheme, and remain more compact for rings of up to size seven. Compared to the sublinear scheme Smile (CRYPTO 2021), our signatures are more compact for rings of size up to 26. In particular, for rings of size two, our ring signatures are only 1236 bytes.
Additionally, we explore the use of ring signatures to obtain deniability in authenticated key exchange mechanisms (AKEMs), the primitive behind the recent HPKE standard used in MLS and TLS. We take a fine-grained approach to formalising sender deniability within AKEM and seek to define the strongest possible notions. Our contributions extend to a black-box construction of a deniable AKEM from a KEM and a ring signature scheme for rings of size two. Our approach attains the highest level of confidentiality and authenticity, while simultaneously preserving the strongest forms of deniability in two orthogonal settings. Finally, we present parameter sets for our schemes, and show that our deniable AKEM, when instantiated with our ring signature scheme, yields ciphertexts of 2004 bytes.
Multiparty Shuffle: Linear Online Phase is Almost for Free
Shuffle is a frequently used operation in secure multiparty computation, with various applications including joint data analysis and anonymous communication systems. Most existing MPC shuffle protocols are constructed from MPC permutation protocols, which allow a party to securely apply its private permutation to an array of $m$ numbers shared among all $n$ parties. Following a ``permute-in-turn'' paradigm, these protocols result in $\Omega(n^2m)$ complexity in the semi-honest setting. Recent works have significantly improved efficiency and security by adopting a two-phase solution. Specifically, Eskandarian and Boneh demonstrate how to construct MPC shuffle protocols with linear complexity in both semi-honest and malicious adversary settings. However, a more recent study by Song et al. reveals that Eskandarian and Boneh's protocol fails to achieve malicious security. Consequently, designing an MPC shuffle protocol with linear complexity and malicious security remains an open question.
In this paper, we address this question by presenting the first general construction of an MPC shuffle protocol that is maliciously secure and has linear online communication and computation complexity, utilizing black-box access to secure arithmetic MPC primitives and an MPC permutation protocol. When instantiated with the SPDZ framework and the best existing maliciously secure MPC shuffle, our construction only slightly increases the offline overhead compared to the semi-honest secure version, and thus achieves a linear online phase almost for free. As our constructions require only black-box access to basic secure MPC primitives and permutation protocols, they are compatible with and can be integrated into most modern MPC frameworks. We provide formal security proofs for both semi-honest and malicious settings, demonstrating that our maliciously secure construction achieves universally composable security. Experimental results indicate that our construction significantly enhances online performance while maintaining a moderate increase in offline overhead. Given that shuffle is a frequently used primitive in secure multiparty computation, we anticipate that our construction will accelerate many real-world MPC applications.
Isogeny interpolation and the computation of isogenies from higher dimensional representations
The Supersingular Isogeny Diffie-Hellman (SIDH) scheme is a public key cryptosystem that was submitted to the National Institute of Standards and Technology's competition for the standardization of post-quantum cryptography protocols. The private key in SIDH consists of an isogeny whose degree is a prime power. In July 2022, Castryck and Decru discovered an attack that completely breaks the scheme by recovering Bob's secret key, using isogenies between higher dimensional abelian varieties to interpolate and reconstruct the isogenies comprising the SIDH private key. The original attack applies in theory to any prime power degree, but the implementation accompanying the original attack required one of the SIDH keys involved in a key exchange to have degree equal to a power of $2$. An implementation of the power of $3$ case was published subsequently by Decru and Kunzweiler. However, despite the passage of several years, nobody has published any implementations for prime powers other than $2$ or $3$, and for good reason --- the necessary higher dimensional isogeny computations rapidly become more complicated as the base prime increases. In this paper, we provide for the first time a fully general isogeny interpolation implementation that works for any choice of base prime, and provide timing benchmarks for various combinations of SIDH base prime pairs. We remark that the technique of isogeny interpolation now has constructive applications as well as destructive applications, and that our methods may open the door to increased flexibility in constructing isogeny-based digital signatures and cryptosystems.
Guardianship in Group Key Exchange for Limited Environments
Post-compromise security (PCS) has been a core goal of end-to-end encrypted messaging applications for many years, both in one-to-one continuous key agreement (CKA) and for groups (CGKA). At its essence, PCS relies on a compromised party performing a key update in order to `self-heal'. However, due to bandwidth constraints, receive-only mode, and various other environmental demands of the growing number of use cases for such CGKA protocols, a group member may not be able to issue such updates. In this work, we address the issue of devices functioning in limited mode through the introduction of guardianship, where a designated guardian can perform key updates on behalf of its paired edge device. We introduce a Guardianship PCS (GPCS) security notion and provide an associated security experiment. We investigate various architectural designs in the pursuit of GPCS, provide constructions and security analyses, and describe trade-offs.
Improved Rejection Sampling for Compact Lattice Signatures
One of the primary approaches used to construct lattice-based signature schemes is the "Fiat-Shamir with aborts" methodology. Such a scheme may abort and restart during signing, which corresponds to rejection-sampling the produced signatures to ensure that they follow a distribution that is independent of the secret key. This rejection sampling is only feasible when the output distribution is sufficiently wide, limiting how compact this type of signature scheme can be.
In this work, we develop a new method to construct signatures influenced by the rejection condition. This allows our rejection sampling to target significantly narrower output distributions than previous approaches, enabling much more compact signatures. The combined size of a signature and a verification key for the resulting scheme is less than half of that for ML-DSA and comparable to that of compact hash-and-sign lattice signature schemes, such as Falcon.
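For intuition, here is the skeleton of rejection sampling in Fiat-Shamir with aborts, with toy bounds: signing restarts until the response lands in a range whose distribution is independent of the secret, and narrowing that range (as this work enables) directly shrinks signatures at the price of more restarts.

```python
import secrets

B, GAMMA = 50, 1000           # toy bounds: |c*s| <= B, mask y uniform in [-GAMMA, GAMMA]

def sign_response(cs):
    """cs stands in for the secret-dependent term c*s in a Lyubashevsky-style scheme."""
    while True:
        y = secrets.randbelow(2 * GAMMA + 1) - GAMMA   # fresh mask per attempt
        z = y + cs
        if abs(z) <= GAMMA - B:   # accept only inside the key-independent range
            return z              # accepted z is uniform on [-(GAMMA-B), GAMMA-B]
                                  # whatever cs was, so it leaks nothing about it

print(sign_response(cs=37))
```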
Generation of two ''independent'' points on an elliptic curve of $j$-invariant $\neq 0, 1728$
This article is dedicated to a new generation method for two ``independent'' $\mathbb{F}_{\!q}$-points $P_0$, $P_1$ on almost any ordinary elliptic curve $E$ over a finite field $\mathbb{F}_{\!q}$ of large characteristic. In particular, the method is relevant for all standardized and real-world elliptic curves of $j$-invariants different from $0$, $1728$. The points $P_0$, $P_1$ are characterized by the fact that nobody (not even the generator) knows the discrete logarithm $\log_{P_0}(P_1)$ in the group $E(\mathbb{F}_{\!q})$. Moreover, only one square root extraction in $\mathbb{F}_{\!q}$ (instead of two) is required in comparison with all previous generation methods.
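As a baseline for comparison, the classic try-and-increment method sketched below derives a nothing-up-my-sleeve point from a hash, spending one square root per generated point (so two for $P_0$, $P_1$), whereas the paper's method needs a single square root overall. The toy curve parameters are illustrative only.

```python
import hashlib

p, a, b = 10007, 3, 7          # y^2 = x^3 + 3x + 7 over F_10007 (p % 4 == 3)

def hash_to_point(seed):
    """Try-and-increment: hash to x, test the curve equation, take a sqrt."""
    ctr = 0
    while True:
        digest = hashlib.sha256(seed + ctr.to_bytes(4, "big")).digest()
        x = int.from_bytes(digest, "big") % p
        rhs = (x * x * x + a * x + b) % p
        if pow(rhs, (p - 1) // 2, p) <= 1:     # Euler criterion (0 or 1: a square)
            y = pow(rhs, (p + 1) // 4, p)      # square root, since p = 3 (mod 4)
            return x, y
        ctr += 1

# Nobody knows the discrete log between points derived from distinct seeds.
print(hash_to_point(b"P0"), hash_to_point(b"P1"))
```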
Simple Power Analysis assisted Chosen Cipher-Text Attack on ML-KEM
Recent work by Bernstein et al. (EPRINT 2024) identified two timing attacks, KyberSlash1 and KyberSlash2, targeting the ML-KEM decryption and encryption algorithms, respectively, enabling efficient recovery of secret keys. To mitigate these vulnerabilities, patches were promptly applied across implementations. In this paper, we demonstrate a very simple side-channel-assisted power analysis attack on the patched implementations of ML-KEM. Our results show that the original timing leakage can be shifted to power-consumption leakage that can be exploited on specific data. We performed a practical validation of this attack on both a standard and a shuffled implementation of ML-KEM on a Cortex-M4 platform, confirming its effectiveness. Our approach enables the recovery of the ML-KEM secret key in just 30 seconds for the standard implementation, and approximately 3 hours for the shuffled implementation, achieving a 100% success rate in both cases.
Breaking and Provably Restoring Authentication: A Formal Analysis of SPDM 1.2 including Cross-Protocol Attacks
The SPDM (Security Protocol and Data Model) protocol is a standard under development by the DMTF consortium, and supported by major industry players including Broadcom, Cisco, Dell, Google, HP, IBM, Intel, and NVIDIA. SPDM 1.2 is a complex protocol that aims to provide platform security, for example for communicating hardware components or cloud computing scenarios.
In this work, we provide the first holistic, formal analysis of SPDM 1.2: we model the full protocol flow of SPDM considering all of its modes – especially the complex interaction between its different key-exchange modes – in the framework of the Tamarin prover, making our resulting model one of the most complex Tamarin models to date. To our surprise, Tamarin finds a cross-protocol attack that allows a network attacker to completely break authentication of the pre-shared key mode. We implemented our attack on the SPDM reference implementation, and reported the issue to the SPDM developers. DMTF registered our attack as a CVE with CVSS rating 9 (critical).
We propose a fix and develop the first formal symbolic proof using the Tamarin prover for the fixed SPDM 1.2 protocol as a whole. The resulting model of the main modes and their interactions is highly complex, and we develop supporting lemmas to enable proving properties in the Tamarin prover, including the absence of all cross-protocol attacks. Our fix has been incorporated into both the reference implementation and the newest version of the standard. Our results highlight the need for a holistic analysis of other internet standards and the importance of providing generalized security guarantees across entire protocols.
Oryx: Private detection of cycles in federated graphs
This paper proposes Oryx, a system for efficiently detecting cycles in federated graphs where parts of the graph are held by different parties and are private. Cycle identification is an important building block in designing fraud detection algorithms that operate on confidential transaction data held by different financial institutions. Oryx allows detecting cycles of various lengths while keeping the topology of the graphs secret, and it does so efficiently. Oryx leverages the observation that financial graphs are very sparse, and uses this to achieve computational complexity that scales with the average degree of nodes in the graph rather than the maximum degree. Our implementation of Oryx running on a single 32-core AWS machine (for each party) can detect all cycles of up to length 6 in under 5 hours in a financial transaction graph that consists of tens of millions of nodes and edges. While the costs are high, Oryx's protocol parallelizes well and can use additional hardware resources. Furthermore, Oryx is, to our knowledge, the first system that can handle this task for large graphs.
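In the clear, the task Oryx performs under MPC is bounded-length cycle enumeration; a compact plaintext reference (not Oryx's protocol) that reports each directed cycle of length at most k exactly once looks like this:

```python
def bounded_cycles(adj, k):
    """Enumerate directed cycles of length <= k, each rooted at its smallest node."""
    cycles = []
    def dfs(root, node, path):
        for nxt in adj.get(node, ()):
            if nxt == root:
                cycles.append(path[:])                 # closed a cycle at the root
            elif nxt > root and nxt not in path and len(path) < k:
                path.append(nxt)
                dfs(root, nxt, path)
                path.pop()
    for root in sorted(adj):
        dfs(root, root, [root])
    return cycles

# toy transaction graph: 0 -> 1 -> 2 -> 0 and 1 -> 3 -> 1
adj = {0: [1], 1: [2, 3], 2: [0], 3: [1]}
print(bounded_cycles(adj, 6))   # [[0, 1, 2], [1, 3]]
```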
Simulation Secure Multi-Input Quadratic Functional Encryption: Applications to Differential Privacy
Multi-input functional encryption is a primitive that allows for the evaluation of an $\ell$-ary function over multiple ciphertexts, without learning any information about the underlying plaintexts. This type of computation is useful in many cases where one has to compute over encrypted data, such as privacy-preserving cloud services, federated learning, or more generally delegation of computation from multiple clients. It has recently been shown by Alborch et al. in PETS '24 to be useful for constructing a randomized functional encryption scheme that provides differentially private data analysis over an encrypted database supporting linear queries.
In this work we propose the first secret-key multi-input quadratic functional encryption scheme satisfying simulation security. Current constructions supporting quadratic functionalities, proposed by Agrawal et al. in CRYPTO '21 and TCC '22, only reach indistinguishability-based security. Our proposed construction is generic, and for a concrete instantiation, we propose a new function-hiding inner-product functional encryption scheme proven simulation secure against one challenge ciphertext in the standard model, which is of independent interest. We then use these two results to construct an efficient randomized quadratic functional encryption scheme, from which we obtain differentially private data analysis over an encrypted database supporting quadratic queries. Finally, we give and fully benchmark an implementation of the randomized scheme. This work is an extended version of the paper "Simulation Secure Multi-Input Quadratic Functional Encryption" at SAC '24, where the multi-input quadratic functional encryption and function-hiding inner-product functional encryption schemes were first presented (Section 3 and Section 4).
Reducing the Share Size of Weighted Threshold Secret Sharing Schemes via Chow Parameters Approximation
A secret sharing scheme is a cryptographic primitive that allows a dealer to share a secret among a set of parties, so that only authorized subsets of them can recover it. The access structure of the scheme is the family of authorized subsets.
In a weighted threshold access structure, each party is assigned a weight according to its importance, and the authorized subsets are those in which the sum of their weights is at least the threshold value. For these access structures, the share size of the best known secret sharing schemes is either linear on the weights or quasipolynomial on the number of parties, which leads to long shares, in general.
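The "linear in the weights" baseline mentioned above can be made concrete with Shamir's scheme, as sketched below: give each party as many ordinary shares as its weight, so any coalition whose weights reach the threshold holds enough points to interpolate, at the cost of share size growing with the weight. Field and parameters are illustrative only.

```python
import random

P = 2**31 - 1                       # a prime field for the toy example

def shamir_shares(secret, threshold, n_shares):
    """Degree-(threshold-1) polynomial through the secret, evaluated at 1..n."""
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n_shares + 1)]

def weighted_shares(secret, threshold, weights):
    shares = shamir_shares(secret, threshold, sum(weights))
    out, idx = [], 0
    for w in weights:
        out.append(shares[idx:idx + w])   # party i receives w_i shares
        idx += w
    return out

# weights 1, 2, 3 with threshold 4: parties 2 and 3 together hold 5 >= 4 shares
# and can reconstruct; parties 1 and 2 hold only 3 and cannot.
parties = weighted_shares(1234, 4, [1, 2, 3])
print([len(p) for p in parties])    # [1, 2, 3]
```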
In certain settings, a way to circumvent this efficiency problem is to approximate the access structure by another one that admits more efficient schemes. This work is dedicated to the open problem posed by this strategy: finding secret sharing schemes with a good tradeoff between the efficiency and the accuracy of the approximation.
We present a method to approximate weighted threshold access structures by others that admit schemes with small shares. This method is based on the techniques for the approximation of the Chow parameters developed by De et al. [Journal of the ACM, 2014]. Our method provides secret sharing schemes with share size $n^{1+o(1)}$, where $n$ is the number of parties, and whose access structure is close to the original one. Namely, in this approximation the condition of being authorized or not is preserved for almost all subsets of parties.
In addition, we apply the recent results on computational secret sharing schemes by Applebaum et al. [STOC, 2023] to construct computational secret sharing schemes whose share size is polylogarithmic in the number of parties.
BBB Secure Arbitrary Length Tweak TBC from n-bit Block Ciphers
At FSE'15, Mennink introduced two tweakable block ciphers, $\widetilde{F}[1]$ and $\widetilde{F}[2]$, both utilizing an $n$-bit tweak. It was demonstrated that $\widetilde{F}[1]$ is secure for up to $2^{2n/3}$ queries, while $\widetilde{F}[2]$ is secure for up to $2^n$ queries, assuming the underlying block cipher is an ideal cipher with $n$-bit key and $n$-bit data. Later, at ASIACRYPT'16, Wang et al. showed a birthday bound attack on Mennink's design (which was later corrected in the eprint version, eprint 2015/363) and proposed 32 new candidates for tweakable block ciphers that are derived from $n$-bit ideal block ciphers. It was shown that all the $32$ constructions are provably secure up to $2^n$ queries. All the proposed designs by both Mennink and Wang et al. admit only $n$-bit tweaks. In FSE'23, Shen and Standaert proposed a tweakable block cipher, $\widetilde{G2}$, which uses $2n$-bit tweaks and is constructed from three $n$-bit block cipher calls, proving its security up to $n$ bits, assuming that the underlying block cipher is an ideal cipher. They also showed that it is impossible to design a tweakable block cipher with $2n$-bit tweaks using only two $n$-bit block cipher calls while achieving security beyond the birthday bound. In this paper, we advance this research further. We show that any tweakable block cipher design with $3n$-bit tweaks based on only three block cipher calls, where at least one key is tweak-independent, is vulnerable to a birthday bound distinguishing attack. We then propose a tweakable block cipher, $\widetilde{\textsf{G}_3}^*$, that uses three block cipher calls and admits $3n$-bit tweaks, achieving security up to $O(2^{2n/3})$ queries when all three block cipher keys are tweak-dependent. Furthermore, we prove that using four ideal block cipher calls, with at least one key being tweak-dependent, is necessary and sufficient to achieve $n$-bit security for a tweakable block cipher that admits $3n$-bit tweaks. Finally, we propose a tweakable block cipher, $\widetilde{\textsf{G}_r}$, which uses $(r+1)$ block cipher calls and processes $rn$-bit tweaks, achieving security up to $O(2^n)$ queries when at least one block cipher key is tweak-dependent.
An Algorithmic Approach to $(2,2)$-isogenies in the Theta Model and Applications to Isogeny-based Cryptography
In this paper, we describe an algorithm to compute chains of $(2,2)$-isogenies between products of elliptic curves in the theta model. The description of the algorithm is split into various subroutines to allow for a precise field operation counting.
We present a constant time implementation of our algorithm in Rust and an alternative implementation in SageMath. Our work in SageMath runs ten times faster than a comparable implementation of an isogeny chain using the Richelot correspondence. The Rust implementation runs up to forty times faster than the equivalent isogeny in SageMath and has been designed to be portable for future research in higher-dimensional isogeny-based cryptography.
Accelerating Isogeny Walks for VDF Evaluation
VDFs are characterized by sequential function evaluation but immediate output verification. In order to ensure secure use of VDFs in real-world applications, it is important to determine the fastest implementation. Considering the point of view of an attacker (say, with unbounded resources), this paper aims to accelerate the isogeny-based VDF proposed by De Feo-Mason-Petit-Sanso in 2019. It is the first work that implements a hardware accelerator for the evaluation step of an isogeny VDF. To meet our goal, we use redundant representations of integers and introduce a new lookup table-based algorithm for modular reduction. We also provide a survey of elliptic curve arithmetic to arrive at the most cost-effective curve computations, and an in-depth cost analysis of the different base-degree isogenies and methods for isogeny evaluation. The evaluation step of a VDF is defined to be sequential, which means that there is limited scope for parallelism. Nevertheless, taking this constraint into account, our proposed design targets the highest levels of parallelism possible on an architectural level of an isogeny VDF implementation. We provide a technology-independent metric to model the delay of isogeny evaluation, which a VDF developer can use to derive secure parameters. ASIC synthesis results in 28nm are used as a baseline to estimate VDF parameters.
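A hedged sketch of the generic lookup-table idea for modular reduction (the paper's actual algorithm, modulus, and chunk sizes differ; everything below is invented for illustration): residues of the high-order bytes are precomputed, so reduction becomes lookups plus additions.

    import random

    p = (1 << 127) - 1                  # toy modulus, not the VDF's field prime
    K = 127                             # bits handled directly as the low part

    # table[j][c] = (c << (K + 8*j)) mod p for each high-part byte position j
    table = [[(c << (K + 8 * j)) % p for c in range(256)] for j in range(16)]

    def reduce_mod_p(x):                # x < 2^(K + 128)
        acc, hi = x & ((1 << K) - 1), x >> K
        j = 0
        while hi:
            acc += table[j][hi & 0xFF]  # add the residue of this high byte
            hi >>= 8
            j += 1
        return acc % p                  # one small final correction

    x = random.getrandbits(254)
    assert reduce_mod_p(x) == x % p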
On Linear Equivalence, Canonical Forms, and Digital Signatures
Given two linear codes, the code equivalence problem asks to find an isometry mapping one code into the other.
The problem can be described in terms of group actions and, as such, finds a natural application in signatures derived from a Zero-Knowledge Proof system.
A recent paper, presented at Asiacrypt 2023, showed how a proof of equivalence can be significantly compressed by describing how the isometry acts only on an information set. Still, the resulting signatures are far from being optimal, as the size for a witness to this relation is still significantly larger than the theoretical lower bound, which is twice the security parameter.
In this paper, we fill this gap and propose a new notion of equivalence, which leads to a drastically reduced witness size. For many cases, the resulting size is exactly the optimal one given by the lower bound. We achieve this by introducing the framework of canonical representatives, that is, representatives for classes of codes which are equivalent under some notion of equivalence. We propose new notions of equivalence which encompass and further extend all the existing ones: this allows us to identify broader classes of equivalent codes, for which the equivalence can be proved with a very compact witness. We associate these new notions to a specific problem, called Canonical Form Linear Equivalence Problem (CF-LEP), which we show to be as hard as the original one (when random codes are considered), providing reductions in both directions. As an added consequence, this reduction leads to a new solver for the code equivalence problem, which is the fastest solver when the finite field size is large enough. Finally, we show that our framework yields a remarkable reduction in signature size when compared to the LESS submission.
Our variant is able to obtain very compact signatures, around 2 KB or less, which are among the smallest in the code-based setting.
Efficient Fuzzy Private Set Intersection from Fuzzy Mapping
Private set intersection (PSI) allows Sender holding a set \(X\) and Receiver holding a set \(Y\) to compute only the intersection \(X\cap Y\) for Receiver.
We focus on a variant of PSI, called fuzzy PSI (FPSI), where Receiver only gets points in \(X\) that are at a distance not greater than a threshold from some points in \(Y\).
Most current FPSI approaches first pick out pairs of points that are potentially close and then determine whether the distance of each selected pair is indeed small enough to yield FPSI result.
Their complexity bottlenecks stem from the excessive number of point pairs selected by the first picking process.
Regarding this process, we consider a more general notion, called fuzzy mapping (Fmap), which can map each point of two parties to a set of identifiers, with closely located points having the same identifier, which forms the selected point pairs.
We initiate the formal study on Fmap and show novel Fmap instances for Hamming and \(L_\infty\) distances to reduce the number of selected pairs.
We demonstrate the power of Fmap in constructing FPSI variants with superior properties and provide a generic construction from Fmap to FPSI.
Our new Fmap instances lead to the fastest semi-honest secure FPSI protocols in high-dimensional space to date, for both Hamming and general \(L_{\mathsf p\in [1, \infty]}\) distances.
For Hamming distance, our protocol is the first one that achieves strictly linear complexity with input sizes.
For \(L_{\mathsf p\in [1, \infty]}\) distance, our protocol is the first one that achieves linear complexity with input sizes, dimension, and threshold.
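A toy flavour of the fuzzy-mapping idea for \(L_\infty\) distance (illustrative only; the paper's Fmap instances must additionally keep these identifiers private): map a point to every grid cell of side \(2\delta\) that its \(\delta\)-ball touches, so any two points within \(L_\infty\) distance \(\delta\) share at least one identifier.

    delta = 5
    side = 2 * delta

    def fmap(point):
        ids = {()}
        for coord in point:
            lo, hi = (coord - delta) // side, (coord + delta) // side
            ids = {t + (c,) for t in ids for c in range(lo, hi + 1)}
        return ids

    p, q = (103, 42), (99, 45)          # L_inf distance 4 <= delta
    assert fmap(p) & fmap(q)            # close points share an identifier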
How to Compress Garbled Circuit Input Labels, Efficiently
Garbled Circuits are essential building blocks in cryptography, and extensive research has explored their construction from both applied and theoretical perspectives. However, a challenge persists: While theoretically designed garbled circuits offer optimal succinctness--remaining constant in size regardless of the underlying circuit's complexity--and are reusable for multiple evaluations, their concrete computational costs are prohibitively high. On the other hand, practically efficient garbled circuits, inspired by Yao's garbled circuits, encounter limitations due to substantial communication bottlenecks and a lack of reusability.
To strike a balance, we propose a novel concept: online-offline garbling. This approach leverages instance-independent and (partially) reusable preprocessing during an offline phase, to enable the creation of constant-size garbled circuits in an online phase, while maintaining practical efficiency. Specifically, during the offline stage, the garbler generates and transmits a reference string, independent of the computation to be performed later. Subsequently, in the online stage, the garbler efficiently transforms a circuit into a constant-size garbled circuit. The evaluation process relies on both the reference string and the garbled circuit.
We demonstrate that by leveraging existing tools such as those introduced by Applebaum et al. (Crypto'13) and Cho et al. (Crypto'17), online-offline garbling can be achieved under a variety of assumptions, including the hardness of Learning With Errors (LWE), Computational Diffie-Hellman (CDH), and factoring. In contrast, without the help of an offline phase, constant-size garbling is only feasible under the LWE and circular security assumptions, or the existence of indistinguishability obfuscation. However, these schemes are still very inefficient, several orders of magnitude more costly than Yao-style garbled circuits.
To address this, we propose a new online-offline garbling scheme based on Ring LWE. Our scheme offers both asymptotic and concrete efficiency. It serves as a practical alternative to Yao-style garbled circuits, especially in scenarios where online communication is constrained. Furthermore, we estimate the concrete latency using our approach in realistic settings and demonstrate that it is 2-20X faster than using Yao-style garbled circuits. This improvement is estimated without taking into account parallelization of computation, which can lead to further performance improvement using our scheme.
Homomorphic Trapdoors for Identity-based and Group Signatures
Group signature (GS) schemes are an important cryptographic primitive, providing anonymity and traceability for a group of users. In this paper, we propose a new approach to constructing GS schemes using the homomorphic trapdoor function (HTDF). We focus on constructing an identity-based homomorphic signature (IBHS) scheme using the trapdoor, providing a simpler scheme that has no zero-knowledge proofs. Our scheme allows packing more data into the signatures by elevating the existing homomorphic trapdoor from the SIS assumption to the MSIS assumption to enable packing techniques. Compared to the existing group signature schemes, we provide a straightforward alternative construction that is efficient and secure under the standard model. Overall, our proposed scheme provides an efficient and secure solution for GS schemes using HTDF.
The Fiat--Shamir Transformation of $(\Gamma_1,\dots,\Gamma_\mu)$-Special-Sound Interactive Proofs
The Fiat--Shamir transformation is a general principle to turn any public-coin interactive proof into a non-interactive one (with security then typically analyzed in the random oracle model). While initially used for 3-round protocols, many recent constructions use it for multi-round protocols. However, in general the soundness error of the Fiat--Shamir transformed protocol degrades exponentially in the number of rounds. On the positive side, it was shown that for the special class of $(k_1,\dots,k_\mu)$-special-sound protocols the loss is actually only linear in the number of random oracle queries, and independent of the number of rounds, which is optimal.
A natural next question is whether this positive result extends to the Fiat--Shamir transformation of so-called $(\Gamma_1,\dots,\Gamma_\mu)$-special-sound protocols, a notion recently defined and analyzed in the interactive case, with the aim to capture a more general notion of special-soundness.
We show in this work that this is indeed the case. Concretely, we show that the Fiat--Shamir transformation of any $(\Gamma_1, \ldots, \Gamma_\mu)$-special-sound interactive proof is knowledge sound under the same condition for which the original interactive proof is knowledge sound. Furthermore, also here the loss is linear in the number of random oracle queries and independent of the number of rounds.
In light of the above, one might suspect that our argument follows as a straightforward combination of the above-mentioned prior works. However, this is not the case. The approach used for $(k_1,\dots,k_\mu)$-special-sound protocols, which is based on an extractor that samples without replacement, does not (seem to) generalize; on the other hand, the other approach, which uses an extractor based on sampling with replacement, comes with an additional loss that would blow up in the recursive multi-round analysis. Thus, new techniques are necessary to handle the above complications.
Distributed Asynchronous Remote Key Generation
Asynchronous Remote Key Generation (ARKG) is a primitive introduced by Frymann et al. at ACM CCS 2020. It enables a sender to generate a new public key $pk'$ for a receiver, ensuring that only the receiver can, at a later time, compute the corresponding private key $sk'$. These key pairs are indistinguishable from freshly generated ones and can be used in various public-key cryptosystems such as digital signatures and public-key encryption. ARKG has been explored for applications in WebAuthn credential backup and delegation, as well as for enhancing receiver privacy via stealth addresses.
In this paper, we introduce distributed ARKG (dARKG) aiming to provide similar security properties in a distributed setting. Here, a sender generates $pk'$ for a group of $n$ receivers and the corresponding $sk'$ can only be computed by any sub-group of size $t\leq n$. This introduces threshold-based access protection for $sk'$, enabling for instance a set of proxies to jointly access a WebAuthn account or claim blockchain funds.
We construct dARKG using one-round publicly verifiable asymmetric key agreement, called 1PVAKA, a new primitive formalized in this work. Unlike traditional distributed key generation protocols where users interact with one another, 1PVAKA is asynchronous and allows a third party to verify and generate a public key from users' outputs.
We discuss 1PVAKA and dARKG instantiations tailored for use with bilinear groups and demonstrate practicality with implementation and performance analysis for the BLS12-381 curve.
Decompressing Dilithium's Public Key with Fewer Signatures Using Side Channel Analysis
The CRYSTALS-Dilithium digital signature scheme, selected by NIST as a post-quantum cryptography (PQC) standard under the name ML-DSA, employs a public key compression technique intended for performance optimization. Specifically, the module learning with error instance $({\bf A}, {\bf t})$ is compressed by omitting the low-order bits ${\bf t_0}$ of the vector ${\bf t}$. It was recently shown that knowledge of ${\bf t_0}$ enables more effective side-channel attacks on Dilithium implementations. Another recent work demonstrated a method for reconstructing ${\bf t_0}$ from multiple signatures. In this paper, we build on this method by applying profiled deep learning-assisted side-channel analysis to partially recover the least significant bit of ${\bf t_0}$ from power traces. As a result, the number of signatures required for the reconstruction of ${\bf t_0}$ can be reduced by roughly half. We demonstrate how the new ${\bf t_0}$ reconstruction method enhances the efficiency of recovering the secret key component ${\bf s}_1$, and thus facilitates digital signature forgery, on an ARM Cortex-M4 implementation of Dilithium.
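For context, the compression in question is Dilithium/ML-DSA's Power2Round with $d = 13$: the public key keeps only the high part ${\bf t_1}$ and drops ${\bf t_0}$. A minimal sketch of the relevant arithmetic, per coefficient:

    q, d = 8380417, 13

    def power2round(r):
        r = r % q
        r0 = r % (1 << d)
        if r0 > 1 << (d - 1):            # centre t0 in (-2^(d-1), 2^(d-1)]
            r0 -= 1 << d
        return (r - r0) >> d, r0         # (t1, t0) with r = t1*2^d + t0

    t1, t0 = power2round(1234567)
    assert (t1 * (1 << d) + t0) % q == 1234567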
Efficient Differentially Private Set Intersection
Private Set Intersection (PSI) enables a sender and a receiver to jointly compute the intersection of their sets without disclosing non-trivial information about other items. However, depending on the context of joint data analysis, information derived from the items in the intersection may also be considered sensitive. To protect such sensitive information, prior work proposed Differentially private PSI (DPSI), which can be instantiated with circuit-PSI using Fully Homomorphic Encryption. Although asymptotically efficient, its concrete performance is sub-optimal compared with the practical state-of-the-art (SOTA) circuit-PSI.
In this paper, we propose two generic DPSI constructions with provable security and privacy. We revisit the DPSI definition and identify the critical criteria for selecting essential PSI-related tools. Then, we present two generic DPSI constructions. The first construction allows us to achieve provable privacy and efficiency by integrating any circuit-PSI with the randomized response mechanism. By plugging the SOTA circuit-PSI protocol, we obtain a DPSI protocol with concrete performance enhancement. The second construction offers a more efficient DPSI alternative by using multi-query Reverse Private Membership Test (mqRPMT) at the price of intersection size leakage, but such leakages can be bounded with differential privacy by padding random dummy items in input sets. We conduct comprehensive experiments with various instantiations. The experiments show that our instantiations significantly outperform the existing DPSI construction: 2.5-22.6$\times$ more communication-efficient and up to 110.5-151.8$\times$ faster. Our work also shows a new application for mqRPMT besides obtaining Private Set Operation (PSO).
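The randomized-response mechanism that the first construction composes with circuit-PSI can be sketched in a few lines (toy parameters; the integration with PSI is the paper's contribution): each bit is reported truthfully with probability $e^\epsilon/(1+e^\epsilon)$, which satisfies $\epsilon$-DP.

    import math, random

    def randomized_response(bit, eps):
        keep = math.exp(eps) / (1 + math.exp(eps))
        return bit if random.random() < keep else 1 - bit

    eps = 1.0
    reports = [randomized_response(1, eps) for _ in range(10000)]
    pr = math.exp(eps) / (1 + math.exp(eps))
    mean = sum(reports) / len(reports)        # E[report] = pr*b + (1-pr)*(1-b)
    print("debiased estimate of the bit:", (mean - (1 - pr)) / (2 * pr - 1))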
Cryptanalysis of TETRA Encryption Algorithms - Episode 1: TEA-3
We present the first public and in-depth cryptanalysis of TEA-3, a stream cipher used in TETRA radio networks that was kept secret until recently. While the same also holds for the six other TETRA encryption algorithms, we pick TEA-3 to start with as (i) it is not obviously weakened, unlike TEA-{1,4,7}, but (ii) in contrast to TEA-2 it is approved only for extra-European emergency service, and (iii) as already noted by [MBW23], the TEA-3 design surprisingly contains a non-bijective S-box. Most importantly, we show that the 80-bit non-linear feedback shift register operating on the key decomposes into a cascade of two 40-bit registers. Although this hints at an intentional weakness at first glance, we are not able to lift our results to a practical attack. Other than that, we show how the balanced non-linear feedback functions used in the state register of TEA-3 can be constructed.
MixBuy: Contingent Payment in the Presence of Coin Mixers
A contingent payment protocol involves two mutually distrustful parties, a buyer and a seller, operating on the same blockchain, and a digital product, whose ownership is not tracked on a blockchain (e.g. a digital book). The buyer holds coins on the blockchain and transfers them to the seller in exchange for the product. However, if the blockchain does not hide transaction details, any observer can learn that a buyer purchased some product from a seller.
In this work, we take contingent payment a step further: we consider a buyer who wishes to buy a digital product from a seller routing the payment via an untrusted mixer.
Crucially, we require that said payment is unlinkable, meaning that the mixer (or any other observer) does not learn which buyer is paying which seller. We refer to such a setting as unlinkable contingent payment (UCP).
We present MixBuy, a system that realizes UCP. MixBuy relies on oracle-based unlinkable contingent payment (O-UCP), a novel four-party cryptographic protocol where the mixer pays the seller and the seller provides the buyer with the product only if a semi-trusted notary attests that the buyer has paid the mixer. More specifically, we require four security notions: (i) mixer security that guarantees that if the mixer pays the seller, the mixer must get paid from the buyer; (ii) seller security that guarantees that if the seller delivers the product to the buyer, the seller must get paid from the mixer; (iii) buyer security that guarantees that if the buyer pays the mixer, the buyer must obtain the product; and (iv) unlinkability that guarantees that given a set of buyers and sellers, the mixer should not learn which buyer paid which seller.
We present a provably secure and efficient cryptographic construction for O-UCP. Our construction can be readily used to realize UCP on most blockchains, as it has minimal functionality requirements (i.e., digital signatures and timelocks). To demonstrate the practicality of our construction, we provide a proof of concept for O-UCP and our benchmarks in commodity hardware show that the communication overhead is small (a few kB per message) and the running time is below one second.
Cryptographic Commitments on Anonymizable Data
Local Differential Privacy (LDP) mechanisms consist of (locally) adding controlled noise to data in order to protect the privacy of their owner. In this paper, we introduce a new cryptographic primitive called LDP commitment. Usually, a commitment ensures that the committed value cannot be modified before it is revealed. In the case of an LDP commitment, however, the value is revealed after being perturbed by an LDP mechanism. Opening an LDP commitment therefore requires a proof that the mechanism has been correctly applied to the value, to ensure that the value is still usable for statistical purposes. We also present a security model for this primitive, in which we define the hiding and binding properties. Finally, we present a concrete scheme for an LDP staircase mechanism (generalizing the randomized response technique), based on classical cryptographic tools and standard assumptions. We provide an implementation in Rust that demonstrates its practical efficiency (the generation of a commitment requires just a few milliseconds). On the application side, we show how our primitive can be used to simultaneously ensure privacy, usability and traceability of medical data when it is used for statistical studies in an open science context. We consider a scenario where a hospital provides sensitive patient data signed by doctors to a research center after it has been anonymized, so that the research center can verify both the provenance of the data (i.e. verify the doctors' signatures even though the data has been noised) and that the data has been correctly anonymized (i.e. is usable even though it has been anonymized).
Efficient Error-tolerant Side-channel Attacks on GPV Signatures Based on Ordinary Least Squares Regression
The Gentry-Peikert-Vaikuntanathan (GPV) framework is utilized for constructing digital signatures, which is proven to be secure in the classical/quantum random-oracle model. Falcon is such a signature scheme, recognized as a compact and efficient signature among NIST-standardized signature schemes. Although a signature scheme based on the GPV framework is theoretically highly secure, it could be vulnerable to side-channel attacks and hence further research on physical attacks is required to make a robust signature scheme.
We propose a general secret key recovery attack on GPV signatures using partial information about signatures obtained from side-channel attacks. The three main contributions are summarized as follows.
First, we introduce, for the first time, a concept of vulnerable partial information of GPV signatures and propose a secret key recovery attack, called OLS attack, which effectively utilizes partial information. In contrast to the approaches of Guerreau et al. (CHES 2022) and Zhang et al. (Eurocrypt 2023), which utilize filtered (or processed) signatures with hidden parallelepiped or learning slice schemes, the OLS attack leverages all the available signatures without filtering. We prove that the secret key recovered by the OLS attack converges to the real secret key in probability as the number of samples increases.
Second, we utilize Gaussian leakage as partial information for the OLS attack on Falcon. As a result, the OLS attack shows a significantly higher success rate with fewer samples than the existing attack schemes. Furthermore, by incorporating the DDGR attack, the OLS attack can recover the secret key using far fewer samples with a success rate close to 100%. Moreover, we propose a more efficient OLS attack on Falcon, which reduces the number of required side-channel attacks.
Third, we propose an error-tolerant power analysis attack using MAP decoding, which effectively corrects the errors in samples to utilize Gaussian leakage correctly. In conclusion, the OLS attack is expected to strengthen the security of the GPV signatures including Falcon.
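The statistical core of an OLS attack, stripped of all side-channel detail (a toy model with invented dimensions and Gaussian noise, not Falcon's actual leakage): the least-squares estimate of a secret vector from noisy linear observations converges as samples accumulate.

    import numpy as np

    rng = np.random.default_rng(0)
    secret = rng.integers(-5, 6, size=64).astype(float)

    for m in (100, 1000, 100000):
        X = rng.standard_normal((m, 64))            # known linear relation
        y = X @ secret + rng.standard_normal(m)     # noisy leakage samples
        s_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
        errs = int(np.sum(np.round(s_hat) != secret))
        print(f"m={m:6d}: wrong coefficients after rounding = {errs}")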
On the BUFF Security of ECDSA with Key Recovery
In the usual syntax of digital signatures, the verification algorithm takes a verification key in addition to a signature and a message, whereas in ECDSA with key recovery, which is used in Ethereum, no verification key is input to the verification algorithm. Instead, a verification key is recovered from a signature and a message. In this paper, we explore BUFF security of ECDSA with key recovery (KR-ECDSA), where BUFF stands for Beyond UnForgeability Features (Cremers et al., IEEE S&P 2021). As a result, we show that KR-ECDSA provides BUFF security, except for weak non-resignability (wNR). We note that the verification algorithm of KR-ECDSA takes an Ethereum address addr as input, which is defined as the rightmost 160 bits of the Keccak-256 hash of the corresponding ECDSA verification key, and checks that the hash value of the recovered verification key equals addr. Our security analysis shows that this procedure is mandatory to provide BUFF security. We also discuss whether wNR is mandatory in Ethereum or not. To clarify that the above equality check is mandatory to provide BUFF security in KR-ECDSA, we show that the original ECDSA does not provide any BUFF security. As a by-product of the analysis, we show that one of our BUFF attacks also works against Aumayr et al.'s ECDSA-based adaptor signature scheme (ASIACRYPT 2021). We emphasize that the attack is positioned outside of their security model.
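The address check in question, as a minimal sketch (assuming pycryptodome for Keccak-256): addr is the rightmost 160 bits of the Keccak-256 hash of the 64-byte uncompressed verification key, and KR-ECDSA verification must compare the recovered key's address against it.

    from Crypto.Hash import keccak

    def eth_address(pubkey_xy: bytes) -> bytes:
        assert len(pubkey_xy) == 64        # x || y, without the 0x04 prefix
        digest = keccak.new(digest_bits=256, data=pubkey_xy).digest()
        return digest[-20:]                # rightmost 160 bits

    # KR-ECDSA recovers a candidate key pk' from (msg, sig) and then must
    # verify eth_address(pk') == addr.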
A Note on Isogeny Group Action-Based Pseudorandom Functions
In PKC'24, de Saint Guilhem and Pedersen give a pseudorandom function based on a relaxed group action assumption in the semi-honest setting. Based on this assumption, they build an oblivious pseudorandom function (OPRF). Later, a recent paper by Levin and Pedersen uses the same function to build a verifiable random function (VRF), under the same assumption.
We give a structural attack on this problem by reducing it to a few group action inverse problems (GAIP/DLog) over small subgroups. This reduction allows us to apply a CRT-based attack to recover the secret key, ultimately lowering the problem’s effective security strength to under 70 classical bits when using CSIDH-512. Hence the strength of their pseudorandom functions is bounded above by the GAIP over the largest prime order subgroup. Clearly, Kuperberg’s subexponential attack can be used to further reduce its quantum security.
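The CRT-combination step of such an attack is elementary once the residues of the secret in the small prime-order subgroups are known (group-action details elided; moduli below are invented):

    from math import prod

    def crt(residues, moduli):
        M = prod(moduli)
        x = 0
        for r, m in zip(residues, moduli):
            Mi = M // m
            x += r * Mi * pow(Mi, -1, m)   # pow(a, -1, m) = inverse mod m
        return x % M

    moduli = [101, 103, 107, 109]          # pairwise-coprime subgroup orders
    secret = 98765432                      # smaller than prod(moduli)
    assert crt([secret % m for m in moduli], moduli) == secret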
Multi-party Setup Ceremony for Generating Tokamak zk-SNARK Parameters
This document provides a specification guide for the Multi-party Computation (MPC) setup ceremony for the Tokamak zk-SNARK scheme. It begins by revisiting the MMORPG protocol proposed in [BGM17] for Groth16 setup generation, which leverages a random beacon to ensure public randomness. Additionally, it explores the alternative design approach presented in the ``Snarky Ceremonies'' paper [KMSV21], which removes the need for a random beacon. The document includes detailed pseudocode and a workflow for each stage of parameter generation in the Tokamak zk-SNARK protocol.
Tokamak zk-SNARK employs a universal setup through sub-circuits, which allows for CRS reuse across multiple circuits. This approach reduces the need for repeated trusted setups and emphasizes efficiency in verifier preprocessing. The document also provides pseudocode for the various types of parameter generation during the MPC setup, including the generation of parameters such as the Powers of $\tau$, circuit-specific parameters, and different types of mappings, for both the random-beacon and non-random-beacon approaches. This pseudocode clarifies the protocol's step-by-step process, from the computation of shared parameters to verifying correctness.
Finally, the document presents a sketch of the security analysis of both protocols, relying on the Algebraic Group Model (AGM) and the Random Oracle Model (ROM) to prove knowledge soundness and security of the generated CRS. The analysis considers potential attacks and demonstrates that, even without a random beacon, the setup remains secure under the assumptions of these models.
SeaSearch: Secure and Efficient Selection Queries
Information-theoretic or unconditional security provides the highest level of security --- independent of the computational capability of an adversary. Secret-sharing techniques achieve information-theoretic security by splitting a secret into multiple parts (called shares) and storing the shares across non-colluding servers. However, secret-sharing-based solutions suffer from high overheads due to multiple communication rounds among servers and/or information leakage due to access-patterns (i.e., the identity of rows satisfying a query) and volume (i.e., the number of rows satisfying a query).
We propose SeaSearch, an information-theoretically secure approach that uses both additive and multiplicative secret-sharing, to efficiently support a large class of selection queries involving conjunctive, disjunctive, and range conditions. Two major contributions of SeaSearch are:
(i) a new search algorithm using additive shares based on fingerprints, which were developed for string-matching over cleartext; and
(ii) two row retrieval algorithms: one is based on multiplicative shares and another is based on additive shares.
SeaSearch does not require communication among servers storing shares and does not reveal any information to an adversary based on access-patterns and volume.
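For background, the two share types SeaSearch combines can be sketched in a toy field ('multiplicative' is shown here in its literal product form, which may differ from the paper's exact scheme):

    import random
    from math import prod

    p = 2**61 - 1                              # toy prime field

    def additive_share(secret, k):             # shares sum to the secret
        shares = [random.randrange(p) for _ in range(k - 1)]
        return shares + [(secret - sum(shares)) % p]

    def multiplicative_share(secret, k):       # shares multiply to the secret
        shares = [random.randrange(1, p) for _ in range(k - 1)]
        return shares + [secret * pow(prod(shares) % p, -1, p) % p]

    assert sum(additive_share(42, 3)) % p == 42
    assert prod(multiplicative_share(42, 3)) % p == 42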
SoK: Pseudorandom Generation for Masked Cryptographic Implementation
This paper investigates pseudorandom generation in the context of masked cryptographic implementation. Although masking and pseudorandom generators (PRGs) have been distinctly studied for a long time, little literature studies how the randomness in a masked implementation should be generated. The lack of analysis on mask-bit generators makes the practical security of masked cryptographic implementation unclear, and practitioners (e.g., designers, implementers, and evaluators) may be confused about how to realize it. This paper provides a novel viewpoint and comprehensive analyses by developing three new models, which correspond to the practical scenarios of leakage assessment, quantitative evaluation of side-channel security (e.g., success rate), and practical deployment. We reveal what properties are required for each scenario. In particular, we support a long-held belief/folklore with a proof: for the output of a PRG used for masking, cryptographic security (i.e., randomness and unpredictability) is sufficient but not necessary; only statistical uniformity is necessary. In addition, we thoroughly investigate the SCA security of PRGs in the wild in the masking context. We conclude this paper with some recommendations for practitioners, along with a proposal of a leakage-resilient method with comparable performance.
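The object under study, in miniature: $d$-th order Boolean masking splits a sensitive byte into $d+1$ shares that XOR back to it, and the SoK's question is what quality of generator must produce such shares (Python's `secrets` module is used below purely for illustration):

    from functools import reduce
    import secrets

    def boolean_mask(x, d):
        shares = [secrets.randbits(8) for _ in range(d)]
        shares.append(x ^ reduce(lambda a, b: a ^ b, shares, 0))
        return shares

    shares = boolean_mask(0xA7, 2)             # second-order masking
    assert reduce(lambda a, b: a ^ b, shares) == 0xA7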
Verified Foundations for Differential Privacy
Differential privacy (DP) has become the gold standard for privacy-preserving data analysis, but implementing it correctly has proven challenging. Prior work has focused on verifying DP at a high level, assuming the foundations are correct and a perfect source of randomness is available. However, the underlying theory of differential privacy can be very complex and subtle. Flaws in basic mechanisms and random number generation have been a critical source of vulnerabilities in real-world DP systems.
In this paper, we present SampCert, the first comprehensive, mechanized foundation for differential privacy. SampCert is written in Lean with over 12,000 lines of proof. It offers a generic and extensible notion of DP, a framework for constructing and composing DP mechanisms, and formally verified implementations of Laplace and Gaussian sampling algorithms. SampCert provides (1) a mechanized foundation for developing the next generation of differentially private algorithms, and (2) mechanically verified primitives that can be deployed in production systems. Indeed, SampCert's verified algorithms power the DP offerings of Amazon Web Services (AWS), demonstrating its real-world impact.
SampCert's key innovations include: (1) a generic DP foundation that can be instantiated for various DP definitions (e.g., pure, concentrated, Rényi DP); (2) formally verified discrete Laplace and Gaussian sampling algorithms that avoid the pitfalls of floating-point implementations; and (3) a simple probability monad and novel proof techniques that streamline the formalization. To enable proving complex correctness properties of DP and random number generation, SampCert makes heavy use of Lean's extensive Mathlib library, leveraging theorems in Fourier analysis, measure and probability theory, number theory, and topology.
The Jacobi Factoring Circuit: Quantum Factoring with Near-Linear Gates and Sublinear Space and Depth
We present a compact quantum circuit for factoring a large class of integers, including some whose classical hardness is expected to be equivalent to RSA (but not including RSA integers themselves). To our knowledge, it is the first polynomial-time circuit to achieve sublinear qubit count for a classically-hard factoring problem; the circuit also achieves sublinear depth and nearly linear gate count. We build on the quantum algorithm for squarefree decomposition discovered by Li, Peng, Du and Suter (Nature Scientific Reports 2012), which relies on computing the Jacobi symbol in quantum superposition. Our circuit completely factors any number $N$, whose prime decomposition has distinct exponents, and finds at least one non-trivial factor if not all exponents are the same. In particular, to factor an $n$-bit integer $N=P^2 Q$ (with $P$ and $Q$ prime, and $Q<2^m$ for some $m$), our circuit uses $\widetilde{O}(m)$ qubits and has depth at most $\widetilde{O}(m + n/m)$, with $\widetilde{O}(n)$ quantum gates. When $m=\Theta(n^a)$ with $2/3 < a < 1$, the space and depth are sublinear in $n$, yet no known classical algorithms exploit the relatively small size of $Q$ to run faster than general-purpose factoring algorithms. We thus believe that factoring such numbers has potential to be the most concretely efficient classically-verifiable proof of quantumness currently known.
The technical core of our contribution is a new space-efficient quantum algorithm to compute the Jacobi symbol of $A$ mod $B$, in the regime where $B$ is classical and much larger than $A$. Crucially, our circuit reads the bits of the classical value $B$ in a streaming fashion, never storing more than $\widetilde{O}(\log A)$ qubits of quantum information at one time. In the context of the larger Jacobi algorithm for factoring $N = P^2Q$, this reduces the overall qubit count to be roughly proportional to the length of $Q$, rather than the length of $N$. Our circuit for computing the Jacobi symbol is also highly gate-efficient and parallelizable, achieving gate count $\widetilde{O}(\log B)$ and depth at most $\widetilde{O}(\log A + \log B/\log A)$. Finally, we note that our circuit for computing the Jacobi symbol generalizes to related problems, such as computing the greatest common divisor, and thus could be of independent interest.
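For reference, the classical Jacobi-symbol algorithm that the circuit evaluates in superposition, together with the property of $N = P^2Q$ it exploits (toy primes chosen only for the check):

    def jacobi(a, b):                     # b odd and positive
        assert b > 0 and b % 2 == 1
        a %= b
        result = 1
        while a:
            while a % 2 == 0:             # pull out factors of two
                a //= 2
                if b % 8 in (3, 5):       # (2/b) = -1 iff b = 3, 5 mod 8
                    result = -result
            a, b = b, a                   # quadratic reciprocity
            if a % 4 == 3 and b % 4 == 3:
                result = -result
            a %= b
        return result if b == 1 else 0

    P, Q = 10007, 65537
    N = P * P * Q                         # the square factor P^2 is invisible:
    assert all(jacobi(a, N) == jacobi(a, Q) for a in range(2, 50))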
Revisiting Boomerang Attacks on Lightweight ARX and AND-RX Ciphers with Applications to KATAN, SIMON and CHAM
In this paper, we investigate the security of lightweight block ciphers, focusing on those that utilize the ADD-Rotate-XOR (ARX) and AND-Rotate-XOR (AND-RX) design paradigms. More specifically, we examine their resilience against boomerang-style attacks. First, we propose an automated search strategy that leverages the boomerang connectivity table (BCT) for AND operations ($\wedge BCT$) to conduct a complete search for boomerang and rectangle distinguishers for AND-RX ciphers. The proposed search strategy automatically considers all possible $\wedge BCT$ switches in the middle of the boomerang to optimise distinguishing probability. The correctness of the search strategy was verified experimentally. We were able to find the best boomerang and rectangle distinguishers to date in the single-key model for the lightweight block ciphers KATAN32/48/64 and SIMON32/48. Next, we investigated BCT properties of ARX ciphers and discovered that a truncated boomerang switch could be formulated for the lightweight ARX cipher CHAM. We were able to find the best single-key and related-key rectangle distinguishers to date for CHAM. Our findings provide more accurate security margins of these lightweight ciphers against boomerang-style attacks.
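For orientation, the ordinary BCT of a bijective S-box (the paper's $\wedge BCT$ for AND operations is the specialised analogue) can be computed directly from its definition; the PRESENT S-box below is used purely as a familiar example:

    def bct(sbox):
        n = len(sbox)
        inv = [0] * n
        for x, y in enumerate(sbox):
            inv[y] = x
        return [[sum(inv[sbox[x] ^ do] ^ inv[sbox[x ^ di] ^ do] == di
                     for x in range(n))
                 for do in range(n)]
                for di in range(n)]

    PRESENT = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]
    t = bct(PRESENT)
    # The first row and column are always maximal (trivial switches).
    assert all(t[0][do] == 16 and t[di][0] == 16 for di in range(16) for do in range(16))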
Adaptive Special Soundness: Improved Knowledge Extraction by Adaptive Useful Challenge Sampling
Proving knowledge soundness of an interactive proof from scratch is often a challenging task. This has motivated the development of various special soundness frameworks which, in a nutshell, separate knowledge extractors into two parts: (1) an extractor to produce a set of accepting transcripts conforming to some structure; (2) a witness recovery algorithm to recover a witness from a set of transcripts with said structure. These frameworks take care of (1), so it suffices for a protocol designer to specify (2), which is often simple(r).
Recently, works by Bünz–Fisch (TCC'23) and Aardal et al. (CRYPTO'24) provide new frameworks, called almost special soundness and predicate special soundness, respectively. To handle insufficiencies of special soundness, they deviate from its spirit and augment it in different ways. Their changes are needed because special soundness does not allow the challenges for useful sets of transcripts to depend on the transcripts themselves, but only on the challenges in the transcripts. As a consequence, (generalised) special soundness cannot express extraction strategies which reduce a computational problem to finding "inconsistent" accepting transcripts, for example in PCP/IOP-based or lattice-based proof systems, and thus provides (very) sub-optimal extractors. In this work, we introduce adaptive special soundness, which captures extraction strategies exploiting inconsistencies between transcripts, e.g. transcripts containing different openings of the same commitment. Unlike (generalised) special soundness (Attema, Fehr, and Resch (TCC'23)), which specifies a target transcript structure, our framework allows specifying an extraction strategy which guides the extractor to sample challenges adaptively based on the history of prior transcripts. We extend the recent (almost optimal) extractor of Attema, Fehr, Klooß and Resch (ePrint 2023/1945) to our notion, and argue that it encompasses almost special soundness and predicate special soundness in many cases of interest.
As a challenging application, we modularise and generalise the lattice Bulletproofs analysis by Bünz–Fisch (TCC’23) using the adaptive special soundness framework. Moreover, we extend their analysis to the ring setting for a slightly wider selection of rings than rational integers.
A summary on the FRI low degree test
This document is an informal summary on the FRI low degree test [BSBHR18a], [BSCI+20], and DEEP algebraic linking from [BSGKS20]. Based on its most recent soundness analysis [BSCI+20], we discuss parameter settings for practical security levels, how FRI is turned into a polynomial commitment scheme, and the soundness of DEEP sampling in the list decoding regime. In particular, we illustrate the DEEP method applied to proving satisfiability of algebraic intermediate representations and prove a soundness error bound which slightly improves the one in [Sta21].
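A single FRI folding step in coefficient form, over a small STARK-friendly prime (illustration only; parameters are invented), shows the degree-halving the soundness analysis is about: write $f(x) = e(x^2) + x\,o(x^2)$ and fold to $g = e + c\,o$.

    p = 2**31 - 2**27 + 1                       # a STARK-friendly prime

    def fold(coeffs, c):
        return [(coeffs[i] + c * coeffs[i + 1]) % p
                for i in range(0, len(coeffs) - 1, 2)]

    def ev(cs, x):
        return sum(a * pow(x, i, p) for i, a in enumerate(cs)) % p

    f = [3, 1, 4, 1, 5, 9, 2, 6]                # degree < 8
    c, x = 12345, 67890
    g = fold(f, c)                              # degree < 4
    # Verifier-style check: g(x^2) = (f(x)+f(-x))/2 + c*(f(x)-f(-x))/(2x)
    even = (ev(f, x) + ev(f, (-x) % p)) * pow(2, -1, p) % p
    odd = (ev(f, x) - ev(f, (-x) % p)) * pow(2 * x, -1, p) % p
    assert ev(g, x * x % p) == (even + c * odd) % p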
Multilateral Trade Credit Set-off in MPC via Graph Anonymization and Network Simplex
Multilateral Trade Credit Set-off (MTCS) is a process run by a service provider that collects trade credit data (i.e. obligations from a firm to pay another firm) from a network of firms and detects cycles of debts that can be removed from the system. The process yields liquidity savings for the participants, who can discharge their debts without relying on expensive loans. We propose an MTCS protocol that protects firms' sensitive data, such as the obligation amount or the identity of the firms they trade with. Mathematically, this is analogous to solving the Minimum Cost Flow (MCF) problem over a graph of $n$ firms, where the $m$ edges are the obligations. State-of-the-art techniques for Secure MCF have an overall complexity of $O(n^{10} \log n)$ communication rounds, making it barely applicable even to small-scale instances. Our solution leverages novel secure techniques such as Graph Anonymization and Network Simplex to reduce the complexity of the MCF problem to $O(\max(n, \log\log(n+m)))$ rounds of interaction per pivot operation, in which $O(\max(n^2, nm))$ comparisons and multiplications are performed. Experimental results show the tradeoff between privacy and optimality.
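The set-off step itself is elementary (what the paper makes private is finding the cycles); a worked toy with invented amounts:

    debts = {("A", "B"): 30, ("B", "C"): 20, ("C", "A"): 25}  # cycle A->B->C->A
    m = min(debts.values())                 # max amount the cycle can absorb
    settled = {edge: amount - m for edge, amount in debts.items()}
    print(settled)   # {('A','B'): 10, ('B','C'): 0, ('C','A'): 5}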
An Unstoppable Ideal Functionality for Signatures and a Modular Analysis of the Dolev-Strong Broadcast
Many foundational results in the literature of consensus follow the Dolev-Yao model (FOCS '81), which treats digital signatures as ideal objects with perfect correctness and unforgeability. However, no work has yet formalized an ideal signature scheme that is both suitable for this methodology and possible to instantiate, or a composition theorem that ensures security when instantiating it cryptographically.
The Universal Composition (UC) framework would ensure composition if we could specify an ideal functionality for signatures and prove it UC-realizable. Unfortunately, all signature functionalities heretofore proposed are problematic when used to construct higher-level protocols: either the functionality internally computes a computationally secure signature, and therefore higher-level protocols must rely upon computational assumptions, or else the functionality introduces a new attack surface that does not exist when the functionality is realized. As a consequence, no consensus protocol has ever been analyzed in a modular way using existing ideal signature functionalities.
We propose a new unstoppable ideal functionality for signatures that is UC-realized exactly by the set of standard EUF-CMA signature schemes that are consistent and linear time. No adversary can prevent honest parties from obtaining perfectly ideal signature services from our functionality. We showcase its usefulness by presenting the first modular analysis of the Dolev-Strong broadcast protocol (SICOMP '83) in the UC framework. Our result can be interpreted as a step toward a sound realization of the Dolev-Yao methodology. We also generalize our result to the threshold setting.
Simple is COOL: Graded Dispersal and its Applications for Byzantine Fault Tolerance
The COOL protocol of Chen (DISC'21) is a major advance that enables perfect security for various tasks (in particular, Byzantine Agreement in Synchrony and Reliable Broadcast in Asynchrony). For an input of size $L$ bits, its communication complexity is $O(nL + n^2 \log n)$, which is optimal up to a $\log n$ factor.
Unfortunately, Chen’s analysis is rather intricate and complex.
Our main contribution is a simple analysis of a new variant of COOL based on elementary counting arguments.
Our main consistency proof takes less than two pages (instead of over 20 pages), making the COOL protocol much more accessible. In addition, the simple analysis allows us to improve the protocol by reducing one round of communication and reducing the communication complexity by 40%.
In addition, we suggest a new way of extracting the core properties of COOL as a new primitive, which we call Graded Dispersal. We show how Graded Dispersal can then be used to obtain efficient solutions for Byzantine Agreement, Verifiable Information Dispersal, Gradecast, and Reliable Broadcast (in both Synchrony and Asynchrony, where appropriate). Our improvement of COOL directly applies here, and we improve the state-of-the-art in all those primitives by reducing at least one round and 40% communication.
Circle STARKs
Traditional STARKs require a cyclic group of a smooth order in the field. This allows efficient interpolation of points using the FFT algorithm, and writing constraints that involve neighboring rows. The Elliptic Curve FFT (ECFFT, Part I and II) introduced a way to make efficient STARKs for any finite field, by using a cyclic group of an elliptic curve. We show a simpler construction along the lines of ECFFT, over the circle curve $x^2 + y^2 = 1$. When $p + 1$ is divisible by a large power of $2$, this construction is as efficient as traditional STARKs and ECFFT. Applied to the Mersenne prime $p = 2^{31} - 1$, which has been recently advertised in IACR ePrint 2023/824, our preliminary benchmarks indicate a speed-up by a factor of $1.4$ compared to a traditional STARK using the BabyBear prime $p = 2^{31} - 2^{27} + 1$.
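The algebraic object in miniature: points of $x^2 + y^2 = 1$ over $\mathbb{F}_p$ compose like complex numbers on the unit circle, and for the Mersenne prime $p = 2^{31} - 1$ the group order $p + 1 = 2^{31}$ is a power of two (sketch; the base point below is just one solution of the circle equation):

    p = 2**31 - 1

    def mul(a, b):                        # rotation = complex multiplication
        (x1, y1), (x2, y2) = a, b
        return ((x1 * x2 - y1 * y2) % p, (x1 * y2 + y1 * x2) % p)

    def power(g, e):                      # square-and-multiply in the group
        acc = (1, 0)                      # the identity (angle zero)
        while e:
            if e & 1:
                acc = mul(acc, g)
            g, e = mul(g, g), e >> 1
        return acc

    g = (2, pow(-3, (p + 1) // 4, p))     # y with y^2 = 1 - 2^2 = -3 (mod p)
    assert (g[0] ** 2 + g[1] ** 2) % p == 1
    assert power(g, p + 1) == (1, 0)      # the order divides p + 1 = 2^31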
General Practical Cryptanalysis of the Sum of Round-Reduced Block Ciphers and ZIP-AES
We introduce a new approach between classical security proofs of modes of operation and dedicated security analysis for known cryptanalysis families: General Practical Cryptanalysis. This allows us to analyze generically the security of the sum of two keyed permutations against known attacks. In many cases (of course, not all), we show that the security of the sum is strongly linked to that of the composition of the two permutations. This enables the construction of beyond-birthday bound secure low-latency PRFs by cutting a known-to-be-secure block cipher into two equal parts. As a side result, our general analysis shows an inevitable difficulty for the key recovery based on differential-type attacks against the sum, which leads to a correction of previously published attacks on the dedicated design Orthros.
Carousel: Fully Homomorphic Encryption from Slot Blind Rotation Technique
Fully Homomorphic Encryption (FHE) enables secure computation of functions on ciphertexts without requiring decryption. Specifically, AP-like HE schemes exploit an intrinsic bootstrapping method called blind rotation. In blind rotation, a look-up table is homomorphically evaluated on the input ciphertext through the iterative multiplication of monomials. However, the algebraic structure of the multiplicative group of monomials imposes certain limitations on the input and output plaintext space: 1. only a fraction of the input plaintext space can be bootstrapped, 2. the output plaintext space is restricted to subsets of real numbers.
In this paper, we design a novel bootstrapping method called slot blind rotation. The key idea of our approach is to utilize the automorphism group instead of monomials. More specifically, the look-up table is encoded into a single polynomial using SIMD (Single Instruction Multiple Data) packing and is rotated via a series of homomorphic multiplications and automorphisms. This method achieves two significant advantages: 1. the entire input plaintext space can be bootstrapped, 2. a broader output plaintext space, such as the complex numbers or finite fields/rings, can be supported.
Finally, we present a new HE scheme leveraging the slot blind rotation technique and provide a proof-of-concept implementation. We also demonstrate benchmark results and provide recommended parameter sets.
NLAT: the NonLinear Approximation Table of Vectorial Boolean Mappings
This paper studies an extension of the Linear Approximation Table (LAT) of vectorial Boolean mappings (also known as Substitution boxes) used in Linear Cryptanalysis (LC). This extended table is called NonLinear Approximation Table (NLAT).
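For reference, the classical LAT that NLAT extends, computed directly from its definition $\mathrm{LAT}[a][b] = \sum_x (-1)^{\langle a,x\rangle \oplus \langle b,S(x)\rangle}$ on a textbook 4-bit S-box:

    def parity(v):
        return bin(v).count("1") & 1

    def lat(sbox, n):
        size = 1 << n
        return [[sum(1 - 2 * (parity(a & x) ^ parity(b & sbox[x]))
                     for x in range(size))
                 for b in range(size)]
                for a in range(size)]

    S = [0xE, 4, 0xD, 1, 2, 0xF, 0xB, 8, 3, 0xA, 6, 0xC, 5, 9, 0, 7]
    table = lat(S, 4)
    assert table[0][0] == 16          # the trivial approximation always holds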
Covert 19th century political intrigues of Tenerife nobility revealed by cryptanalyzing an encrypted letter
This article presents a cryptanalysis of a 19th-century encrypted manuscript discovered in the archives of Conde de Siete Fuentes in Tenerife, Canary Islands, Spain. The manuscript, preserved by the heirs of the 6th Count of Valle de Salazar, utilizes a polyalphabetic substitution cipher. The cryptanalysis was performed by applying statistical frequency analysis and developing a Python script for decryption, resulting in the authors successfully deciphering the message. The decrypted letter reveals political communications discussing the strategic positioning of Tenerife as the capital, the dissolution of local councils, and the influence of key political figures. The analysis compares the cipher with historical encryption techniques, and identifies the unique characteristics of the manuscript's encryption method. The study highlights the political dynamics and alliances within Tenerife's nobility and their interactions with the central Spanish government, providing significant insights into both the cryptographic practices and the political maneuvers of the time.
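The first step of such an analysis is routine frequency counting; a minimal sketch (the ciphertext below is a placeholder, not the manuscript's text):

    from collections import Counter

    ciphertext = "GXGPC VGZGP GXGVC"      # placeholder string
    counts = Counter(ch for ch in ciphertext if ch.isalpha())
    total = sum(counts.values())
    for symbol, cnt in counts.most_common(5):
        print(f"{symbol}: {cnt / total:.1%}")
    # Peaks are then matched against expected Spanish letter frequencies to
    # recover the substitution alphabets of the polyalphabetic cipher.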
Dynamic-FROST: Schnorr Threshold Signatures with a Flexible Committee
Threshold signatures enable any subgroup of predefined cardinality $t$ out of a committee of $n$ participants to generate a valid, aggregated signature.
Although several $(t,n)$-threshold signature schemes exist, most of them assume that the threshold $t$ and the set of participants do not change over time.
Practical applications of threshold signatures might benefit from the possibility of updating the threshold or the committee of participants. Examples of such applications are consensus algorithms and blockchain wallets.
In this paper, we present Dynamic-FROST (D-FROST, for short) that combines FROST, a Schnorr threshold signature scheme, with CHURP, a dynamic proactive secret sharing scheme. The resulting protocol is the first Schnorr threshold signature scheme that accommodates changes in both the committee and the threshold value without relying on a trusted third party.
Besides detailing the protocol, we present a proof of its security: like the original signing scheme, D-FROST preserves the property of Existential Unforgeability under Chosen-Message Attack.
Efficient theta-based algorithms for computing $(\ell, \ell)$-isogenies on Kummer surfaces for arbitrary odd $\ell$
Isogeny-based cryptography is one of the candidates for post-quantum cryptography. Recently, many isogeny-based cryptosystems using isogenies between Kummer surfaces have been proposed. Most of those cryptosystems use $(2,2)$-isogenies. However, to enhance the possibilities of cryptosystems, higher-degree isogenies, say $(\ell,\ell)$-isogenies for an odd $\ell$, are also crucial. For an odd $\ell$, Lubicz and Robert gave a formula to compute $(\ell)^g$-isogenies in general dimension $g$. In this paper, we propose explicit and efficient algorithms to compute $(\ell,\ell)$-isogenies between Kummer surfaces, based on the Lubicz-Robert formula. In particular, we propose two algorithms for computing the codomain of the isogeny and two algorithms for evaluating the image of a point under the isogeny. Then, we count the number of arithmetic operations required for each of our proposed algorithms, and determine the most efficient algorithm in terms of the number of arithmetic operations from each of the two types of algorithms for each $\ell$. As an application, using the most efficient one, we implemented the SIDH attack on B-SIDH in SageMath. In a setting that originally claimed 128-bit security, our implementation was able to recover the secret key in about 11 hours.
Janus: Fast Privacy-Preserving Data Provenance For TLS
Web users can gather data from secure endpoints and demonstrate the provenance of sensitive data to any third party by using privacy-preserving TLS oracles. In practice, privacy-preserving TLS oracles remain limited and cannot selectively verify larger sensitive data sets. In this work, we introduce a new oracle protocol, which reaches new scales in selectively verifying the provenance of confidential web data. The novelty of our work is a construction which deploys an honest-verifier zero-knowledge proof system in the asymmetric privacy setting while retaining security against malicious adversaries. Concerning TLS 1.3, we optimize the garble-then-prove paradigm in a security setting with malicious adversaries. Here, we show that a specific operation mode of TLS 1.3 allows the use of semi-honest secure computations without authentic garbling for the majority of computations in the garble phase. Our performance improvements reach new efficiency scales in verifying private data provenance and facilitate the practical deployment of privacy-preserving TLS oracles in web browsers.
ORIGO: Proving Provenance of Sensitive Data with Constant Communication
Transport Layer Security (TLS) is foundational for safeguarding client-server communication. However, it does not extend integrity guarantees to third-party verification of data authenticity. If a client wants to present data obtained from a server, it cannot convince any other party that the data has not been tampered with.
TLS oracles ensure data authenticity beyond the client-server TLS connection, such that clients can obtain data from a server and ensure provenance to any third party, without server-side modifications. Generally, a TLS oracle involves a third party, the verifier, in a TLS session to verify that the data obtained by the client is accurate. Existing protocols for TLS oracles are communication-heavy, as they rely on interactive protocols. We present ORIGO, a TLS oracle with constant communication. Similar to prior work, ORIGO introduces a third party in a TLS session, and provides a protocol to ensure the authenticity of data transmitted in a TLS session, without forfeiting its confidentiality. Compared to prior work, we rely on intricate details specific to TLS 1.3, which allow us to prove correct key derivation, authentication and encryption within a Zero Knowledge Proof (ZKP). This, combined with optimizations for TLS 1.3, leads to an efficient protocol with constant communication in the online phase. Our work reduces online communication by $375 \times$ and online runtime by up to $4.6 \times$, compared to prior work.
Relating Code Equivalence to Other Isomorphism Problems
We study the complexity of the Code Equivalence Problem on linear error-correcting codes by relating its variants to isomorphism problems on other discrete structures---graphs, lattices, and matroids. Our main results are a fine-grained reduction from the Graph Isomorphism Problem to the Linear Code Equivalence Problem over any field $\mathbb{F}$, and a reduction from the Linear Code Equivalence Problem over any field $\mathbb{F}_p$ of prime, polynomially bounded order $p$ to the Lattice Isomorphism Problem. Both of these reductions are simple and natural. We also give reductions between variants of the Code Equivalence Problem, and study the relationship between isomorphism problems on codes and linear matroids.
Trustworthy Approaches to RSA: Efficient Exploitation Strategies Based on Common Modulus
With the increasing integration of crowd computing, new vulnerabilities emerge in widely used cryptographic systems like the RSA cryptosystem, whose security is based on the factoring problem. It is strongly advised to avoid using the same modulus to produce two pairs of public-private keys, as the cryptosystem would be rendered vulnerable to common modulus attacks. Such attacks can take two forms: one that aims to factorize the common modulus based on one key pair and the other that aims to decrypt certain ciphertexts generated by two public keys if the keys are co-prime. This paper introduces a new type of common modulus attack on the RSA cryptosystem. In our proposed attack, given one public-private key pair, an attacker can obtain the private key corresponding to a given public key in RSA decryption. This allows the adversary to decrypt any ciphertext generated using this public key. It is worth noting that the proposed attack can be used in the CRT model of RSA. In addition, we propose a parallelizable factoring algorithm with an order equivalent to a cyclic attack in the worst-case scenario.
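The second classical attack mentioned above (decrypting with two co-prime public keys under a shared modulus) is short enough to state in full; the paper's new attack goes further by recovering the private key itself. A textbook sketch with toy Mersenne primes:

    def egcd(x, y):                        # returns (a, b) with a*x + b*y = gcd
        if y == 0:
            return 1, 0
        a, b = egcd(y, x % y)
        return b, a - (x // y) * b

    def common_modulus_decrypt(n, e1, c1, e2, c2):
        a, b = egcd(e1, e2)                # a*e1 + b*e2 = 1 for co-prime e1, e2
        t1 = pow(c1, a, n) if a >= 0 else pow(pow(c1, -1, n), -a, n)
        t2 = pow(c2, b, n) if b >= 0 else pow(pow(c2, -1, n), -b, n)
        return t1 * t2 % n                 # = m^(a*e1 + b*e2) = m

    n = (2**31 - 1) * (2**61 - 1)          # toy modulus from Mersenne primes
    m, e1, e2 = 424242, 3, 5
    c1, c2 = pow(m, e1, n), pow(m, e2, n)
    assert common_modulus_decrypt(n, e1, c1, e2, c2) == m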
Security Analysis of ASCON Cipher under Persistent Faults
This work investigates persistent fault analysis on the ASCON cipher, which has recently been standardized by NIST for lightweight cryptography applications. In a persistent fault, the fault, once injected through RowHammer injection techniques, exists in the system during the entire encryption phase. In this work, we propose a model to mount persistent fault analysis (PFA) on the ASCON cipher. In the finalization round of the ASCON cipher, we identify that the fault-injected S-Box operation in the permutation round, $p^{12}$, is vulnerable to leaking information about the secret key. The model can exist in two variants: a single instance of fault-injected S-Box out of 64 parallel S-Box invocations, and the same faulty S-Box iterated 64 times. The attack model demonstrates that any Spongent construction operating in authenticated encryption with associated data (AEAD) mode is vulnerable to persistent faults. In this work, we demonstrate the scenario of a single fault wherein the fault, once injected, is persistent until the device is powered off. Using the proposed method, we successfully retrieve the 128-bit key in ASCON. Our experiments show that the minimum number and the maximum number of queries required are 63 plaintexts and 451 plaintexts, respectively. Moreover, we observe that the number of queries required to mount the attack depends on the fault location in the S-box LUT, as observed from the plots reporting the minimum number of queries and average number of queries for 100 key values.
Batched Distributed Point Function from Sparse LPN and Homomorphic Secret Sharing
A function secret sharing (FSS) scheme ($\mathsf{gen},\mathsf{eval}$) for a class of programs $\mathcal{F}$ allows a dealer to secret share any function $f \in \mathcal{F}$, such that each function share hides the function, and the shares can be used to non-interactively compute additive shares of $f(x)$ for any input $x$. FSS-related applications often require the dealer to generate and share secret sharings for a batch of functions.
We initiate the study of batched function secret sharing - where the objective is to secret share a set of functions from the class $\mathcal{F}$ while minimizing the size of the collection of FSS keys.
We use standard homomorphic secret sharing (HSS) schemes, a variant of the Learning Parity with Noise assumption, and the Quadratic Residuosity assumption to construct batched FSS schemes for point functions with single-bit and multi-bit output. Our scheme is asymptotically superior to naively batching state-of-the-art FSS schemes for point functions. Concretely, our batch key sizes are smaller by factors of $3\times$ to $80\times$ as the batch size increases from $2^{13}$ to $2^{19}$. Because our protocol relies on public-key operations, it is inefficient in a LAN setting; however, it demonstrates up to a 120-fold improvement in a WAN setting with slow network bandwidth.
As a building block in our protocols, we introduce a new HSS ciphertext compression algorithm that can decompress a short compressed ciphertext into low-noise ciphertexts for an array of input message bits. This primitive might be of independent interest for other HSS-related applications.
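For readers new to FSS, the correctness contract for point functions can be pinned down with a deliberately trivial construction (our sketch: it additively shares the whole truth table, so the keys are linear-size and offer no succinctness, unlike real DPF keys, which are logarithmic):

    import secrets

    MOD = 2 ** 32

    def gen(a, b, domain_size):
        # Point function f_{a,b}(x) = b if x == a else 0, shared additively.
        # Each key alone is uniformly random and hides (a, b).
        table = [(b if x == a else 0) for x in range(domain_size)]
        k0 = [secrets.randbelow(MOD) for _ in range(domain_size)]
        k1 = [(t - r) % MOD for t, r in zip(table, k0)]
        return k0, k1

    def eval_share(key, x):
        return key[x]

    k0, k1 = gen(a=5, b=7, domain_size=16)
    for x in range(16):
        assert (eval_share(k0, x) + eval_share(k1, x)) % MOD == (7 if x == 5 else 0)

The batching question the paper studies is how to share many such functions at once while keeping the total key material far below the naive per-function cost.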
On the Security of Nova Recursive Proof System
Nova is a new type of recursive proof system that uses a folding scheme as its core building block. This brilliant idea of folding relations can significantly reduce the recursion overhead. In this paper, we study some issues related to Nova’s soundness proof, which relies on the soundness of the folding scheme in a recursive manner.
First, due to its recursive nature, the proof strategy inevitably causes the running time of the recursive extractor to grow polynomially with each additional recursive step. This constrains Nova's soundness model to only logarithmically many recursive steps. Consequently, the soundness proof in this limited model does not guarantee soundness for a number of rounds linear in the security parameter, such as 128 rounds for 128-bit security. On the other hand, there are no known attacks on arbitrary-depth recursion of Nova, leaving a gap between theoretical security guarantees and real-world attacks. We aim to bridge this gap in two opposite directions. In the negative direction, we present a recursive proof system that is unforgeable in the log-round model but forgeable when used over linearly many rounds. This shows that the soundness proof in the log-round model might not be applicable to real-world applications that require linearly many rounds. In the positive direction, we show that when Nova uses a specific group-based folding scheme, its knowledge soundness over polynomially many rounds can be proven in the algebraic group model with our modifications. To the best of our knowledge, this is the first result establishing Nova's soundness over polynomially many rounds.
Second, the folding scheme is converted non-interactively via the Fiat-Shamir transformation and then arithmetized into R1CS. Therefore, the soundness of Nova using the non-interactive folding scheme essentially relies on the heuristic random oracle instantiation in the standard model. In our new soundness proof for Nova in the algebraic group model, we replace this heuristic with a cryptographic hash function with a special property. We view this hash function as an independent object of interest and expect it to help further understand the soundness of Nova.
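For context, the folding step that this soundness analysis recurses on can be summarized as follows (our paraphrase of the published Nova construction for relaxed R1CS; $r$ is the verifier's random challenge and $T$ the committed cross term):

$$
\begin{aligned}
(AZ) \circ (BZ) &= u \cdot (CZ) + E, \qquad Z = (W, x, u), \\
T &= (AZ_1) \circ (BZ_2) + (AZ_2) \circ (BZ_1) - u_1 \cdot (CZ_2) - u_2 \cdot (CZ_1), \\
E &\gets E_1 + r \cdot T + r^2 \cdot E_2, \quad u \gets u_1 + r \cdot u_2, \quad W \gets W_1 + r \cdot W_2, \quad x \gets x_1 + r \cdot x_2.
\end{aligned}
$$

The extractor must rewind over the challenge $r$ at every recursive step, which is precisely the source of the per-step polynomial blow-up discussed above.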
Qubit Optimized Quantum Implementation of SLIM
The advent of quantum computing has profound implications for current technologies, offering advancements in optimization while posing significant threats to cryptographic algorithms. Public-key cryptosystems relying on prime factorization or discrete logarithms are particularly vulnerable, whereas block ciphers (BCs) remain secure through increased key lengths. In this study, we introduce a novel quantum implementation of SLIM, a lightweight block cipher optimized for 32-bit plaintext and an 80-bit key, based on a Feistel structure. This implementation distinguishes itself from other BC quantum implementations in its class (64–128-bit) by utilizing a minimal number of qubits while maintaining robust cryptographic strength and efficiency. By employing an innovative design that minimizes qubit usage, this work highlights SLIM’s potential as a resource-efficient and secure candidate for quantum-resistant encryption protocols.
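For orientation, a balanced Feistel network on a 32-bit block has the following shape (a generic sketch; SLIM's actual round function and key schedule are not reproduced here). Each round is an invertible in-place update, which is exactly the structure qubit-frugal quantum implementations exploit:

    def feistel_encrypt(plaintext32, round_keys, F):
        # Split the 32-bit block into 16-bit halves
        L = (plaintext32 >> 16) & 0xFFFF
        R = plaintext32 & 0xFFFF
        for k in round_keys:
            # The XOR update is its own inverse given k, so a quantum circuit
            # can overwrite the half in place instead of allocating ancillas
            L, R = R, L ^ (F(R, k) & 0xFFFF)
        return (L << 16) | R

Running the loop backwards with swapped halves decrypts, which is why Feistel ciphers need no separate inverse circuit.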
Ballot secrecy: Security definition, sufficient conditions, and analysis of Helios
We formalise ballot secrecy as a game between a benign challenger, a malicious adversary, and a voting system: the adversary is tasked with distinguishing the observed world from some parallel one, with only the challenger knowing which world is under observation. Our formalisation improves earlier work to ensure attacks are detected when ballot collection is adversary-controlled. We also formalise ballot independence (adapting asymmetric encryption's security game) and prove that independence suffices for secrecy in voting systems with zero-knowledge tallying proofs. Using that proof simplification, we present blueprints for constructing voting systems with certified ballot secrecy from non-malleable encryption. Additionally, we analyse the Helios voting system and its mixnet variant, finding that Helios does not satisfy secrecy; earlier techniques missed the attack because tallying-algorithm inputs were assumed to be uncompromised, implicitly requiring all ballot processing to be trusted, an assumption we prefer to avoid rather than accept as risk. Our blueprint guides the construction of a variant proven to ensure secrecy.
Impact Tracing: Identifying the Culprit of Misinformation in Encrypted Messaging Systems
While encrypted messaging systems provide end-to-end security, they obstruct content moderation. As a result, misinformation proliferates in these systems, exacerbating online hate and harassment. The paradigm of ``Reporting-then-Tracing" shows great potential in mitigating the spread of misinformation. For instance, message traceback (CCS'19) traces all the dissemination paths of a message, while source tracing (CCS'21) traces its originator. However, message traceback lacks privacy preservation for non-influential users (e.g., users who only receive the message once), while source tracing maintains privacy but only provides limited traceability.
In this paper, we initiate the study of impact tracing. Intuitively, impact tracing traces influential spreaders central to disseminating misinformation while providing privacy protection for non-influential users. We introduce noises to hide non-influential users and demonstrate that these noises do not hinder the identification of influential spreaders. Then, we formally prove our scheme's security and show it achieves differential privacy protection for non-influential users. Additionally, we define three metrics to evaluate its traceability, correctness, and privacy using real-world datasets. The experimental results show that our scheme identifies the most influential spreaders with accuracy from 82% to 99% as the amount of noise varies. Meanwhile, our scheme requires only a 6-byte platform storage overhead for each message while maintaining a low messaging latency (< 0.25ms).
Orbweaver: Succinct Linear Functional Commitments from Lattices
We present Orbweaver, a plausibly post-quantum functional commitment for linear relations that achieves quasilinear prover time together with $O(\log n)$ proof size and polylogarithmic verifier time. Orbweaver enables evaluation of linear functions on committed vectors over cyclotomic rings and the integers. It is extractable, preprocessing, non-interactive, structure-preserving, and supports compact public proof aggregation. The security of our scheme is based on the $k$-$R$-ISIS assumption (and its knowledge counterpart), whereby we require a trusted setup to generate a universal structured reference string. We use Orbweaver to construct succinct univariate and multilinear polynomial commitments.
Concretely, our scheme has smaller proofs than most other succinct post-quantum arguments for large statements. For binary vectors of length $2^{30}$ we achieve $302$KiB linear map evaluation proofs with evaluation binding, and $1$MiB proofs when extractability is required; for $32$-bit integers these sizes are $494$KiB and $1.6$MiB, respectively.
Derecho: Privacy Pools with Proof-Carrying Disclosures
A privacy pool enables clients to deposit units of a cryptocurrency into a shared pool where ownership of deposited currency is tracked via a system of cryptographically hidden records. Clients may later withdraw from the pool without linkage to previous deposits. Some privacy pools also support hidden transfer of currency ownership within the pool. In August 2022, the U.S. Department of the Treasury sanctioned Tornado Cash, the largest Ethereum privacy pool, on the premise that it enables illicit actors to hide the origin of funds, citing its usage by the DPRK-sponsored Lazarus Group to launder over \$455 million worth of stolen cryptocurrency. This ruling effectively made it illegal for U.S. persons/institutions to use or accept funds that went through Tornado Cash, sparking a global debate among privacy rights activists and lawmakers. Against this backdrop, we present Derecho, a system that institutions could use to request cryptographic attestations of fund origins rather than naively rejecting all funds coming from privacy pools. Derecho is a novel application of proof-carrying data, which allows users to propagate allowlist membership proofs through a privacy pool's transaction graph. Derecho is backwards-compatible with existing Ethereum privacy pool designs, adds no overhead in gas costs, and costs users only a few seconds to produce attestations.
Mira: Efficient Folding for Pairing-based Arguments
Pairing-based arguments offer remarkably small proofs and space-efficient provers, but aggregating such proofs remains costly. Groth16 SNARKs and KZG polynomial commitments are prominent examples of this class of arguments. These arguments are widely deployed in decentralized systems, with millions of proofs generated per day. Recent folding schemes have greatly reduced the cost of proving incremental computations, such as batch proof verification. However, existing constructions require encoding pairing operations in generic constraint systems, leading to high prover overhead. In this work, we introduce Mira, a folding scheme that directly supports pairing-based arguments. We construct this folding scheme by generalizing the framework in Protostar to support a broader class of special-sound protocols. We demonstrate the versatility and efficiency of this framework through two key applications: Groth16 proof aggregation and verifiable ML inference. Mira achieves $5.8\times$ faster prover time and $9.7\times$ lower memory usage than the state-of-the-art proof aggregation system while maintaining a constant-size proof. To improve the efficiency of verifiable ML inference, we provide a new lincheck protocol with a verifier degree that is independent of the matrix order. We show that Mira scales effectively to larger models, overcoming the memory bottlenecks of current schemes.
Hash-Prune-Invert: Improved Differentially Private Heavy-Hitter Detection in the Two-Server Model
Differentially private (DP) heavy-hitter detection is an important primitive for data analysis. Given a threshold $t$ and a dataset of $n$ items from a domain of size $d$, such detection algorithms ignore items occurring fewer than $t$ times while identifying items occurring more than $t + \Delta$ times; we call $\Delta$ the error margin. In the central model where a curator holds the entire dataset, $(\varepsilon,\delta)$-DP algorithms can achieve error margin $\Theta(\frac 1 \varepsilon \log \frac 1 \delta)$, which is optimal when $d \gg 1/\delta$.
Several works, e.g., Poplar (S&P 2021), have proposed protocols in which two or more non-colluding servers jointly compute the heavy hitters from inputs held by $n$ clients. Unfortunately, existing protocols suffer from an undesirable dependence on $\log d$ in terms of both server efficiency (computation, communication, and round complexity) and accuracy (i.e., error margin), making them unsuitable for large domains (e.g., when items are kB-long strings, $\log d \approx 10^4$).
We present hash-prune-invert (HPI), a technique for compiling any heavy-hitter protocol with the $\log d$ dependencies mentioned above into a new protocol with improvements across the board: computation, communication, and round complexity depend (roughly) on $\log n$ rather than $\log d$, and the error margin is independent of $d$. Our transformation preserves privacy against an active adversary corrupting at most one of the servers and any number of clients. We apply HPI to an improved version of Poplar, also introduced in this work, that improves Poplar's error margin by roughly a factor of $\sqrt{n}$ (regardless of $d$). Our experiments confirm that the resulting protocol improves efficiency and accuracy for large $d$.
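To make the hashing idea concrete, here is a plaintext-only sketch (our illustration; the actual protocol runs distributed across two servers, and the prune and invert steps are omitted). Mapping kB-long items into a domain of size roughly $n^2$ avoids collisions with high probability, so detection only needs $O(\log n)$ bits per item instead of $\log d$:

    import hashlib
    from collections import Counter

    def short_hash(item: bytes, n: int) -> int:
        # Compress an arbitrarily long item into a domain of size n^2;
        # by a birthday bound, distinct items collide with probability O(1)/1
        # only once n^2 is too small, so n^2 suffices for n items w.h.p.
        digest = hashlib.sha256(item).digest()
        return int.from_bytes(digest[:8], "big") % (n * n)

    items = [b"alice"] * 40 + [b"bob"] * 25 + [b"carol"] * 3
    n = len(items)
    counts = Counter(short_hash(it, n) for it in items)
    heavy = {h for h, c in counts.items() if c >= 10}  # threshold t = 10
    # "invert" (not shown) maps the heavy hashes back to the original strings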
An Abstract Multi-Forking Lemma
In this work we state and prove an abstract version of the multi-forking lemma of Pointcheval and Stern from EUROCRYPT'96. Earlier, Bellare and Neven had given an abstract version of the forking lemma for two-collisions (CCS'06). While the original purpose of the forking lemma was to prove security of signature schemes in the random oracle methodology, the abstract forking lemma can be used to obtain security proofs for multi-signatures, group signatures, and compilations of interactive protocols under the Fiat-Shamir random-oracle methodology.
The Revisited Hidden Weight Bit Function
The Hidden Weight Bit Function (HWBF) has drawn considerable attention for its simplicity and cryptographic potential. Despite its ease of implementation and favorable algebraic properties, its low nonlinearity limits its direct application in modern cryptographic designs. In this work, we revisit the HWBF and propose a new weightwise quadratic variant obtained by combining the HWBF with a bent function. This construction offers improved cryptographic properties while remaining computationally efficient. We analyze the balancedness, nonlinearity, and other criteria of this function, presenting theoretical bounds and experimental results to highlight its advantages over existing functions in similar use cases. The different techniques we introduce to study the nonlinearity of this function also enable us to bound the nonlinearity of a broad family of weightwise quadratic functions, both theoretically and practically. We believe these methods are of independent interest.
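For reference, the HWBF itself is easy to state and check (our sketch, using the common convention that outputs the coordinate of $x$ indexed by its Hamming weight; indexing conventions vary in the literature). Its balancedness, verified below by brute force, is one of the favorable properties the abstract mentions:

    from itertools import product

    def hwbf(x):
        # Hidden Weight Bit Function: the bit of x indexed by its Hamming
        # weight (1-indexed); 0 on the all-zero input
        w = sum(x)
        return 0 if w == 0 else x[w - 1]

    n = 8
    ones = sum(hwbf(x) for x in product((0, 1), repeat=n))
    assert ones == 2 ** (n - 1)  # HWBF is balanced: exactly half the outputs are 1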
Fully Homomorphic Encryption for Cyclotomic Prime Moduli
This paper presents a Generalized BFV (GBFV) fully homomorphic encryption scheme that encrypts plaintext spaces of the form $\mathbb{Z}[x]/(\Phi_m(x), t(x))$ with $\Phi_m(x)$ the $m$-th cyclotomic polynomial and $t(x)$ an arbitrary polynomial. GBFV encompasses both BFV where $t(x) = p$ is a constant, and the CLPX scheme (CT-RSA 2018) where $m = 2^k$ and $t(x) = x-b$ is a linear polynomial. The latter can encrypt a single huge integer modulo $\Phi_m(b)$, has much lower noise growth than BFV (linear in $m$ instead of exponential), but cannot be bootstrapped.
We show that by a clever choice of $m$ and higher degree polynomial $t(x)$, our scheme combines the SIMD capabilities of BFV with the low noise growth of CLPX, whilst still being efficiently bootstrappable. Moreover, we present parameter families that natively accommodate packed plaintext spaces defined by a large cyclotomic prime, such as the Fermat prime $\Phi_2(2^{16}) = 2^{16} + 1$ and the Goldilocks prime $\Phi_6(2^{32}) = 2^{64} - 2^{32} + 1$. These primes are often used in homomorphic encryption applications and zero-knowledge proof systems.
Due to the lower noise growth, e.g. for the Goldilocks prime, GBFV can evaluate circuits whose multiplicative depth is more than $5$ times larger than native BFV. As a result, we can evaluate either larger circuits or work with much smaller ring dimensions. In particular, we can natively bootstrap GBFV at 128-bit security for a large prime, already at ring dimension $2^{14}$, which was impossible before. We implemented the GBFV scheme on top of the SEAL library and achieve a latency of only $2$ seconds to bootstrap a ciphertext encrypting up to $8192$ elements modulo $2^{16} + 1$.
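For intuition (our addition, specializing to the CLPX case $t(x) = x - b$ described above): reducing modulo $t(x)$ evaluates polynomials at $x = b$, so the plaintext space collapses to the integers modulo $\Phi_m(b)$:

$$ \mathbb{Z}[x]/(\Phi_m(x),\, x - b) \;\cong\; \mathbb{Z}/\Phi_m(b)\mathbb{Z}, \qquad \text{e.g. } \Phi_6(x) = x^2 - x + 1 \implies \Phi_6(2^{32}) = 2^{64} - 2^{32} + 1. $$

Choosing a higher-degree $t(x)$, as GBFV does, instead yields a plaintext ring with multiple slots, recovering BFV-style SIMD packing.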
CHVote Protocol Specification
This document provides a self-contained, comprehensive, and fully-detailed specification of a new cryptographic voting protocol designed for political elections in Switzerland. The document describes every relevant aspect and every necessary technical detail of the computations and communications performed by the participants during the protocol execution. To support the general understanding of the cryptographic protocol, the document accommodates the necessary mathematical and cryptographic background information. By providing this information to the maximal possible extent, it serves as an ultimate companion document for the developers in charge of implementing this system. It may also serve as a manual for developers trying to implement an independent election verification software. The decision to make this document public also enables implementations by third parties, for example by students trying to develop a clone of the system for scientific evaluations or to implement protocol extensions to achieve additional security properties. In any case, the target audience of this document comprises system designers, software developers, and cryptographic experts.
RoK, Paper, SISsors – Toolkit for Lattice-based Succinct Arguments
Lattice-based succinct arguments allow one to prove bounded-norm satisfiability of relations, such as $f(\vec{s}) = \vec{t} \bmod q$ and $\|\vec{s}\|\leq \beta$, over specific cyclotomic rings $\mathcal{O}_\mathcal{K}$, with proof size polylogarithmic in the witness size. However, state-of-the-art protocols require either 1) a super-polynomial size modulus $q$ due to a soundness gap in the security argument, or 2) a verifier which runs in time linear in the witness size. Furthermore, construction techniques often rely on specific choices of $\mathcal{K}$ which are not mutually compatible. In this work, we exhibit a diverse toolkit for constructing efficient lattice-based succinct arguments:
(i) We identify new subtractive sets for general cyclotomic fields $\mathcal{K}$ and their maximal real subfields $\mathcal{K}^+$, which are useful as challenge sets, e.g. in arguments for exact norm bounds.
(ii) We construct modular, verifier-succinct reductions of knowledge for the bounded-norm satisfiability of structured-linear/inner-product relations, without any soundness gap, under the vanishing SIS assumption, over any $\mathcal{K}$ which admits polynomial-size subtractive sets.
(iii) We propose a framework to use twisted trace maps, i.e. maps of the form $\tau(z) = \frac{1}{N} \cdot \mathsf{Trace}_{\mathcal{K}/\mathbb{Q}}( \alpha \cdot z )$, to embed $\mathbb{Z}$-inner-products as $\mathcal{R}$-inner-products for some structured subrings $\mathcal{R} \subseteq \mathcal{O}_\mathcal{K}$ whenever the conductor has a square-free odd part.
(iv) We present a simple extension of our reductions of knowledge for proving the consistency between the coefficient embedding and the Chinese Remainder Transform (CRT) encoding of $\vec{s}$ over any cyclotomic field $\mathcal{K}$ with a smooth conductor, based on a succinct decomposition of the CRT map into automorphisms, and a new, simple succinct argument for proving automorphism relations.
Combining all techniques, we obtain, for example, verifier-succinct arguments for proving that $\vec{s}$ satisfying $f(\vec{s}) = \vec{t} \bmod q$ has binary coefficients, without soundness gap and with polynomial-size modulus $q$.
PrivQuant: Communication-Efficient Private Inference with Quantized Network/Protocol Co-Optimization
Private deep neural network (DNN) inference based on secure two-party computation (2PC) enables secure privacy protection for both the server and the client. However, existing secure 2PC frameworks suffer from a high inference latency due to enormous communication. Since the communication of both linear and non-linear DNN layers decreases with the bit widths of weights and activations, in this paper we propose PrivQuant, a framework that jointly optimizes the 2PC-based quantized inference protocols and the network quantization algorithm, enabling communication-efficient private inference. PrivQuant proposes DNN architecture-aware optimizations for the 2PC protocols for communication-intensive quantized operators and conducts graph-level operator fusion for communication reduction. Moreover, PrivQuant also develops a communication-aware mixed precision quantization algorithm to improve the inference efficiency while maintaining high accuracy. The network/protocol co-optimization enables PrivQuant to outperform prior-art 2PC frameworks. With extensive experiments, we demonstrate that PrivQuant reduces communication by $11\times$, $2.5\times$, and $2.8\times$, which results in $8.7\times$, $1.8\times$, and $2.4\times$ latency reduction compared with SiRNN, COINN, and CoPriv, respectively.
Ring Ring! Who's There? A Privacy Preserving Mobile Number Search
Private set intersection (PSI) allows two parties (say, a client and a server) to jointly compute the intersection of their sets without revealing anything else. Fully homomorphic encryption (FHE)-based PSI is a cryptographic solution for implementing PSI-based protocols. Most FHE-based PSI protocols follow a hash-function or oblivious-transfer approach. Their main limitations are 1) high communication complexity, namely $O(x \log y)$ (where $x$ is the total number of elements on the client side and $y$ is the total number of elements on the server side), and 2) high memory usage due to SIMD packing for encrypting large-digit numbers. In this work, we design a novel tree-based approach to store large-digit numbers that achieves lower communication complexity, namely $O(|d|^{2})$ (where $|d|$ is the number of digits of a mobile number). We then implement our protocol using the TenSEAL library. Our protocol opens the door to finding common elements with less communication and less memory usage.
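To fix ideas, the tree-based storage of digit strings can be pictured as a plaintext digit trie (our illustration of the data structure only; the paper's protocol traverses such a structure obliviously under FHE):

    class DigitTrie:
        # Stores phone numbers digit by digit: a lookup touches one node per
        # digit, i.e., O(|d|) nodes for a |d|-digit number
        def __init__(self):
            self.children = {}
            self.is_end = False

        def insert(self, number: str):
            node = self
            for digit in number:
                node = node.children.setdefault(digit, DigitTrie())
            node.is_end = True

        def contains(self, number: str) -> bool:
            node = self
            for digit in number:
                if digit not in node.children:
                    return False
                node = node.children[digit]
            return node.is_end

    trie = DigitTrie()
    trie.insert("9876543210")
    assert trie.contains("9876543210") and not trie.contains("9876543211")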
Regev's attack on hyperelliptic cryptosystems
Hyperelliptic curve cryptography (HECC) is a candidate for standardization and a competitive alternative to elliptic curve cryptography (ECC). We extend Regev's algorithm to this setting. For genus-two curves relevant to cryptography, this yields a quantum attack up to nine times faster than the state-of-the-art. This implies that HECC is slightly weaker than ECC. In a more theoretical direction, we show that Regev's algorithm obtains its full speedup with respect to Shor's when the genus is high, a setting which is already known to be inadequate for cryptography.
On the Traceability of Group Signatures: Uncorrupted User Must Exist
Group signature (GS) is a well-known cryptographic primitive providing anonymity and traceability. Several implication results have been given, mainly focusing on the various security levels of anonymity, e.g., fully anonymous GS implies public-key encryption (PKE), selfless anonymous GS can be constructed from one-way functions and non-interactive zero-knowledge proofs, and so on. In this paper, we explore a winning condition of full traceability: an adversary is required to produce a valid group signature whose opening result is an uncorrupted user. We demonstrate a generic construction of GS, secure in the Bellare-Micciancio-Warinschi (BMW) model except for the above condition, from PKE only. We emphasize that the proposed construction is quite artificial and meaningless in practice because the verification algorithm always outputs 1 regardless of the input. This result suggests that the winning condition is essential to full traceability, i.e., an uncorrupted user must exist. We also explore the public verifiability of the GS-based PKE scheme and introduce a new formal security definition of public verifiability following BUFF (Beyond UnForgeability Features) security. Our definition guarantees that the decryption result of a valid ciphertext is in the message space specified by the public key. We show that the GS-based PKE scheme is publicly verifiable if the underlying GS scheme is fully traceable.