
Carlisle Adams · Jan Camenisch (Eds.)

Selected Areas in Cryptography – SAC 2017

LNCS 10719

24th International Conference, Ottawa, ON, Canada, August 16–18, 2017
Revised Selected Papers

  

Lecture Notes in Computer Science 10719

  Commenced Publication in 1973 Founding and Former Series Editors: Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen

  Editorial Board

  David Hutchison Lancaster University, Lancaster, UK

  Takeo Kanade Carnegie Mellon University, Pittsburgh, PA, USA

  Josef Kittler University of Surrey, Guildford, UK

  Jon M. Kleinberg Cornell University, Ithaca, NY, USA

  Friedemann Mattern ETH Zurich, Zurich, Switzerland

  John C. Mitchell Stanford University, Stanford, CA, USA

  Moni Naor Weizmann Institute of Science, Rehovot, Israel

  C. Pandu Rangan Indian Institute of Technology, Madras, India

  Bernhard Steffen TU Dortmund University, Dortmund, Germany

  Demetri Terzopoulos University of California, Los Angeles, CA, USA

  Doug Tygar University of California, Berkeley, CA, USA

Gerhard Weikum Max Planck Institute for Informatics, Saarbrücken, Germany

Carlisle Adams · Jan Camenisch (Eds.)

Selected Areas in Cryptography – SAC 2017
24th International Conference
Ottawa, ON, Canada, August 16–18, 2017
Revised Selected Papers

Editors

Carlisle Adams
School of Electrical Engineering and Computer Science (SITE)
University of Ottawa
Ottawa, ON, Canada

Jan Camenisch
IBM Research - Zurich
Rueschlikon, Switzerland

ISSN 0302-9743          ISSN 1611-3349 (electronic)
Lecture Notes in Computer Science
ISBN 978-3-319-72564-2          ISBN 978-3-319-72565-9 (eBook)
https://doi.org/10.1007/978-3-319-72565-9

Library of Congress Control Number: 2017962894

LNCS Sublibrary: SL4 – Security and Cryptology

© Springer International Publishing AG 2018

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Printed on acid-free paper

This Springer imprint is published by Springer Nature
The registered company is Springer International Publishing AG

  

Preface

  The Conference on Selected Areas in Cryptography (SAC) is the leading Canadian venue for the presentation and publication of cryptographic research. The 24th annual SAC was held this year at the University of Ottawa, Ontario (for the second time; the first was in 2007). In keeping with its tradition, SAC 2017 offered a relaxed and collegial atmosphere for researchers to present and discuss new results.

SAC has three regular themes:

• Design and analysis of symmetric key primitives and cryptosystems, including block and stream ciphers, hash functions, MAC algorithms, and authenticated encryption schemes
• Efficient implementations of symmetric and public key algorithms
• Mathematical and algorithmic aspects of applied cryptology

The special (or focus) theme for this year was:

• Post-quantum cryptography

A total of 66 submissions were received, out of which the Program Committee selected 23 papers for presentation. It is our pleasure to thank the authors of all the submissions for the high quality of their work. The review process was thorough (each submission received the attention of at least three reviewers, and at least five for submissions involving a Program Committee member).

  There were two invited talks. The Stafford Tavares Lecture was given by Helena Handschuh, who presented “Test Vector Leakage Assessment Methodology: An Update,” and the second invited talk was given by Chris Peikert, who presented “Lattice Cryptography: From Theory to Practice, and Back Again.”

This year, SAC hosted what is now the third iteration of the SAC Summer School (S3). S3 is intended to be a place where young researchers can increase their knowledge of cryptography through instruction by, and interaction with, leading researchers.

This year, we were fortunate to have Michele Mosca, Douglas Stebila, and David Jao presenting post-quantum cryptographic algorithms, Tanja Lange and Daniel J. Bernstein presenting public key cryptographic algorithms, and Orr Dunkelman presenting symmetric key cryptographic algorithms. We would like to express our sincere gratitude to these six presenters for dedicating their time and effort to what has become a highly anticipated and highly beneficial event for all participants.

  Finally, the members of the Program Committee, especially the co-chairs, would like to thank the additional reviewers, who gave generously of their time to assist with the paper review process. We are also very grateful to our sponsors, Microsoft and Communications Security Establishment, whose enthusiastic support (both financial and otherwise) greatly contributed to the success of SAC this year.

  October 2017 Jan Camenisch

  SAC 2017

  The 24th Annual Conference on Selected Areas in Cryptography Ottawa, Ontario, Canada, August 16–18, 2017

  Program Chairs

Carlisle Adams    University of Ottawa, Canada
Jan Camenisch     IBM Research - Zurich, Switzerland

  Program Committee

Carlisle Adams (Co-chair)    University of Ottawa, Canada
Shashank Agrawal             Visa Research, USA
Elena Andreeva               COSIC, KU Leuven, Belgium
Kazumaro Aoki                NTT, Japan
Jean-Philippe Aumasson       Kudelski Security, Switzerland
Roberto Avanzi               ARM, Germany
Manuel Barbosa               HASLab - INESC TEC and FCUP, Portugal
Paulo Barreto                University of São Paulo, Brazil
Andrey Bogdanov              Technical University of Denmark, Denmark
Billy Brumley                Tampere University of Technology, Finland
Jan Camenisch (Co-chair)     IBM Research - Zurich, Switzerland
Itai Dinur                   Ben-Gurion University, Israel
Maria Dubovitskaya           IBM Research - Zurich, Switzerland
Guang Gong                   University of Waterloo, Canada
Johann Groszschaedl          University of Luxembourg, Luxembourg
Tim Güneysu                  University of Bremen and DFKI, Germany
M. Anwar Hasan               University of Waterloo, Canada
Howard Heys                  Memorial University, Canada
Laurent Imbert               CNRS, LIRMM, Université Montpellier 2, France
Michael Jacobson             University of Calgary, Canada
Elif Bilge Kavun             Infineon Technologies AG, Germany
Stephan Krenn                Austrian Institute of Technology GmbH, Austria
Juliane Krämer               Technische Universität Darmstadt, Germany
Thijs Laarhoven              IBM Research - Zurich, Switzerland
Gaëtan Leurent               Inria, France
Petr Lisonek                 Simon Fraser University, Canada
María Naya-Plasencia         Inria, France
Francesco Regazzoni          ALaRI - USI, Switzerland
Palash Sarkar                Indian Statistical Institute, India
Joern-Marc Schmidt           Secunet Security Networks AG, Germany


Kyoji Shibutani              Sony Corporation, Japan
Francesco Sica               Nazarbayev University, Kazakhstan
Daniel Slamanig              Graz University of Technology, Austria
Meltem Sonmez Turan          National Institute of Standards and Technology, USA
Michael Tunstall             Cryptography Research, USA
Vanessa Vitse                Université Joseph Fourier - Grenoble I, France
Bo-Yin Yang                  Academia Sinica, Taiwan
Amr Youssef                  Concordia University, Canada

  Additional Reviewers

Ahmed Abdel Khalek, Cecilia Boschini, Cagdas Calik, André Chailloux, Jie Chen, Yao Chen, Deirdre Connolly, Rafaël Del Pino, Christoph Dobraunig, Benedikt Driessen, Léo Ducas, Maria Eichlseder, Guillaume Endignoux, Tommaso Gagliardoni, Romain Gay, Florian Goepfert, Michael Hamburg, Harunaga Hiwatari, Akinori Hosoyamada, Andreas Hülsing, Takanori Isobe, Thorsten Kleinjung, Moon Sung Lee, Aaron Lye, Kalikinkar Mandal, Oliver Mischke, Nicky Mouha, Christophe Negre, Tobias Oder, Towa Patrick, Cesar Pereida Garcia, Peter Pessl, Thomas Pöppelmann, Sebastian Ramacher, Tobias Schneider, André Schrottenloher, Gregor Seiler, Sohaib Ul Hassan, Christoph Striecks, Cihangir Tezcan, David Thomson, Jean-Pierre Tillich, Yosuke Todo, Mohamed Tolba, Nicola Tuveri, Christine van Vredendaal, David J. Wu, Jiming Xu, Randy Yee, Wenying Zhang

  

Contents

Discrete Logarithms
  Hairong Yi, Yuqing Zhu, and Dongdai Lin

Key Agreement
  Reza Azarderakhsh, David Jao, and Christopher Leonardi
  Brian Koziel, Reza Azarderakhsh, and David Jao

Theory
  Laurent Grémy, Aurore Guillevic, François Morain, and Emmanuel Thomé
  Bailey Kacsmar, Sarah Plosker, and Ryan Henry

Efficient Implementation
  Riham AlTawy, Raghvendra Rohit, Morgan He, Kalikinkar Mandal, Gangqiang Yang, and Guang Gong
  Jean-Claude Bajard, Julien Eynard, Anwar Hasan, Paulo Martins, Leonel Sousa, and Vincent Zucca
  Thomaz Oliveira, Julio López, Hüseyin Hışıl, Armando Faz-Hernández, and Francisco Rodríguez-Henríquez
  Markku-Juhani O. Saarinen

Public Key Encryption
  Koichiro Akiyama, Yasuhiro Goto, Shinya Okumura, Tsuyoshi Takagi, Koji Nuida, and Goichiro Hanaoka
  Daniel J. Bernstein, Chitchanok Chuengsatiansup, Tanja Lange, and Christine van Vredendaal

Signatures
  Edward Eaton
  Amir Jalali, Reza Azarderakhsh, and Mehran Mozaffari-Kermani
  Leon Groot Bruinderink and Andreas Hülsing

Cryptanalysis
  Gustavo Banegas and Daniel J. Bernstein
  Robin Kwant, Tanja Lange, and Kimberley Thissen
  Ray Perlner, Albrecht Petzoldt, and Daniel Smith-Tone
  Xavier Bonnetain
  Daniel P. Martin, Ashley Montanaro, Elisabeth Oswald, and Dan Shepherd
  Mohamed Tolba, Ahmed Abdelkhalek, and Amr M. Youssef
  Xinping Zhou, Carolyn Whitnall, Elisabeth Oswald, Degang Sun, and Zhu Wang

Author Index


Discrete Logarithms

  

Second Order Statistical Behavior of LLL and BKZ

Yang Yu¹ and Léo Ducas²

¹ Department of Computer Science and Technology, Tsinghua University, Beijing, China
  y-y13@mails.tsinghua.edu.cn
² Cryptology Group, CWI, Amsterdam, The Netherlands
  ducas@cwi.nl

Abstract. The LLL algorithm (from Lenstra, Lenstra and Lovász) and its generalization BKZ (from Schnorr and Euchner) are widely used in cryptanalysis, especially for lattice-based cryptography. Precisely understanding their behavior is crucial for deriving appropriate key-sizes for cryptographic schemes subject to lattice-reduction attacks. Current models, e.g. the Geometric Series Assumption and Chen-Nguyen's BKZ-simulator, have provided a decent first-order analysis of the behavior of LLL and BKZ. However, they only focused on the average behavior and were not perfectly accurate. In this work, we initiate a second order analysis of this behavior. We confirm and quantify discrepancies between models and experiments, in particular in the head and tail regions, and study their consequences. We also provide variations around the mean and correlations statistics, and study their impact. While mostly based on experiments, by pointing at and quantifying unaccounted phenomena, our study sets the ground for a theoretical and predictive understanding of LLL and BKZ performances at the second order.

Keywords: Lattice reduction · LLL · BKZ · Cryptanalysis · Statistics

1 Introduction

Lattice reduction is a powerful algorithmic tool for solving a wide range of problems, ranging from integer optimization problems to problems from algebra or number theory. Lattice reduction has played a role in the cryptanalysis of cryptosystems not directly related to lattices, and is now even more relevant to quantifying the security of lattice-based cryptosystems [ ].

The goal of lattice reduction is to find a basis with short and nearly orthogonal vectors. In 1982, the first polynomial time lattice reduction algorithm, LLL [ ], was invented by Lenstra, Lenstra and Lovász. Then, the idea of block-wise reduction appeared and several block-wise lattice reduction algorithms [ ] were proposed successively. Currently, BKZ is the most practical lattice reduction algorithm. Schnorr and Euchner first put forward the original BKZ algorithm in [ ]. It is subject to many heuristic optimizations.


All such improvements have been combined in the so-called BKZ 2.0 algorithm of Chen and Nguyen [ ] (the progressive strategy was improved further in later work [ ]). Many studies of BKZ algorithms have been made to explore and predict the performance of BKZ algorithms, which provide rough security estimations for lattice-based cryptography.

Despite their popularity, the behavior of lattice reduction algorithms is still not completely understood. While there are reasonable models (e.g. the Geometric Series Assumption [ ]), there are few studies on the experimental statistical behavior of those algorithms, and they considered rather outdated versions of those algorithms [ ]. The accuracy of the current model remains unclear.

This state of affairs makes it quite problematic to evaluate accurately the concrete security level of lattice-based cryptosystem proposals. With the recent call for post-quantum schemes by NIST, this matter seems pressing.

Our Contribution. In this work, we partially address this matter by proposing a second-order statistical analysis (for random input bases) of the behavior of reduction algorithms in practice, qualitatively and quantitatively. We figure out one more low order term in the predicted average value of several quantities such as the root Hermite factor. Also, we investigate the variation around the average behavior, a legitimate concern raised by Micciancio and Walter [ ].

In more detail, we experimentally study the logarithms of ratios between two adjacent Gram-Schmidt norms in LLL-reduced and BKZ-reduced bases (denoted r_i's below). We highlight three ranges for the statistical behavior of the r_i's: the head (i ≤ h), the body (h < i < n − t) and the tail (i ≥ n − t). The lengths of the head and tail are essentially determined by the blocksize β. In the body range, the statistical behaviors of the r_i's are similar: this does not only provide new support for the so-called Geometric Series Assumption [ ] when β ≪ n, but also a refinement of it that remains applicable when β ≪ n fails to hold. We note in particular that the impact of the head on the root Hermite factor is much stronger than the impact of the tail.

We also study the variance and the covariance between the r_i's. We observe a local correlation between the r_i's. More precisely, we observe that r_i and r_{i+1} are negatively correlated, inducing a self-stabilizing behavior of those algorithms: the overall variance is less than the sum of local variances.

Then, we measure the half-volume, i.e. ∏_{i=1}^{⌊n/2⌋} ‖b_i^*‖, a quantity determining the cost of enumeration on a reduced basis. By expressing the half-volume using the statistics of the r_i's, we determine that the complexity of enumeration on a BKZ-reduced basis should be of the form 2^{an² ± bn^{1.5}}: the variation around the average (denoted by ±) can impact the speed of enumeration by a super-exponential factor.
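For illustration, here is a minimal numpy sketch (not the scripts used for our experiments) that computes the r_i's and the log half-volume from a vector of Gram-Schmidt norms; the toy profile below is a stand-in for the output of an actual reduction.

```python
# Illustrative only: given Gram-Schmidt norms ||b_1^*||, ..., ||b_n^*|| of a
# reduced basis, compute the r_i statistics and the (log2) half-volume that
# governs the enumeration cost discussed above.
import numpy as np

def r_stats(gs_norms):
    """r_i = ln(||b_i^*|| / ||b_{i+1}^*||) for i = 1, ..., n-1."""
    log_norms = np.log(np.asarray(gs_norms, dtype=float))
    return log_norms[:-1] - log_norms[1:]

def log2_half_volume(gs_norms):
    """log2 of prod_{i <= n/2} ||b_i^*||, a proxy for the enumeration cost."""
    log_norms = np.log2(np.asarray(gs_norms, dtype=float))
    half = len(log_norms) // 2
    return float(np.sum(log_norms[:half]))

# Toy geometric (GSA-like) profile standing in for a reduced basis.
n, ratio = 100, 1.02
gs_norms = np.array([ratio ** (n - i) for i in range(n)])
print(r_stats(gs_norms)[:5], log2_half_volume(gs_norms))
```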

Finally, we compare our experiments with the BKZ simulator of Chen and Nguyen [ ],¹ and conclude that the simulator can predict the body of the profile and the tail phenomenon qualitatively and quantitatively, but the head phenomenon is not captured. Thus it is necessary to revise the security estimation and refine the simulator.

¹ The variance statistics are not comparable to the simulator [ ], whose results are "deterministic", in the sense that the simulator's result starting on the Hermite [...]

Impact. Our work points at several inaccuracies of the current models for the behavior of LLL and BKZ, and quantifies them experimentally. It should be noted that our measured statistics are barely enough to address the question of precise prediction. Many tweaks on those algorithms are typically applied (more aggressive pruning, more subtle progressive reductions, ...) to accelerate them, and that would impact those statistics. On the other hand, the optimal parametrization of heuristic tweaks is very painful to reproduce, and not even clearly determined in the literature. We therefore find it preferable to first approach stable versions of those algorithms, and minimize the space of parameters.

We would also not dare to simply guess extrapolation models for those statistics to larger blocksizes: this should be the topic of a more theoretical study. Yet, by pointing out precisely the problematic phenomena, we set the ground for revised models and simulators: our reported statistics can be used to sanity check such future models and simulators.

Source code. Our experiments heavily rely on the latest improvements of the open-source library fplll [ ], catching up with the state-of-the-art algorithm BKZ 2.0. For convenience, we used the python wrapper fpylll [ ] for fplll, making our scripts reasonably concise and readable. All our scripts are open-source and available [ ], for reviewing, reproduction or extension purposes.

2 Preliminaries

We refer to [ ] for an introduction to the behavior of LLL and BKZ.

2.1 Notations and Basic Definitions

All vectors are denoted by bold lower case letters and are to be read as row vectors. Matrices are denoted by bold capital letters. We write a matrix B as B = (b_1, ..., b_n) where b_i is the i-th row vector of B. If B ∈ R^{n×m} has full rank n, the lattice L generated by the basis B is denoted by L(B) = {xB | x ∈ Z^n}. We denote by (b_1^*, ..., b_n^*) the Gram-Schmidt orthogonalization of the matrix (b_1, ..., b_n). For i ∈ {1, ..., n}, we define the orthogonal projection to the span of (b_1, ..., b_{i−1})^⊥ as π_i. For 1 ≤ i < j ≤ n, we denote by B_{[i,j]} the local block (π_i(b_i), ..., π_i(b_j)), and by L_{[i,j]} the lattice generated by B_{[i,j]}.

The Euclidean norm of a vector v is denoted by ‖v‖. The volume of a lattice L(B) is vol(L(B)) = ∏_i ‖b_i^*‖, which is an invariant of the lattice. The first minimum of a lattice L is the length of a shortest non-zero vector, denoted by λ_1(L). We use the shorthands vol(B) = vol(L(B)) and λ_1(B) = λ_1(L(B)).

Given a random variable X, we denote by E(X) its expectation and by Var(X) its variance. Also we denote by Cov(X, Y) the covariance between two random variables X and Y. Let X = (X_1, ..., X_n) be a vector formed by random variables; its covariance matrix is defined by Cov(X) = (Cov(X_i, X_j))_{i,j}.
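As a concrete illustration of these notations, the following short numpy sketch (a toy example of our own, not taken from our experiment scripts) computes the Gram-Schmidt vectors b_i^*, their norms, and the volume vol(L(B)) = ∏_i ‖b_i^*‖ of a small integer basis.

```python
# Toy illustration of Sect. 2.1: Gram-Schmidt orthogonalization of a row
# basis B, the norms ||b_i^*||, and vol(L(B)) = prod_i ||b_i^*|| = |det B|.
import numpy as np

def gram_schmidt(B):
    """Return the (unnormalized) Gram-Schmidt vectors b_i^* of the rows of B."""
    B = np.asarray(B, dtype=float)
    Bstar = np.zeros_like(B)
    for i in range(B.shape[0]):
        Bstar[i] = B[i]
        for j in range(i):
            mu = (B[i] @ Bstar[j]) / (Bstar[j] @ Bstar[j])
            Bstar[i] = Bstar[i] - mu * Bstar[j]
    return Bstar

B = np.array([[4, 1, 2], [1, 5, 1], [2, 1, 6]])
Bstar = gram_schmidt(B)
gs_norms = np.linalg.norm(Bstar, axis=1)
print(gs_norms)                                   # ||b_1^*||, ||b_2^*||, ||b_3^*||
print(np.prod(gs_norms), abs(np.linalg.det(B)))   # both equal vol(L(B))
```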

2.2 Lattice Reduction: In Theory and in Practice

We now recall the definitions of LLL and BKZ reduction. A basis B is LLL-reduced with parameter δ ∈ (1/2, 1] if:

1. |μ_{i,j}| ≤ 1/2 for 1 ≤ j < i ≤ n, where μ_{i,j} = ⟨b_i, b_j^*⟩ / ⟨b_j^*, b_j^*⟩ are the Gram-Schmidt orthogonalization coefficients;
2. δ · ‖b_i^*‖ ≤ ‖b_{i+1}^* + μ_{i+1,i} b_i^*‖, for 1 ≤ i < n.

A basis B is BKZ-reduced with parameter β ≥ 2 and δ ∈ (1/2, 1] if:

1. |μ_{i,j}| ≤ 1/2 for 1 ≤ j < i ≤ n;
2. δ · ‖b_i^*‖ ≤ λ_1(L_{[i,min(i+β−1,n)]}), for 1 ≤ i < n.
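The following numpy sketch (our own illustration, not code from our experiments) checks the two LLL conditions on a small row basis, using the unsquared Lovász condition with δ ∈ (1/2, 1] exactly as stated above; checking the BKZ condition would additionally require computing λ_1 of each projected block, which is omitted here.

```python
# Toy check of the LLL conditions of Sect. 2.2 (a sketch, assuming the
# unsquared Lovasz condition with delta in (1/2, 1] as in the text above).
import numpy as np

def gso(B):
    """Gram-Schmidt vectors b_i^* and coefficients mu[i, j]."""
    B = np.asarray(B, dtype=float)
    n = B.shape[0]
    Bstar = np.zeros_like(B)
    mu = np.zeros((n, n))
    for i in range(n):
        Bstar[i] = B[i]
        for j in range(i):
            mu[i, j] = (B[i] @ Bstar[j]) / (Bstar[j] @ Bstar[j])
            Bstar[i] = Bstar[i] - mu[i, j] * Bstar[j]
    return Bstar, mu

def is_lll_reduced(B, delta=np.sqrt(0.99)):
    Bstar, mu = gso(B)
    n = len(Bstar)
    size_reduced = all(abs(mu[i, j]) <= 0.5 + 1e-9
                       for i in range(n) for j in range(i))
    lovasz = all(delta * np.linalg.norm(Bstar[i])
                 <= np.linalg.norm(Bstar[i + 1] + mu[i + 1, i] * Bstar[i]) + 1e-9
                 for i in range(n - 1))
    return size_reduced and lovasz

print(is_lll_reduced(np.array([[1, 1, 1], [-1, 0, 2], [3, 5, 6]])))
```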

Note that we follow the definition of BKZ reduction from [ ], which is a little different from the first notion proposed by Schnorr [ ]. We also recall that, as proven in [ ], LLL is equivalent to BKZ_2. Typically, LLL and BKZ are used with Lovász parameter δ = √0.99, and so will we.

For high dimensional lattices, running BKZ with a large blocksize is very expensive. Heuristic improvements were developed, and combined by Chen and Nguyen into BKZ 2.0 [ ].³ In this paper, we report on pure BKZ behavior to avoid perturbations due to heuristics whenever possible. Yet we switch to BKZ 2.0 to reach larger blocksizes when deemed relevant.

³ Further improvements were recently put forward [ ], but are beyond the scope of this work.

The two main improvements in BKZ 2.0 are called early-abort and pruned enumeration [ ]. As proven in [ ], the output basis of the BKZ algorithm with blocksize β would be of a good enough quality after C · (n²/β²)(log n + log log(max_i ‖b_i‖ / vol(L)^{1/n})) tours, where C is a small constant. In our experiments with BKZ 2.0, we chose different C and observed its effect on the final basis. We also applied the pruning heuristic (see [ ] for details) to speed up enumeration, but chose a conservative success probability (95%) without re-randomization to avoid altering the quality of the output. The preprocessing-pruning strategies were optimized using the strategizer [ ] of fplll/fpylll.

Given a basis B of an n-dimensional lattice L, we denote by rhf(B) the root Hermite factor of B, defined by rhf(B) = (‖b_1‖ / vol(L)^{1/n})^{1/n}. The root Hermite factor is a common measurement of the reducedness of a basis, e.g. [ ].

Let us define the sequence {r_i(B)}_{1≤i≤n−1} of an n-dimensional lattice basis B = (b_1, ..., b_n) such that r_i(B) = ln(‖b_i^*‖ / ‖b_{i+1}^*‖). The root Hermite factor rhf(B) can be expressed in terms of the r_i(B)'s:

    rhf(B) = exp( (1/n²) · Σ_{1≤i≤n−1} (n − i) r_i(B) ).    (1)

Intuitively, the sequence {r_i(B)}_{1≤i≤n−1} characterizes how fast the sequence {‖b_i^*‖} decreases. Thus Eq. (1) provides an implication between the fact that the ‖b_i^*‖'s don't decrease too fast and the fact that the root Hermite factor is small. For reduced bases, the r_i(B)'s satisfy certain theoretical upper bounds. However, it is well known that, experimentally, the r_i(B)'s tend to be much smaller than the theoretical bounds in practice.
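As a sanity check of Eq. (1), the following numpy sketch (our own illustration) computes the root Hermite factor both directly from its definition and through the r_i's, on an arbitrary Gram-Schmidt profile; the two values agree up to floating-point error.

```python
# Sanity-check of Eq. (1) on an arbitrary Gram-Schmidt log-profile
# (illustrative; any vector of positive norms works here).
import numpy as np

def rhf_direct(gs_norms):
    """rhf(B) = (||b_1|| / vol(L)^{1/n})^{1/n}, using ||b_1|| = ||b_1^*||."""
    log_norms = np.log(np.asarray(gs_norms, dtype=float))
    n = len(log_norms)
    log_vol = float(np.sum(log_norms))
    return float(np.exp((log_norms[0] - log_vol / n) / n))

def rhf_from_r(gs_norms):
    """rhf(B) = exp( (1/n^2) * sum_i (n - i) * r_i ), as in Eq. (1)."""
    log_norms = np.log(np.asarray(gs_norms, dtype=float))
    n = len(log_norms)
    r = log_norms[:-1] - log_norms[1:]      # r_1, ..., r_{n-1}
    weights = n - np.arange(1, n)           # n - i for i = 1, ..., n-1
    return float(np.exp(np.dot(weights, r) / n**2))

rng = np.random.default_rng(0)
gs_norms = np.exp(rng.normal(size=80).cumsum()[::-1])  # random positive profile
print(rhf_direct(gs_norms), rhf_from_r(gs_norms))       # the two values match
```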

From a practical perspective, we are more interested in the behavior of the r_i(B)'s for random lattices. The standard notion of random real lattices of given volume is based on Haar measures of classical groups. As shown in [ ], the uniform distribution over integer lattices of volume V converges to the distribution of random lattices of unit volume, as V grows to infinity. In our experiments, we followed the sampling procedure of the lattice challenges [ ]: its volume is a random prime of bit-length 10n and its Hermite normal form (see [ ] for details) is sampled uniformly once its volume is determined. Also, we define a random LLL (resp. BKZ_β)-reduced basis as the basis output by LLL (resp. BKZ_β) applied to a random lattice given by its Hermite normal form, as described above. To speed up convergence, following a simplified progressive strategy [ ], we run BKZ (resp. BKZ 2.0) with blocksize β = 2, 4, 6, ... (resp. β = 2, 6, 10, ...) progressively from the Hermite normal form of a lattice.
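As an illustration of this pipeline, here is a rough fpylll-based sketch under stated assumptions: the Goldstein-Mayer-style Hermite normal form below is one plausible realization of the sampling just described (not necessarily the exact challenge generator), sympy.nextprime is merely a convenient primality routine, and the progressive loop calls plain LLL.reduction/BKZ.reduction rather than our actual scripts.

```python
# Rough, illustrative sketch of the experimental setup (assumptions flagged
# above): sample an HNF with prime volume of ~10n bits, then reduce it
# progressively with blocksizes 2, 4, 6, ...
import math
import random
from fpylll import IntegerMatrix, LLL, BKZ, GSO
from sympy import nextprime   # assumption: any primality source would do

def random_hnf(n, seed=0):
    rnd = random.Random(seed)
    p = nextprime(rnd.getrandbits(10 * n))   # random prime of about 10n bits
    rows = [[0] * n for _ in range(n)]
    rows[0][0] = p                           # first row (p, 0, ..., 0)
    for i in range(1, n):
        rows[i][0] = rnd.randrange(p)        # row i = (x_i, 0, ..., 1, ..., 0)
        rows[i][i] = 1
    return IntegerMatrix.from_matrix(rows)

def progressive_bkz(A, beta_max):
    LLL.reduction(A)
    for beta in range(2, beta_max + 1, 2):   # beta = 2, 4, 6, ...
        BKZ.reduction(A, BKZ.Param(block_size=beta))
    return A

A = progressive_bkz(random_hnf(60), beta_max=10)
M = GSO.Mat(A)
M.update_gso()
# get_r(i, i) is the squared Gram-Schmidt norm, so halve its logarithm.
gs_log_norms = [0.5 * math.log(M.get_r(i, i)) for i in range(A.nrows)]
r = [gs_log_norms[i] - gs_log_norms[i + 1] for i in range(A.nrows - 1)]
```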

We treat the r_i(B)'s as random variables (under the randomness of the lattice basis before reduction). For any i ∈ {1, ..., n − 1}, we denote by r_i(β, n) the random variable r_i(β, n) = r_i(B), where B is a random BKZ_β-reduced basis, and by D_i(β, n) the distribution of r_i(β, n). When β and n are clear from context, we simply write r_i for r_i(β, n).

2.3 Heuristics on Lattice Reduction

Gaussian Heuristic. The Gaussian Heuristic, denoted by GAUSS, says that, for "any reasonable" subset K of the span of the lattice L, the number of lattice points inside K is approximately vol(K)/vol(L). Let the volume of the n-dimensional unit ball be V_n(1) = π^{n/2} / Γ(n/2 + 1). A prediction derived from GAUSS is that λ_1(L) ≈ vol(L)^{1/n} · GH(n), where GH(n) = V_n(1)^{−1/n}, which is accurate for random lattices. As suggested in [ ], GAUSS is a valuable heuristic to estimate the cost and quality of various lattice algorithms.
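For reference, a small Python sketch (our own, not part of the experiments) evaluating GH(n) = V_n(1)^{−1/n} via the log-Gamma function, so that it stays numerically stable for large n.

```python
# Gaussian-Heuristic constant GH(n) = V_n(1)^{-1/n}, computed in log-space
# to avoid overflow of Gamma(n/2 + 1) for large n.
import math

def gh(n):
    """GH(n) such that lambda_1(L) ~ vol(L)^{1/n} * GH(n) for random lattices."""
    log_ball_volume = (n / 2) * math.log(math.pi) - math.lgamma(n / 2 + 1)
    return math.exp(-log_ball_volume / n)

print([round(gh(n), 4) for n in (50, 100, 200)])
```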

Random Local Block. In [ ], Chen and Nguyen suggested the following modeling assumption, seemingly accurate for large enough blocksizes:

Assumption 1 (RAND_{n,β}). Let n, β ≥ 2 be integers. For a random BKZ_β-reduced basis of a random n-dimensional lattice, most local block lattices L_{[i,i+β−1]} behave like a random β-dimensional lattice, where i ∈ {1, ..., n + 1 − β}.

By RAND_{n,β} and GAUSS, one can predict the root Hermite factor of local blocks: rhf(B_{[i,i+β−1]}) ≈ GH(β)^{1/β}.

Geometric Series Assumption. In [ ], Schnorr first proposed the Geometric Series Assumption, denoted by GSA, which says that, in a typical reduced basis B, the sequence {‖b_i^*‖}_{1≤i≤n} looks like a geometric series (while GAUSS provides the exact value of this geometric ratio). GSA provides a simple description of Gram-Schmidt norms and then leads to some estimations of the Hermite factor and the enumeration complexity [ ]. When it comes to {r_i(B)}_{1≤i≤n−1}, GSA implies that the r_i(B)'s are supposed to be almost equal to each other. However, GSA is not so perfect, because the first and last ‖b_i^*‖'s usually violate it [ ]. The behavior in the tail is well explained, and can be predicted and simulated [ ].

3 Head and Tail

In [ ], it was already claimed that for a BKZ_β-reduced basis B, GSA doesn't hold at the first and last indices. We call this phenomenon "Head and Tail", and provide detailed experiments. Our experiments confirm that GSA holds in a strong sense in the body of the basis (i.e. outside of the head and tail regions). Precisely, the distributions of the r_i's are similar in that region, not only their averages. We also confirm the violations of GSA in the head and the tail, quantify them, and exhibit that they are independent of the dimension n.

As a conclusion, we shall see that the head and tail have only small impacts on the root Hermite factor when n ≫ β, and that they can be quantitatively handled in that regime. We notice that the head has in fact a stronger impact than the tail, which emphasizes the importance of finding models or simulators that capture this phenomenon, unlike the current ones that only capture the tail [ ].

3.1 Experiments

We ran BKZ on many random input lattices and report on the distribution of each r_i. We first plot the average and the variance of r_i for various blocksizes β and dimensions n in Fig. 1. By superposing, with proper alignment, curves for the same β but various n, we notice that the head and tail behavior doesn't depend on the dimension n, but only on the relative index i (resp. n − i) in the head (resp. the tail). A more formal statement will be provided in Claim 1.

We also note that inside the body (i.e. outside both the head and the tail) the mean and the variance of r_i do not seem to depend on i, and we are tempted to conclude that the distribution itself doesn't depend on i. To give further evidence of this stronger claim, we ran the Kolmogorov-Smirnov test [ ] and confirm this stronger claim.
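The pairwise comparison behind Fig. 2 can be reproduced with a few lines of scipy (a sketch of our own, assuming the per-index samples of r_i are already collected in a 2-D array):

```python
# Pairwise two-sample Kolmogorov-Smirnov tests between the empirical
# distributions D_i(beta, n), as in Fig. 2. `samples[k][i]` is assumed to be
# r_{i+1} of the k-th reduced basis (one row per experiment).
import numpy as np
from scipy.stats import ks_2samp

def ks_matrix(samples, alpha=0.05):
    samples = np.asarray(samples, dtype=float)
    m = samples.shape[1]                     # number of indices i
    passed = np.zeros((m, m), dtype=bool)
    for i in range(m):
        for j in range(m):
            _, pvalue = ks_2samp(samples[:, i], samples[:, j])
            passed[i, j] = pvalue >= alpha   # True: the two distributions look close
    return passed

# Toy usage with synthetic data standing in for 5000 BKZ experiments.
rng = np.random.default_rng(1)
fake = rng.normal(loc=0.05, scale=0.01, size=(5000, 30))
print(ks_matrix(fake).all())
```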

Fig. 1. Average value and standard deviation of r_i as a function of i. Experimental values measured over 5000 samples of random n-dimensional BKZ bases for n = 100, 140. First halves {r_i}_{i≤(n−1)/2} are left-aligned while last halves {r_i}_{i>(n−1)/2} are right-aligned so as to highlight heads and tails. Dashed lines mark indices β and n − β. Plots look similar in blocksize β = 6, 10, 20, 30 and in dimension n = 80, 100, 120, 140, which are provided in the full version.

[Fig. 2 shows two panels, "KS Test on D_i(2,100)'s" and "KS Test on D_i(20,100)'s"; both axes range over the index i from 10 to 90.]

Fig. 2. Kolmogorov-Smirnov test with significance level 0.05 on all D_i(β, 100)'s calculated from 5000 samples of random 100-dimensional BKZ bases with blocksize β = 2, 20 respectively. A black pixel at position (i, j) marks the fact that the pair of distributions D_i(β, 100) and D_j(β, 100) passed the Kolmogorov-Smirnov test, i.e. the two distributions are close. Plots for β = 10, 30 look similar to that for β = 20, and are provided in the full version.

3.2 Conclusion

From the experiments above, we allow ourselves the following conclusion.

Experimental Claim 1. There exist two functions h, t : N → N, such that, for all n, β ∈ N, and when n ≥ h(β) + t(β) + 2:

1. When i ≤ h(β), D_i(β, n) depends on i and β only: D_i(β, n) = D_i^(h)(β)
2. When h(β) < i < n − t(β), D_i(β, n) depends on β only: D_i(β, n) = D^(b)(β)
3. When i ≥ n − t(β), D_i(β, n) depends on n − i and β only: D_i(β, n) = D_{n−i}^(t)(β)

Remark 1. We only make this claim for bases that have been fully BKZ-reduced. Indeed, as we shall see later, we obtained experimental clues that this claim would not hold when the early-abort strategy is applied. More precisely, the head and tail phenomenon is getting stronger as we apply more tours (see Fig. [ ]).

From now on, we may omit the index i when speaking of the distribution of r_i, implicitly implying that the only indices considered are such that h(β) < i < n − t(β). The random variable r depends on the blocksize β only, hence we introduce two functions of β, e(β) and v(β), to denote the expectation and variance of r respectively. Also, we denote by r_i^(h) (resp. r_{n−i}^(t)) the r_i inside the head (resp. tail), and by e_i^(h)(β) and v_i^(h)(β) (resp. e_{n−i}^(t)(β) and v_{n−i}^(t)(β)) the expectation and variance of r_i^(h) (resp. r_{n−i}^(t)).

We conclude by a statement on the impacts of the head and tail on the logarithmic average root Hermite factor:

Corollary 1. For a fixed blocksize β, and as the dimension n grows, it holds that

    E(ln(rhf(B))) = (1/2) e(β) + d(β)/n + O(1/n²),    (2)

where d(β) = Σ_{i≤h} e_i^(h)(β) − (h + 1/2) e(β).

Corollary 1 indicates that the impacts on the average root Hermite factor from the head and tail are decreasing. In particular, the tail has a very little effect, O(1/n²), on the average root Hermite factor. The impact of the head, d(β)/n, which hasn't been quantified in earlier work, is, perhaps surprisingly, asymptotically larger. We include the proof of Corollary 1 in [ ].
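Since the full proof is deferred, the following display sketches (our own reconstruction, assuming Claim 1 and Eq. (1)) how the head, body and tail terms produce Eq. (2); it is meant as a reading aid rather than a formal argument.

```latex
% Sketch of Corollary 1 from Eq. (1), assuming Claim 1 (a reconstruction).
\begin{align*}
E(\ln \mathrm{rhf}(B))
  &= \frac{1}{n^2}\sum_{i=1}^{n-1}(n-i)\,E(r_i)\\
  &= \frac{1}{n^2}\Big(\underbrace{\sum_{i\le h}(n-i)\,e^{(h)}_i(\beta)}_{\text{head}}
     + \underbrace{e(\beta)\sum_{h<i<n-t}(n-i)}_{\text{body}}
     + \underbrace{\sum_{i\ge n-t}(n-i)\,e^{(t)}_{n-i}(\beta)}_{\text{tail}\,=\,O(1)}\Big)\\
  &= \frac{1}{n^2}\Big(n\sum_{i\le h}e^{(h)}_i(\beta)
     + e(\beta)\Big(\tfrac{n^2}{2}-\tfrac{n}{2}-hn\Big) + O(1)\Big)\\
  &= \frac{e(\beta)}{2}
     + \frac{1}{n}\Big(\sum_{i\le h}e^{(h)}_i(\beta)-\big(h+\tfrac12\big)e(\beta)\Big)
     + O\!\Big(\frac{1}{n^2}\Big).
\end{align*}
```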

  

Below, Figs. 3 and 4 provide experimental measures of e(β) and d(β) from 5000 random 100-dimensional BKZ_β-reduced bases. We note that the lengths of the head and tail seem about the maximum of 15 and β. Thus we simply set h(β) = t(β) = max(15, β), which affects the measure of e(β) and d(β) little. For the average e(2) ≈ 0.043 we recover the experimental root Hermite factor of LLL: rhf(B) = exp(0.043/2) ≈ 1.022, compatible with many other experiments [ ].
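The estimators used here are straightforward; a numpy sketch of our own (the array R stands for the collected r-vectors, one row per reduced basis) could look as follows, with h(β) = t(β) = max(15, β) as above.

```python
# Estimate e(beta), v(beta) and d(beta) from an array R of shape
# (num_samples, n-1), where R[k, i-1] = r_i of the k-th BKZ_beta-reduced basis.
import numpy as np

def head_body_stats(R, beta):
    R = np.asarray(R, dtype=float)
    n = R.shape[1] + 1
    h = t = max(15, beta)
    body = R[:, h:n - 1 - t]                 # body indices h+1 .. n-t-1 (1-based)
    e_beta = float(body.mean())              # e(beta)
    v_beta = float(body.var(ddof=1))         # v(beta)
    e_head = R[:, :h].mean(axis=0)           # e_i^(h)(beta) for i = 1..h
    d_beta = float(e_head.sum() - (h + 0.5) * e_beta)
    return e_beta, v_beta, d_beta

# Toy usage with synthetic data in place of real BKZ experiments.
rng = np.random.default_rng(2)
R = rng.normal(0.05, 0.01, size=(5000, 99))
print(head_body_stats(R, beta=20))
```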

  

We also measured⁴ e(β) from 20 random 180-dimensional BKZ 2.0 bases with bounded tour number C · (n²/β²)(log n + log log(max_i ‖b_i‖ / vol(L)^{1/n})). It shows that the qualitative behavior of BKZ 2.0 is different from that of full BKZ, not only the quantitative one: there is a bump⁵ in the curve of e(β) when β ∈ [22, 30]. Considering that the success probability for the SVP enumeration was set to 95%, the only viable explanation for this phenomenon in our BKZ 2.0 experiments is the early-abort strategy: the shape of the basis is not so close to the fixed point.

⁴ For BKZ 2.0, the distributions of the r_i's inside the body may not be identical, thus we just calculate the mean of those r_i's as a measure of e(β).
⁵ Yet the quality of the basis does not decrease with β in this range, as the bump on e(β) [...]

Fig. 3. Experimental measure of e(β).

Fig. 4. Experimental measure of d(β).

4 Local Correlations and Global Variance

In the previous section, we have classified the r_i's and established a connection between the average of the root Hermite factor and the function e(β). Now we are to report on the (co-)variance of the r_i's. Figure 5 shows the experimental measure of local variances, i.e. variances of the r_i's inside the body, but it is not enough to deduce the global variance, i.e. the variance of the root Hermite factor. We still need to understand more statistics, namely the covariances among these r_i's. Our experiments indicate that local correlations, i.e. correlations between r_i and r_{i+1}, are negative and other correlations seem to be zero. Moreover, we confirm the tempting hypothesis that local correlations inside the body are all equal and independent of the dimension n.

Based on these observations, we then express the variance of the logarithm of the root Hermite factor for fixed β and increasing n asymptotically, and quantify the self-stability of LLL and BKZ algorithms.

Fig. 5. Experimental measure of v(β).

4.1 Experiments

Let r = (r_1, ..., r_{n−1}) be the random vector formed by the random variables r_i's. We profile the covariance matrices Cov(r) for 100-dimensional lattices with BKZ reduction of different blocksizes in Fig. 6. The diagonal elements in the covariance matrix correspond to the variances of the r_i's, which we have studied before. Thus we set all diagonal elements to 0 to enhance contrast. We discover that the elements on the second diagonals, i.e. the Cov(r_i, r_{i+1})'s, are significantly negative and other elements seem very close to 0. We call the Cov(r_i, r_{i+1})'s local covariances.
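The covariance profiling, and the self-stabilization effect mentioned in the introduction, can be reproduced with a few numpy calls (a sketch of our own; R again stands for the collected r-vectors):

```python
# Covariance matrix of r = (r_1, ..., r_{n-1}) estimated from samples, plus a
# check of the self-stabilizing effect: with negative local covariances, the
# variance of the weighted sum in Eq. (1) is smaller than the sum of the
# individually weighted variances.
import numpy as np

def covariance_profile(R):
    """R has shape (num_samples, n-1); returns the (n-1) x (n-1) matrix Cov(r)."""
    return np.cov(np.asarray(R, dtype=float), rowvar=False)

def global_vs_local_variance(R):
    R = np.asarray(R, dtype=float)
    n = R.shape[1] + 1
    weights = n - np.arange(1, n)                  # (n - i) as in Eq. (1)
    log_rhf = (R @ weights) / n**2                 # ln rhf(B) per sample
    global_var = float(np.var(log_rhf, ddof=1))
    local_sum = float(np.sum(weights**2 * np.var(R, axis=0, ddof=1)) / n**4)
    return global_var, local_sum                   # expect global_var < local_sum

# Toy usage with synthetic, locally anti-correlated data.
rng = np.random.default_rng(3)
noise = rng.normal(0, 0.01, size=(5000, 100))
R = 0.05 + noise[:, :-1] - noise[:, 1:]            # consecutive r_i anti-correlated
cov = covariance_profile(R)
print(np.round(cov[10, 10:13], 6), global_vs_local_variance(R))
```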

  

Fig. 6. Covariance matrices of r. Experimental values measured over 5000 samples of random 100-dimensional BKZ bases with blocksize β = 2, 20. The pixel at coordinates (i, j) corresponds to the covariance between r_i and r_j. Plots for β = 10, 30 look similar to that for β = 20, and are provided in the full version.

We then plot measured local covariances in Fig. 7. Comparing these curves for various dimensions n, we notice that the head and tail parts almost coincide, and the local covariances inside the body seem to depend on β only; we will denote this value by c(β). We also plot the curves of the Cov(r_i, r_{i+2})'s in Fig. 7 and note that the curves for the Cov(r_i, r_{i+2})'s are horizontal with a value about 0. For other Cov(r_i, r_{i+d})'s with larger d, the curves virtually overlap those for the Cov(r_i, r_{i+2})'s. For readability, larger values of d are not plotted. One thing to be noted is that the case of blocksize β = 2 is an exception. On one hand, the head and tail of the local covariances in a BKZ_2 basis bend in the opposite directions, unlike for larger β. In particular, the Cov(r_i, r_{i+2})'s in a BKZ_2 basis are not so close to 0, but are nevertheless significantly smaller than the local covariances Cov(r_i, r_{i+1}). That indicates some differences between LLL and BKZ.

Also, we calculate the average of the (n − 2 max(15, β)) middle local covariances as an approximation of c(β) for different n and plot the evolution of c(β) in Fig. 8. The curves for different dimensions seem to coincide, which provides further evidence that the local covariances inside the body indeed don't depend on n. To determine the minimum of c(β), we ran a batch of BKZ with β = 2, 3, 4, 5, 6 separately. We note that c(β) increases with β except for c(3) < c(2), which is another difference between LLL and BKZ.

Fig. 7. Cov(r_i, r_{i+1}) and Cov(r_i, r_{i+2}) as a function of i. Experimental values measured over 5000 samples of random n-dimensional BKZ bases for n = 100, 140. The blue curves denote the Cov(r_i, r_{i+1})'s and the red curves denote the Cov(r_i, r_{i+2})'s. For the same dimension n, the markers in the two curves are identical. First halves are left-aligned while last halves {Cov(r_i, r_{i+1})}_{i>(n−2)/2} and {Cov(r_i, r_{i+2})}_{i>(n−3)/2} are right-aligned so as to highlight heads and tails. Dashed lines mark indices β and n − β − 2. Plots look similar in blocksize β = 6, 10, 20, 30 and in dimension n = 80, 100, 120, 140, which are provided in the full version.

  

Remark 2. To obtain a precise measure of covariances, we need enough samples, and thus the extended experimental measure of c(β) is not given. Nevertheless, it seems that, after a certain number of tours, local covariances of BKZ 2.0 bases still tend to be negative while other covariances tend to zero.

Fig. 8. Experimental measure of the evolution of c(β), calculated from 5000 samples of random BKZ bases in different dimensions n respectively.

Fig. 9. Experimental measure of (v(β) + 2c(β))/3. The data point for β = 2, (v(2) + 2c(2))/3 ≈ 0.00045, was clipped out, being 10 times larger than all other values.

4.2 Conclusion

From the above experimental observations, we now arrive at the following conclusion.

Experimental Claim 2. Let h and t be the two functions defined in Claim 1.

1. When |i − j| > 1, r_i and r_j are not correlated: Cov(r_i, r_j) = 0
2. When |i − j| = 1, r_i and r_j are negatively correlated: Cov(r_i, r_j) < 0. More specifically:
   – When i ≤ h(β), Cov(r_i, r_{i+1}) depends on i and β only: Cov(r_i, r_{i+1}) = c_i^(h)(β)
   – When h(β) < i < n − t(β), Cov(r_i, r_{i+1}) depends on β only: Cov(r_i, r_{i+1}) = c(β)
   – When i ≥ n − t(β), Cov(r_i, r_{i+1}) depends on n − i and β only: Cov(r_i, r_{i+1}) = c_{n−i}^(t)(β)