
LNCS 9440

Stefan Mangard
Patrick Schaumont (Eds.)

Radio Frequency
Identification
Security and Privacy Issues
11th International Workshop, RFIDsec 2015
New York, NY, USA, June 23–24, 2015
Revised Selected Papers


Lecture Notes in Computer Science
Commenced Publication in 1973
Founding and Former Series Editors:
Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen

Editorial Board
David Hutchison

Lancaster University, Lancaster, UK
Takeo Kanade
Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler
University of Surrey, Guildford, UK
Jon M. Kleinberg
Cornell University, Ithaca, NY, USA
Friedemann Mattern
ETH Zürich, Zürich, Switzerland
John C. Mitchell
Stanford University, Stanford, CA, USA
Moni Naor
Weizmann Institute of Science, Rehovot, Israel
C. Pandu Rangan
Indian Institute of Technology, Madras, India
Bernhard Steffen
TU Dortmund University, Dortmund, Germany
Demetri Terzopoulos
University of California, Los Angeles, CA, USA
Doug Tygar
University of California, Berkeley, CA, USA
Gerhard Weikum
Max Planck Institute for Informatics, Saarbrücken, Germany

9440

More information about this series at http://www.springer.com/series/7410

Stefan Mangard · Patrick Schaumont (Eds.)

Radio Frequency
Identification
Security and Privacy Issues
11th International Workshop, RFIDsec 2015
New York, NY, USA, June 23–24, 2015
Revised Selected Papers


Editors

Stefan Mangard
Graz University of Technology
Graz, Austria

Patrick Schaumont
Virginia Tech
Blacksburg, VA, USA

ISSN 0302-9743                     ISSN 1611-3349 (electronic)
Lecture Notes in Computer Science
ISBN 978-3-319-24836-3             ISBN 978-3-319-24837-0 (eBook)
DOI 10.1007/978-3-319-24837-0
Library of Congress Control Number: 2015949479
LNCS Sublibrary: SL4 – Security and Cryptology
Springer Cham Heidelberg New York Dordrecht London
© Springer International Publishing Switzerland 2015

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the
material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage
and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known
or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book
are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the
editors give a warranty, express or implied, with respect to the material contained herein or for any errors or
omissions that may have been made.
Printed on acid-free paper

Springer International Publishing AG Switzerland is part of Springer Science+Business Media
(www.springer.com)

Preface

Welcome to the 11th International Workshop on RFID Security (RFIDsec), held
at the NYIT Auditorium on Broadway in New York City, NY, USA, during June
22–24, 2015. RFIDsec has been the main venue for new results in RFID system
and implementation security for over a decade. The event has travelled to many
different places all over the world. Driven by the positive experience of 2014, we
co-located RFIDsec for the second time with WiSec, and we created a tightly
integrated program that allowed attendees to hop from one event to the other.
RFIDsec 2015 assembled four sessions with exciting results in RFID security. The four sessions comprise ten regular papers, which were selected by the Program Committee from 23 submissions after a rigorous review process. The review procedure included an individual review phase followed by a collective online discussion among the 22 members of the Technical Program Committee and the program chairs. The Program Committee members were supported by 30 external reviewers.
Besides the ten accepted papers, the workshop also included a shared keynote
with WiSec, an RFIDsec keynote talk and two tutorials. The shared keynote

talk was given by Srdjan Capkun from ETH Zurich. In his talk “Why We
Should Build a Secure Positioning Infrastructure,” Dr. Capkun discussed the
main challenges in designing and building new positioning infrastructures that
offer security and privacy by design. The keynote talk of RFIDsec 2015, “Hardware Trojans for ASICs and FPGAs,” was given by Christof Paar from Ruhr University Bochum and the University of Massachusetts Amherst. Hardware Trojans are a topic of rapidly increasing importance in modern complex digital electronics, especially in components that must be trustworthy. Dr. Paar shared his insights and latest research results on this problem. The first tutorial, on Contactless Payments, was given by Joeri de Ruiter from the University of Birmingham. The contactless environment comes with very challenging problems in power provisioning and communication. Dr. de Ruiter explained the unique solutions that are enabled by sound cryptographic engineering. The second tutorial, on Attribute-Based Credentials (ABCs) in Theory and Practice, was given by Gergely Alpár from Radboud University. As explained by Dr. Alpár, ABCs address the important issue of user authentication and authorization while at the same time ensuring the user’s privacy.
The program chairs would like to thank the general chairs, Paolo Gasti and
Ramesh Karri, for their support in hosting RFIDsec 2015 in the Big Apple. We
are also greatly indebted to the 22 members of the Technical Program Committee, who provided valuable technical insights in the assembly of the program.
Finally, we would like to thank the RFIDsec Steering Committee members for


their guidance in setting up the 11th edition of this exciting workshop series,
and for opening the path to the next decade of RFIDsec.
July 2015

Stefan Mangard
Patrick Schaumont

Organization

Program Committee
Frederik Armknecht            Universität Mannheim, Germany
Gildas Avoine                 INSA Rennes, France and UCL, Belgium
Lejla Batina                  Radboud University Nijmegen, The Netherlands
Srdjan Capkun                 ETH Zurich, Switzerland
Rajat Subhra Chakraborty      IIT Kharagpur, India
Thomas Eisenbarth             Worcester Polytechnic Institute, USA
Martin Feldhofer              NXP Semiconductors, Germany
Aurélien Francillon           Eurecom, France
Gerhard Hancke                City University of Hong Kong, SAR China
Julio Hernandez               Kent University, UK
Daniel E. Holcomb             University of Michigan, USA
Michael Hutter                Cryptography Research Inc., USA
Stefan Mangard                TU Graz, Austria
Daisuke Moriyama              NICT, Japan
Christof Paar                 Ruhr University Bochum, Germany and
                              University of Massachusetts, USA
Axel Poschmann                NXP Semiconductors, Germany
Bart Preneel                  KU Leuven, Belgium
Matt Robshaw                  Impinj, USA
Kazuo Sakiyama                The University of Electro-Communications, Japan
Nitesh Saxena                 University of Alabama at Birmingham, USA
Patrick Schaumont             Virginia Tech, USA
Erich Wenger                  TU Graz, Austria
Avishai Wool                  Tel Aviv University, Israel

Additional Reviewers
Becker, Georg T.
Bilgin, Begül
Boehl, Florian
Budhathoki, Parshuram
Chen, Cong
Chmielewski, Lukasz
Delvaux, Jeroen
Forte, Domenic
Gross, Hannes
Herbst, Christoph
Hermans, Jens
Hiller, Matthias
Joye, Marc
Komano, Yuichi
Korak, Thomas
Krasnova, Anna
Lamberger, Mario
Li, Yang
Mohamed, Manar
Mukhopadhyay, Dibya
Nikova, Svetla
Phuong Ha, Nguyen
Saha, Sayandeep
Sahoo, Durga Prasad
Schlösser, Alexander
Shirvanian, Maliheh
Shrestha, Babins
Taha, Mostafa
Wang, Chao
Yamamoto, Dai

Contents

PUFs and Applications

Security Evaluation and Enhancement of Bistable Ring PUFs ...............   3
    Xiaolin Xu, Ulrich Rührmair, Daniel E. Holcomb, and Wayne Burleson

On the Scaling of Machine Learning Attacks on PUFs with Application
to Noise Bifurcation .....................................................  17
    Johannes Tobisch and Georg T. Becker

ReSC: RFID-Enabled Supply Chain Management and Traceability for
Network Devices ..........................................................  32
    Kun Yang, Domenic Forte, and Mark Tehranipoor

Side-Channels and Countermeasures

Side-Channel Assisted Modeling Attacks on Feed-Forward Arbiter
PUFs Using Silicon Data ..................................................  53
    Raghavan Kumar and Wayne Burleson

Sharing is Caring—On the Protection of Arithmetic Logic Units against
Passive Physical Attacks .................................................  68
    Hannes Gross

RFID System Attacks

Practical Experiences on NFC Relay Attacks with Android:
Virtual Pickpocketing Revisited ..........................................  87
    José Vila and Ricardo J. Rodríguez

Algebraic Cryptanalysis and RFID Authentication .......................... 104
    Carlos Cid, Loïc Ferreira, Gordon Procter, and Matt J.B. Robshaw

An RFID Skimming Gate Using Higher Harmonics ............................. 122
    René Habraken, Peter Dolron, Erik Poll, and Joeri de Ruiter

Efficient Implementations

Efficient E-cash with Attributes on MULTOS Smartcards .................... 141
    Gesine Hinterwälder, Felix Riek, and Christof Paar

Efficient and Secure Delegation of Group Exponentiation to a Single
Server ................................................................... 156
    Bren Cavallo, Giovanni Di Crescenzo, Delaram Kahrobaei,
    and Vladimir Shpilrain

Author Index ............................................................. 175

Part I

PUFs and Applications

Security Evaluation and Enhancement
of Bistable Ring PUFs

Xiaolin Xu¹, Ulrich Rührmair², Daniel E. Holcomb¹, and Wayne Burleson¹

¹ ECE Department, University of Massachusetts Amherst
{xiaolinx,dholcomb,burleson}@umass.edu
² Horst Görtz Institute for IT-Security, Ruhr Universität Bochum
ruehrmair@ilo.de

Abstract. The Bistable Ring (BR) Physical Unclonable Function (PUF)
is a newly proposed hardware security primitive in the PUF family. In
this work, we comprehensively evaluate its resilience against Machine
Learning (ML) modeling attacks. Based on the success of ML attacks,
we propose XOR strategies to enhance the security of BR PUFs. Our results show that the XOR BR PUF with more than four parallel BR PUFs
is able to resist the ML modeling methods in this work. We also evaluate
the other PUF metrics of reliability, uniqueness and uniformity, and find
that the XOR function is also effective in improving the uniformity of
BR PUFs.

1 Introduction

In the last ten years, physical unclonable functions (PUFs) have established themselves as an alternative to conventional security approaches [4][11]. In a nutshell, a PUF is a disordered, at least partly randomly structured physical system. Due to its random structure, which is caused by uncontrollable, small-scale manufacturing variations, it is physically unclonable, i.e., no two specimens can be produced that are physically exactly identical. This limitation applies to both the original manufacturer and to other, potentially adversarial, parties. All PUFs have one basic functionality in common, namely some challenge-response mechanism: They can be triggered or excited by signals that are commonly denoted as challenges C_i. Upon excitation by a challenge C_i, they react by producing a response R(C_i) that depends on their internal disorder and usually also on the challenge itself. The tuples (C_i, R(C_i)) are called the challenge-response pairs (CRPs) of the PUF.
Over the years, different variants or types of PUFs have emerged (see [12] for an overview). They all share the above features of being a disordered structure, possessing physical unclonability, and exhibiting some form of challenge-response mechanism. However, their other security features, together with their intended applications and associated attack scenarios, differ notably. This makes it useful in scientific works to explicitly distinguish these types from each other [12].


The two main PUF-types are often denoted as weak and strong PUFs. Weak
PUFs possess essentially a single, fixed challenge C, as for example in the case
of SRAM PUFs. They are mainly used for internal key derivation in security
hardware. The underlying security assumption is that attackers must not be
able to access the internal response of the PUF, for example by reading out the
power-up state of the SRAM PUF [5][6]. In contrast, strong PUFs have a very large number of possible challenges, too many to read
out all corresponding CRPs in feasible time. Their challenge-response mechanism
should be complex in the sense that it is hard to derive unknown CRPs from
a set of known CRPs. In particular, strong PUFs should not allow “modeling
attacks”, in which an adversary collects a subset of CRPs, uses them to train a
machine learning (ML) algorithm, and later employs the model produced by the
ML algorithm to predict unknown CRPs.
Strong PUFs are usually employed with a publicly accessible CRP interface,
i.e., anyone holding the PUF or the PUF embedding hardware can apply challenges and read out responses. The lack of access restriction mechanisms on
strong PUFs is therefore a key difference from weak PUFs. In recent years, strong
PUFs have turned out to be a very versatile cryptographic and security primitive:
First of all, by using a fixed set of challenges, they can be employed for internal
key derivation, just like weak PUFs. But they can do more: They can also implement a host of advanced cryptographic protocols, ranging from identification
[11][9] to key exchange [20][1] to oblivious transfer [1].
Their many possible applications make the construction of secure strong
PUFs a worthwhile and rewarding research target. Unfortunately, it is a nontrivial one, too: A large number of powerful attacks on some first-generation
electrical strong PUFs have been published recently. They include the above
mentioned modeling attacks [13][14]; side channel attacks [15]; and also optical
characterization techniques [19]. Most of these attacks target the first electrical
strong PUF, the so-called Arbiter PUF [4][11] and variants thereof, for example
XOR Arbiter PUFs and Feed-Forward Arbiter PUFs. For this reason, alternative
silicon architectures have been proposed in recent years. One such alternative is
the “Bistable Ring PUF” (BR PUF) [2][3], which was designed to have a more
complex internal response-generating mechanism in hopes of making ML attacks
harder.
At TRUST 2014, Hesselbarth and Schuster [16] succeeded in revealing some
basic vulnerabilities of the BR PUF against ML techniques. They showed that
BR PUFs can be attacked by a single layer artificial neural network (ANN) with
prediction errors between close to 0% and 20%, varying from hardware instance
to instance. Among the 20 FPGA instances examined, 14 could be predicted
with errors less than 10%. This puts close limits on the security usability of the
BR PUF on FPGAs. Schuster and Hesselbarth subsequently proposed a small
design improvement, so-called twisted BR PUFs (TBR PUFs), which they conjectured to possess better security. Using their own ANN algorithm, they were
also able to attack TBR PUFs again. However, the TBR PUF shows higher average prediction errors with respect to ANNs, indicating that TBR PUFs have


some improvements over plain BR PUFs. It remained open in the work of Schuster and Hesselbarth whether said improvement is sufficient for secure practical
usage of the TBR PUF.
Our Contributions. In this paper, we re-examine the security of the BR PUF
and TBR PUF closely, again using FPGA implementations. We thereby make
the following new contributions:
– We implement 8 instances of the BR PUF and the TBR PUF on FPGA. To
achieve a more comprehensive ML analysis, we implement bit lengths other than the usual 64, namely 32, 128, and 256. These bit lengths had never before been implemented in silicon and studied in the literature.
– We develop the first analytical models for the BR PUF and the TBR PUF.
– We use these new models in order to apply, for the first time, support vector
machines (SVMs) to the BR PUF and the TBR PUF. This more powerful
ML tool drastically improves the ML prediction rates relative to previous
work. None of our 8 instances has a prediction error exceeding 5%. This
result answers the open question of Hesselbarth and Schuster whether certain
individual and particularly hard instances of the BR PUF or TBR PUF could
be used securely in practice: In our findings, this was not the case.
– We then propose a new, efficient strategy for the secure practical use of the
BR PUF: namely the employment of l instances in parallel, whose l outputs
are XORed at the end in order to produce one single-bit PUF-response.
We call the resulting structure XOR BR PUF. We show that even for small
values of l such as 4, this structure cannot be machine learned by our current
techniques, while it is still sufficiently stable in practice. This work is the first
study of XOR BR PUFs in the literature.
Organization of This Paper. This paper is organized as follows. Section 2 discusses our attacks on the BR PUF, while Section 3 details our attacks on the
TBR PUF. Section 4 suggests the use of XOR BR PUFs and evaluates their
performance improvement. Section 5 concludes the paper.

2 SVM Attack on BR PUFs

2.1 Mechanism of BR PUF

A ring oscillator (RO) is a device composed of an odd number of logically inverting delay elements. Since the output of the last element is always the logical “NOT” of the first input, an RO will continually oscillate. Derived from the non-settling structure of the RO, the BR PUF is a ring comprising an even number of inverting cells. Such a design behaves like a memory cell and will fall into one of two possible stable states: either “101010...” or “010101...”.
As depicted in Fig. 1, a 64-bit BR PUF is composed of 64 stages, where each stage has two inverting delay elements (NOR gates, for example). A challenge vector C = {c_1, c_2, ..., c_n} selects the NOR gates used in each bistable ring


configuration by providing values to the MUX and DEMUX gates of the stages. Since each NOR gate has unique process variation, each different challenge vector creates a unique bistable ring configuration, and in total 2^64 different configurations can be created. A common “RESET” signal is added to each stage to establish a known “all-0” state before letting the ring stabilize to produce its response. Evaluation of the response begins when “RESET” is released and the ring starts to oscillate through the selected NOR gates. Once a stable state is reached, the outputs of any two adjacent stages will be logical complements of each other, either “10” or “01”. The choice between the two possible stable states of the ring depends on noise and on the process variation of the NOR gates used in the ring configuration. Any interconnection node between two stages can be used as an output port, and in this work we use the port at the half bit-length position to read out the response (Fig. 1).

Fig. 1. Schematic of a single BR-PUF with 64 stages

2.2 Intuition for Modeling BR PUF

The intuition for our modeling attack is that the response can be predicted based on
a summation of weights. Such an additive model is commonly used for predicting the responses of Arbiter PUFs, where the weights represent stage delays [8].
An additive model has also been used for predicting the resolution of metastability [7], with weights representing the strength with which different cells pull
toward a particular outcome. We similarly use an additive model in this work; the
weight we associate with each gate represents the difference between its pull-up
strength and pull-down strength. The weights are summed across all gates used
by a challenge to find the overall favored response for that challenge; a positive
sum indicates a preference for the positive response. Note that the summation
of weights requires negative and positive polarities because the positive overall
response is favored by the pull-up strength of even stages and the pull-down
strength of odd stages.

2.3 Model


Let the difference between the pull-up and pull-down strength of the top NOR gate in the i-th stage be represented by t_i, and of the bottom NOR gate in the i-th stage by b_i. The even stages will contribute toward the positive response with strength t_i (or b_i if the challenge bit selects the bottom NOR gate of the stage), and the odd stages will contribute toward the positive response with strength −t_i (or −b_i). To account more generally for even-ness or odd-ness, the strength of the i-th stage toward the positive response can be written as (−1)^i t_i if the challenge bit is 0, and (−1)^i b_i if the challenge bit is 1. For a given 64-bit challenge, the total strength pulling toward the positive response is the summation of 64 of the t_i and b_i weights.


Fig. 2. Prediction rate of SVM modeling attacks on BR PUFs. When the length of the BR PUF increases, more CRPs are required to train the model to achieve 95% prediction. Note that the scales of the x-axes are not consistent across the subfigures.

For convenience we define α_i and β_i (Eq. 1) such that α_i + β_i = (−1)^i t_i and α_i − β_i = (−1)^i b_i. This notation allows the pull of the i-th stage toward the positive response to be written generally as α_i + c_i β_i with challenge bit c_i ∈ {−1, 1}. The summed strength toward the positive response for any challenge vector C is R(C), as shown in Eq. 2. According to our formulation, if the t_i and b_i weights were known explicitly, then the response could be predicted by the sign of R(C) (Eq. 2).

    α_i = (−1)^i (t_i + b_i)/2        β_i = (−1)^i (t_i − b_i)/2        (1)

    R(C) = sgn( Σ_{i=0}^{n−1} (α_i + c_i β_i) )        (2)
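To make the model concrete, the following minimal sketch (our illustration, not code from the paper) simulates a BR PUF instance according to Eq. 1 and Eq. 2. The Gaussian distribution of the gate strengths, the seeds, and all identifiers are assumptions made purely for illustration.

```python
import numpy as np

def make_br_puf(n=64, seed=0):
    """Draw per-stage strength differences t_i, b_i and derive alpha_i, beta_i (Eq. 1)."""
    rng = np.random.default_rng(seed)
    t = rng.normal(0, 1, n)            # top NOR gate: pull-up minus pull-down strength
    b = rng.normal(0, 1, n)            # bottom NOR gate strength difference
    sign = (-1.0) ** np.arange(n)      # (-1)^i accounts for even/odd stage polarity
    return sign * (t + b) / 2, sign * (t - b) / 2

def br_response(alpha, beta, c):
    """Response for a challenge c in {-1, +1}^n, following Eq. 2."""
    return int(np.sign(np.sum(alpha + c * beta)))

alpha, beta = make_br_puf()
challenge = np.where(np.random.default_rng(1).integers(0, 2, 64) == 0, -1, 1)
print(br_response(alpha, beta, challenge))   # prints -1 or +1
```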

Given that the weights are not known, and since there are only two possible responses of a BR PUF, the model above lets us convert the response prediction of BR PUFs into a classification problem. Support Vector Machines (SVMs) are powerful learning tools that perform binary classification of data; the classification is realized by constructing a separating hyperplane. As the known input and output data sets are processed, the separating surface is adjusted to minimize the error of the predicted values.
Known CRPs are used to train the classifier to predict responses from challenges. In the SVM formulation, first note that the α_i terms in Eq. 2 can be discarded because they are constant for a given PUF instance across all challenges. Only the c_i β_i terms remain, and from these terms the response must be predicted. Given a set of challenges and associated responses, the training examples therefore have as their feature vector an applied challenge C_j ∈ {−1, 1}^n and as their label the observed response R(C_j) ∈ {−1, 1} to challenge C_j. Note that the β_i terms do not appear explicitly in the SVM formulation, as the classifier simply works to find the maximum-margin hyperplane that separates the challenges into two classes according to their responses.
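As a sketch of how such a classifier could be trained in practice, the following example (ours, using scikit-learn as an assumed tool, not the authors' setup) fits a linear-kernel SVM to CRPs generated from the additive model above; the CRP counts are arbitrary illustrative choices.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n, n_crps = 64, 2000
alpha = ((-1.0) ** np.arange(n)) * rng.normal(0, 1, n)   # simulated PUF weights (Eq. 1)
beta = ((-1.0) ** np.arange(n)) * rng.normal(0, 1, n)
challenges = np.where(rng.integers(0, 2, (n_crps, n)) == 0, -1, 1)
responses = np.sign(np.sum(alpha + challenges * beta, axis=1)).astype(int)  # Eq. 2

clf = SVC(kernel="linear")
clf.fit(challenges[:1500], responses[:1500])            # training CRPs
print(clf.score(challenges[1500:], responses[1500:]))   # prediction rate on held-out CRPs
```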
2.4 SVM Attacks on BR PUFs

To explore the effectiveness of SVM attacks, we implemented 8 BR PUFs with lengths of 32, 64, 128, and 256 bits on a Xilinx Spartan-6 FPGA board and collected 1,000,000 CRPs from each of them (to decrease the impact of measurement noise, each final CRP is obtained by majority voting over 11 repeated measurements). The SVM attacks use a linear kernel to mimic the operation of single BR PUFs (note that to attack XOR BR PUFs, an SVM with a polynomial kernel is used, where the polynomial order is set to the number of XORed BR PUFs). The results of the SVM attacks are shown in Fig. 2. To demonstrate the relationship between prediction rate and the number of CRPs used for different PUF lengths, we use 95% as a threshold prediction rate. It is clear that as the size of the BR PUF increases, more CRPs are needed to build its ML model. However, for every tested size of BR PUF, the SVM modeling attack succeeds in predicting responses. This means a single BR PUF is not secure, even if it has a large number of stages.
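The majority-voting step can be illustrated with a short sketch (ours; the 5% flip probability and array sizes are assumed placeholders, not measured values):

```python
import numpy as np

def majority_vote(measurements):
    """measurements: shape (repeats, n_crps), entries in {-1, +1}; repeats is odd."""
    return np.sign(np.sum(measurements, axis=0)).astype(int)

rng = np.random.default_rng(3)
true_resp = np.where(rng.random(1000) < 0.5, -1, 1)   # hypothetical noise-free responses
flips = rng.random((11, 1000)) < 0.05                 # assume a 5% flip probability per reading
readings = np.where(flips, -true_resp, true_resp)
voted = majority_vote(readings)
print(np.mean(voted == true_resp))                    # close to 1.0 after voting
```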

3 Twisted BR PUFs Attack

3.1 Model of TBR PUFs

Uniformity, or fractional Hamming weight, is an important feature of PUFs. A good PUF that produces an equal number of 0 and 1 responses will have a uniformity of around 0.50. However, the uniformity of the CRPs of BR PUF implementations has been found to be biased in previous work [16] (see also Sec. 4.3 in this work). To compensate for this drawback, the TBR PUF was proposed in [16]. Compared to the BR PUF, the TBR PUF has a more compact design; when applying a challenge vector to the TBR PUF, all of its 2n inverting elements are used in the ring. By contrast, in the standard BR PUF, half of the NOR gates in the circuit are unused for any given challenge. Taking the TBR PUF in Fig. 3 as an example, using challenge bit c_0 = 1 or c_0 = 0 will change the location of the stage-0 elements D^0_0 and D^1_0 in the ring, but in either case both elements will contribute in some polarity to the response.

Fig. 3. Schematic of a single TBR-PUF with 64 stages

From Sec. 2, we know that a ring composed of an even number of inverting elements will stabilize according to the summed pull-up and pull-down strengths of each gate. The TBR PUF uses the pull-up and pull-down strengths of all inverting components in the circuit, but only the polarity (i.e., even-ness or odd-ness) of each element toward the overall ring response changes with the challenge vector. According to the interconnections of the 64-bit TBR PUF, the two NOR gates in the i-th stage are the i-th and (127−i)-th elements in the overall ring. Because one element is odd in the overall ring and one is even, the pull-up strengths of the top and bottom gates in each stage work against each other. Therefore, the overall contribution toward the positive response is β_i (Eq. 3), or −β_i if the i-th challenge bit is negative. The overall sum of weights pulling toward the positive response for challenge C is therefore R(C) (Eq. 4). Eq. 2 and Eq. 4 differ only in the physical meaning of β_i, and in Eq. 2 having an additional offset term of Σ_i α_i, but in terms of ML modeling they are the same model. Therefore, the complexity of ML attacks on the TBR PUF is the same as the complexity of attacking the BR PUF.

    β_i = (−1)^i (t_i − b_i)        (3)

    R(C) = sgn( Σ_{i=0}^{n−1} c_i β_i )        (4)
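For comparison with the BR PUF sketch above, a minimal illustration of the TBR model of Eq. 3 and Eq. 4 (ours, under the same assumed Gaussian strengths) looks as follows:

```python
import numpy as np

def tbr_response(beta, c):
    """Response for challenge c in {-1, +1}^n, following Eq. 4."""
    return int(np.sign(np.sum(c * beta)))

rng = np.random.default_rng(4)
t, b = rng.normal(0, 1, 64), rng.normal(0, 1, 64)   # assumed per-stage strengths
beta = ((-1.0) ** np.arange(64)) * (t - b)          # Eq. 3
challenge = np.where(rng.integers(0, 2, 64) == 0, -1, 1)
print(tbr_response(beta, challenge))
```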


Fig. 4. Prediction rate of SVM modeling attacks on TBR PUFs of different bit lengths. As in Fig. 2, a larger PUF requires more CRPs to achieve the same prediction rate.

3.2 SVM Attacks on TBR PUFs

Given that we have shown the model of a TBR PUF to be the same as that of a BR PUF, we can again train an SVM classifier to predict its responses to each challenge. Eight TBR PUFs are implemented on Spartan-6 FPGA boards, and 1,000,000 CRPs are collected from each of them. For each CRP, majority voting over 11 repeated measurements of the response to a challenge is performed in order to reduce the impact of noise.
Following the experiment in Sec. 2.4, SVM attacks with a polynomial kernel are applied to TBR PUFs of 32-, 64-, 128-, and 256-bit length (the polynomial order of the model is set to the XOR complexity). The results in Fig. 4 show that the modeling attacks succeed in modeling all tested sizes of the TBR PUF, with prediction rates no lower than 95%.

4 XORing BR PUFs to Enhance the Security

It is possible using ML to model the behavior of a single strong PUF like the
Arbiter PUF [8]. To thwart modeling attacks, an XOR function was proposed
as a way to enhance security of Arbiter PUFs [17] and lightweight PUFs [10]. In
an XOR PUF, the same challenge vector is applied to l single PUFs in parallel,
and their outputs are XORed together to form a one-bit response. XORing is
an efficient method to enhance the security of strong PUFs, because the XOR
function obfuscates the CRPs of the individual PUFs [17]. Inspired by this idea,
we propose to use XOR strategies on BR PUFs to improve their resistance to
modeling attacks.
4.1 Review of Existing Attacks on XOR PUFs

The addition of XOR functions increases the resistance of strong PUFs against modeling attacks. Both the training time and the number of CRPs required to train a model increase exponentially with the number of XORed PUFs [13]. Attacking XOR-based Arbiter PUFs with more than five parallel Arbiter PUFs was reported to be difficult with pure ML modeling [14]. Later works devised a more powerful class of hybrid attacks that combine side channels with ML [15,21]. Power
and timing side-channels allow information about the sub-responses (i.e. the responses of single PUFs before the final XOR) of XORed PUFs to be extracted
and used to improve the prediction rate of ML models. In light of these hybrid
attacks, if the side-channel information of BR PUFs can also be measured, then
the use of XOR will not be an effective way to enhance the security.
4.2 SVM Modeling Attacks on XORed BR PUFs

Adopting the model of the single BR PUF from Sec. 2, for an XOR BR PUF employing l BR PUFs, the XORed response to a challenge C can be described by Eq. 5. Note the similarity between this formula and the formula of the single BR PUF (Eq. 2). The only modification is that now each stage has l different α and β terms, one for each of the PUFs. The overall response is determined by how many of the individual PUFs have a positive response.

    R(C) = sgn( ∏_{j=0}^{l−1} Σ_{i=0}^{n−1} (α_{i,j} + c_i β_{i,j}) )        (5)
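A minimal sketch (ours, not the authors' code) of the XOR BR PUF model of Eq. 5 and of the polynomial-kernel SVM attack mentioned above is given below; the instance sizes, CRP counts, and kernel degree are illustrative assumptions and make no claim of reproducing the measurements in Table 1.

```python
import numpy as np
from sklearn.svm import SVC

def make_xor_br_puf(l=2, n=64, seed=5):
    """Draw l independent BR PUF instances (alpha, beta weights of shape (l, n))."""
    rng = np.random.default_rng(seed)
    sign = (-1.0) ** np.arange(n)
    t = rng.normal(0, 1, (l, n))
    b = rng.normal(0, 1, (l, n))
    return sign * (t + b) / 2, sign * (t - b) / 2

def xor_response(alphas, betas, c):
    # sgn of the product of the l per-PUF sums implements the XOR of Eq. 5
    return int(np.sign(np.prod(np.sum(alphas + c * betas, axis=1))))

alphas, betas = make_xor_br_puf(l=2)
rng = np.random.default_rng(6)
C = np.where(rng.integers(0, 2, (3000, 64)) == 0, -1, 1)
R = np.array([xor_response(alphas, betas, c) for c in C])

clf = SVC(kernel="poly", degree=2)   # polynomial degree chosen to match the XOR complexity
clf.fit(C[:2500], R[:2500])
print(clf.score(C[2500:], R[2500:]))
```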

In applying the SVM to XOR BR PUFs, we find that we can only break the complexity up to 2 XORs for 128-bit length and 3 XORs for 64-bit length. The number of CRPs and the runtime¹ of the SVM modeling attacks against XOR BR PUFs are listed in Tab. 1. We can surmise that XOR BR PUFs with 4 or more XORed outputs are beyond the reach of current SVM modeling attacks.

¹ The computer used has a common Intel 3630QM quad-core processor.


Table 1. The run times and numbers of CRPs required for SVM attacks on XOR BR PUFs of different sizes. Prediction rates around 50% imply that the SVM model cannot break XOR BR PUFs of that complexity. *Note that the training time depends heavily on the computing system used.

No. of XORs   Bit Length   CRPs (×10³)   Predict. Rate   Training* Time
2             32           0.8           95%             3 sec
2             64           4             95%             10 sec
2             128          18            95%             6 mins
2             256          —             50.8%           —
3             32           1.2           95%             5 sec
3             64           7.2           95%             24 sec
3             128          —             50.1%           —
3             256          —             50.1%           —
4             32           —             50.1%           —
4             64           —             50.3%           —
4             128          —             49.8%           —
4             256          —             50.1%           —

4.3 Performance Evaluation of XORed BR PUF

While the basic motivation of XORing BR PUF is to resist modeling attacks,
the impact of the XOR on other key metrics must also be considered. In this
section, we evaluate the impact of the XOR function on reliability, uniqueness,
and uniformity.
Reliability. Reliability is the ratio of consistent CRPs when a PUF is operating in different environment conditions such as temperature. To evaluate the
reliability of XOR BR PUFs, 8 BR PUFs are measured across different temperatures between 27◦ C and 75◦ C, with a 4◦ C step, using a Sun Electronics EC12
Environmental Chamber [18] to control the temperature (Fig. 5a). Reliability
is evaluated by comparing CRPs collected at 27◦ C to CRPs collected at other
temperatures. For a XOR PUF, any unstable sub-response can cause the XORed
response to be unreliable. Therefore, the reliability at any temperature will decrease with the number of PUFs that are XORed together (Fig. 5b). According
to the first BR PUF paper [3], an effective solution to solve this problem is employing CRPs that settle down quickly, since those CRPs are less sensitive to
noise.
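As an illustration of the reliability metric, the following sketch (ours; the 3% flip rate is an assumed placeholder, not a measured value) compares reference responses at 27 °C with responses re-measured at another temperature:

```python
import numpy as np

def reliability(ref_responses, responses_at_T):
    """Fraction of consistent responses; both arrays hold {-1, +1} entries."""
    return np.mean(ref_responses == responses_at_T)

rng = np.random.default_rng(7)
ref = np.where(rng.random(10_000) < 0.5, -1, 1)   # hypothetical reference CRPs at 27 C
flips = rng.random(10_000) < 0.03                 # assume 3% of bits flip at the new temperature
noisy = np.where(flips, -ref, ref)
print(f"reliability: {reliability(ref, noisy):.2%}")   # close to 97%
```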


Fig. 5. Evaluating reliability across different temperatures: (a) experimental platform; (b) reliability across different temperatures for XOR sizes 1 to 8. Because the reliability of each single BR PUF decreases with temperature, the reliability of the XOR BR PUF degrades significantly.


Fig. 6. The between-class and within-class Hamming distances of XOR BR PUFs. Even when more BR PUFs are XORed together, the within-class and between-class Hamming distances can still be differentiated. The results are based on 8 BR PUFs; thus there is only one 8-XOR BR PUF and no uniqueness value is computed for it.

Uniqueness. Uniqueness is the capability of a PUF to distinguish itself from other instances. It is quantified as the fraction of responses that differ across instances when the same challenges are applied. Thus, for m PUF instances, a total of m(m−1)/2 uniqueness values are obtained. To better explore the uniqueness of the XOR BR PUF, we compute its within-class Hamming distance (responses flipped by noise, here temperature noise) and its between-class Hamming distance (response differences between instances); these results are depicted in Fig. 6.
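The uniqueness evaluation can be sketched as follows (our illustration; the random responses stand in for measured data):

```python
import numpy as np
from itertools import combinations

def fractional_hd(r1, r2):
    """Fraction of differing responses; r1, r2 hold {-1, +1} entries."""
    return np.mean(r1 != r2)

# Hypothetical responses of m = 8 instances to the same 10,000 challenges.
rng = np.random.default_rng(8)
responses = np.where(rng.random((8, 10_000)) < 0.5, -1, 1)

between = [fractional_hd(responses[i], responses[j])
           for i, j in combinations(range(8), 2)]       # m*(m-1)/2 = 28 values
print(f"mean between-class HD: {np.mean(between):.3f}")
```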
Uniformity. Uniformity denotes the average response of a PUF; the ideal value is 0.5, meaning equal numbers of “1” and “0” responses. A PUF whose uniformity is far from 0.5 has less response entropy and is easier to attack with modeling [22]. In our experiment, the uniformity of a single BR PUF is found to be highly biased, a phenomenon also reported in [16][22]. The XOR function helps to remove this bias. To validate the uniformity improvement from the XOR function, we collected the CRPs of eight 64-bit BR PUFs on FPGA (without CRP majority voting). It is found that some PUF instances show extreme bias, but XORing more single BR PUFs together decreases the response bias (Fig. 7).
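The uniformity improvement from XORing can also be seen analytically: assuming the l BR PUFs are independent (an assumption we make for illustration) and each outputs 1 with probability p, the XOR output is 1 with probability (1 − (1 − 2p)^l)/2, which approaches 0.5 as l grows. A short sketch:

```python
# Uniformity of an XOR of l independent biased bits (piling-up-lemma argument).
def xor_uniformity(p, l):
    """p: probability that a single PUF responds 1; l: number of XORed PUFs."""
    return (1 - (1 - 2 * p) ** l) / 2

for l in range(1, 6):
    print(l, round(xor_uniformity(0.8, l), 3))   # 0.8, 0.32, 0.608, 0.435, 0.539
```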

Fig. 7. The response uniformity of a single BR PUF (represented by “XOR=1” in the plot) is highly biased. When more BR PUFs are XORed together, the uniformity moves closer to 0.5.

5 Conclusion and Future Work

In this work, we studied two relatively new PUF variants: the BR PUF and its derived architecture, the TBR PUF. Their resilience against ML modeling attacks is explored, and it is shown that their responses can be predicted with a success rate exceeding 95% using reasonable runtime and fewer than 10k CRPs in all cases. Our work confirms that neither a single BR PUF nor a single TBR PUF is secure.


To strengthen the BR PUF against modeling attacks, we proposed and evaluated an XOR BR PUF variant. It is found that XORing 4 or more BR PUFs together produces behavior that is beyond the modeling capability of current SVM ML attacks, and also improves other key PUF metrics such as uniformity. Future work will explore the effectiveness of other modeling attacks, such as Evolution Strategies and Logistic Regression methods.

References
1. Brzuska, C., Fischlin, M., Schröder, H., Katzenbeisser, S.: Physically uncloneable functions in the universal composition framework. In: Rogaway, P. (ed.) CRYPTO 2011. LNCS, vol. 6841, pp. 51–70. Springer, Heidelberg (2011)
2. Chen, Q., Csaba, G., Lugli, P., Schlichtmann, U., Rührmair, U.: The bistable ring PUF: a new architecture for strong physical unclonable functions. In: 2011 IEEE International Symposium on Hardware-Oriented Security and Trust (HOST), pp. 134–141. IEEE (2011)
3. Chen, Q., Csaba, G., Lugli, P., Schlichtmann, U., Rührmair, U.: Characterization of the bistable ring PUF. In: 2012 Design, Automation & Test in Europe Conference & Exhibition (DATE), pp. 1459–1462. IEEE (2012)
4. Gassend, B., Clarke, D., van Dijk, M., Devadas, S.: Silicon physical random functions. In: Proceedings of the 9th ACM Conference on Computer and Communications Security, pp. 148–160. ACM (2002)
5. Guajardo, J., Kumar, S.S., Schrijen, G.-J., Tuyls, P.: FPGA intrinsic PUFs and their use for IP protection. In: Paillier, P., Verbauwhede, I. (eds.) CHES 2007. LNCS, vol. 4727, pp. 63–80. Springer, Heidelberg (2007)
6. Holcomb, D.E., Burleson, W.P., Fu, K.: Power-up SRAM state as an identifying fingerprint and source of true random numbers. IEEE Transactions on Computers (2009)
7. Holcomb, D.E., Fu, K.: Bitline PUF: building native challenge-response PUF capability into any SRAM. In: Batina, L., Robshaw, M. (eds.) CHES 2014. LNCS, vol. 8731, pp. 510–526. Springer, Heidelberg (2014)
8. Lim, D.: Extracting secret keys from integrated circuits. MSc thesis (2004)
9. Lofstrom, K., Daasch, W.R., Taylor, D.: IC identification circuit using device mismatch. In: 2000 IEEE International Solid-State Circuits Conference, Digest of Technical Papers, ISSCC 2000, pp. 372–373. IEEE (2000)
10. Majzoobi, M., Koushanfar, F., Potkonjak, M.: Lightweight secure PUFs. In: Proceedings of the 2008 IEEE/ACM International Conference on Computer-Aided Design, pp. 670–673. IEEE Press (2008)
11. Pappu, R., Recht, B., Taylor, J., Gershenfeld, N.: Physical one-way functions. Science 297(5589), 2026–2030 (2002)
12. Rührmair, U., Holcomb, D.E.: PUFs at a glance. In: Proceedings of the Conference on Design, Automation & Test in Europe, p. 347. European Design and Automation Association (2014)
13. Rührmair, U., Sehnke, F., Sölter, J., Dror, G., Devadas, S., Schmidhuber, J.: Modeling attacks on physical unclonable functions. In: Proceedings of the 17th ACM Conference on Computer and Communications Security, pp. 237–249. ACM (2010)
14. Rührmair, U., Sölter, J., Sehnke, F., Xu, X., Mahmoud, A., Stoyanova, V., Dror, G., Schmidhuber, J., Burleson, W., Devadas, S.: PUF modeling attacks on simulated and silicon data. IEEE Transactions on Information Forensics and Security (2013)
15. Rührmair, U., Xu, X., Sölter, J., Mahmoud, A., Majzoobi, M., Koushanfar, F., Burleson, W.: Efficient power and timing side channels for physical unclonable functions. In: Batina, L., Robshaw, M. (eds.) CHES 2014. LNCS, vol. 8731, pp. 476–492. Springer, Heidelberg (2014)
16. Schuster, D., Hesselbarth, R.: Evaluation of bistable ring PUFs using single layer neural networks. In: Holz, T., Ioannidis, S. (eds.) TRUST 2014. LNCS, vol. 8564, pp. 101–109. Springer, Heidelberg (2014)
17. Suh, G.E., Devadas, S.: Physical unclonable functions for device authentication and secret key generation. In: Proceedings of the 44th Annual Design Automation Conference, DAC 2007, pp. 9–14. ACM (2007)
18. Sun Electronic Systems, Inc.: Model EC1X Environmental Chamber User and Repair Manual (2011)
19. Tajik, S., Dietz, E., Frohmann, S., Seifert, J.-P., Nedospasov, D., Helfmeier, C., Boit, C., Dittrich, H.: Physical characterization of arbiter PUFs. In: Batina, L., Robshaw, M. (eds.) CHES 2014. LNCS, vol. 8731, pp. 493–509. Springer, Heidelberg (2014)
20. van Dijk, M.E.: System and method of reliable forward secret key sharing with physical random functions. US Patent 7,653,197, January 26, 2010
21. Xu, X., Burleson, W.: Hybrid side-channel/machine-learning attacks on PUFs: a new threat. In: Proceedings of the Conference on Design, Automation & Test in Europe, p. 349. European Design and Automation Association (2014)
22. Yamamoto, D., Takenaka, M., Sakiyama, K., Torii, N.: Security evaluation of bistable ring PUFs on FPGAs using differential and linear analysis. In: Federated Conference on Computer Science and Information Systems (FedCSIS), pp. 911–918. IEEE (2014)

On the Scaling of Machine Learning Attacks
on PUFs with Application to Noise Bifurcation
Johannes Tobisch and Georg T. Becker
Horst Görtz Institute for IT Security
Ruhr-University Bochum, Germany

Abstract. Physical Unclonable Functions (PUFs) are seen as a promising alternative to traditional cryptographic algorithms for secure and
lightweight device authentication. However, most strong PUF proposals can be attacked using machine learning algorithms in which a precise software model of the PUF is determined. One of the most popular
strong PUFs is the XOR Arbiter PUF. In this paper, we examine the
machine learning resistance of the XOR Arbiter PUF by replicating the
attack by Rührmair et al. from CCS 2010. Using a more efficient implementation, we are able to confirm the predicted exponential increase in the number of responses needed as the number of XORs increases. However, our results show that the machine learning performance does not only depend on the PUF design and the number of used response bits, but also on the
specific PUF instance under attack. This is an important observation for
machine learning attacks on PUFs in general. This instance-dependent
behavior makes it difficult to determine precise lower bounds of the required number of challenge and response pairs (CRPs) and hence such
numbers should always be treated with caution.
Furthermore, we examine a machine learning countermeasure called
noise bifurcation that was recently introduced at HOST 2014. In noise
bifurcation, the machine learning resistance of XOR Arbiter PUFs is
increased at the cost of using more responses during the authentication
process. However, we show that noise bifurcation has a much smaller
impact on the machine learning resistance than the results from HOST
2014 suggest.
Keywords: Physical Unclonable Function, Machine Learning Attacks,
Arbiter PUF, Noise-Bifurcation.

1 Introduction

Physical Unclonable Functions have gained a lot of research interest in the last
years. In Physical Unclonable Functions (PUFs), the inherent process variations
of a computer chip are used to give every chip a unique and unclonable identity.
The first electrical PUF, the Arbiter PUF, was introduced by Gassend et al. in
2002 [1]. A challenge is sent to the Arbiter PUF and each PUF instance answers
with a unique response. This way the Arbiter PUF can be used in a simple


authentication protocol. The secret information in a PUF is the process variation of the chip, not a digital key. In theory it should not be possible to
create an exact clone of a PUF and hence it should be “unclonable”. However,
for existing electrical strong PUFs such as the Arbiter PUF it is possible to
model the PUF in software. The parameters needed for such a software model
can be approximated using machine learning techniques given enough challenge
and response pairs (CRPs). Different variants of the Arbiter PUF have been
proposed to increase the resistance against machine learning attacks, e.g., the
Feed-Forward Arbiter PUF [2], the controlled PUF [3], the XOR Arbiter PUF [4]
and the Lightweight PUF [5]. From these solutions the XOR Arbiter PUF has
gained the most attention. In an XOR Arbiter PUF, the responses of several
Arbiter PUFs are XORed to increase the machine learning complexity. However,
while there are many papers discussing PUFs, relatively few directly examine
machine learning algorithms against PUFs. The most prominent work on machine learning attacks on PUFs is the 2010 CCS paper by Rührmair et al. [6].
They demonstrated that XOR Arbiter PUFs can be attacked using a Logistic
Regression based machine learning algorithm. The initial results were based on
simulated data, but follow-up work using silicon data confirmed the simulation
results [7].
While their results showed that XOR Arbiter PUFs can be attacked using
Logistic Regression, they also showed that the required number of responses
grows exponentially with the number of XORs. Hence, in theory it would be
possible to build an XOR Arbiter PUF that is resistant against logistic regression
machine learning attacks if the PUF parameters are large enough. However, large
PUF parameters contradict the lightweight nature of PUFs and each additional
XOR increases the unreliability of the PUF. Hence, the number of XORs that
can be used is limited in practice. In addition to different PUF constructs, several
PUF protocols based on Arbiter PUFs have been proposed, such as the reverse
fuzzy extractor protocol [8] or the Slender PUF protocol [9]. A good analysis of
different PUF protocols and their weaknesses can be found in [10].
Recently, at HOST 2014, a new machine learning countermeasure called noise
bifurcation was introduced by Yu et al. [11]. In noise bifurcation it is assumed
that during the set-up phase a verifier builds a precise software model of the
PUF that can be used to authenticate the PUF. At the cost of increasing the
number of needed response bits, the machine learning resistance is increased
without adding additional XORs, unreliability or area overhead, which makes
this technique very interesting.
1.1 Our Contribution

In this paper we use an efficient implementation of the Logistic Regression (LR)
machine learning algorithm to replicate the results presented by Rührmair et al.
for PUFs with a larger number of XORs. Our results suggest that the exponential increase in CRPs needed for a LR machine learning attack claimed by
Rührmair et al. holds for larger PUF instances. However, our results show that
the success probability of a machine learning attack does not only depend on
the PUF parameters and the number of responses. Instead, our results suggest
that some PUF instances are easier to attack using machine learning than others. This makes it very difficult to make precise statements about the machine
learning resistance of a PUF, since some PUF instances might be less resistant
against machine learning attacks than others. This is an important observation
that should be kept in mind when evaluating results of machine learning attacks.
Furthermore, we used our efficient implementation to examine the machine
learning complexity of the noise bifurcation technique. Our results are in contrast
to the results presented in [11]. While the machine learning complexity increases
when noise bifurcation is used, we show that the increase due to noise bifurcation
is significantly less than the results in [11] suggest.

2 Background

The main idea behind Arbiter PUFs is that the performance of every CMOS
gate is slightly different, even if the exact same layout and mask are used. Hence,
every CMOS gate will have slightly different timing characteristics. In the Arbiter
PUF, these timing differences are used to generate a device-specific response for
a given challenge. The idea is to apply a race signal to two identical signal paths.
Which of the two paths is faster is determined only by the process variations,
which are unique for every device. Both paths end in an arbiter, which generates
the response by determining which of the two signals arrived first. In order to
make the paths depend on a challenge, both signals are sent through a set of
delay stages. Each of these delay stages can be configured to take two different
paths. If the challenge bit for a delay stage equals zero, both signals are passed
to their corresponding outputs. If the challenge bit of a delay stage is set to one,
both signals are crossed. Figure 1 shows an example Arbiter PUF consisting of
four stages. Each of the delay stages is realized in practice using two multiplexers
as depicted in Figure 2.

Fig. 1. Schematic of a four-stage Arbiter PUF
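For readers who want to experiment, the following minimal sketch (ours, not from the paper) simulates such a race with the commonly used additive delay-difference model; the Gaussian stage delays and the variable names are assumptions made for illustration.

```python
import numpy as np

def make_arbiter_puf(n=64, seed=0):
    """Per-stage delay differences for the straight (c=0) and crossed (c=1) configurations."""
    rng = np.random.default_rng(seed)
    return rng.normal(0, 1, (n, 2))

def arbiter_response(stage_deltas, challenge):
    """challenge: array of n bits in {0, 1}; returns the arbiter decision as -1 or +1."""
    diff = 0.0   # accumulated delay difference between top and bottom signal
    for i, c in enumerate(challenge):
        if c == 0:
            diff += stage_deltas[i, 0]         # straight: difference simply accumulates
        else:
            diff = -diff + stage_deltas[i, 1]  # crossed: top/bottom swap, so the sign flips
    return 1 if diff > 0 else -1

puf = make_arbiter_puf(n=4)   # a four-stage example matching Fig. 1
print(arbiter_response(puf, np.random.default_rng(1).integers(0, 2, 4)))
```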

2.1 Modeling an Arbiter PUF

If the internal parameters of the PUF are known it is possible to build a precise
software model of an Arbiter PUF. Each delay stage adds a delay to the two


Fig. 2. Schematic of a single stage of an Arbiter PUF for both possible cases, ci = 1
and ci = 0

race signals. Since we are only interested in the relative delays between the top
and bottom signal, we do not need the delay added to each signal but only the
added delay difference between the top and bottom signal. A dela